Exclusive: Seeing eye to AI with smart video


Brian Mallari, Director of Product Marketing, Smart Video, Western Digital, reveals how smart video solutions are shaping the edge

The evolution of smart video technology continues at pace. As in many other industries, the onset of the pandemic expedited timelines and the artificial intelligence (AI) video world is set to continue its rapid evolution in 2021.

As AI and 4K rise in adoption on smart video cameras, higher video resolutions are driving the demand for more data to be stored on-camera. There are many more types of cameras in use today, such as body cameras, dashboard cameras and new Internet of Things (IoT) devices and sensors. Video data is now so rich that it can be analysed to deduce a great deal of valuable information in real time, rather than only after the event.

However, while we often limit real-time video analytics to the context of security or surveillance activity, the market is expanding through a growing number of use cases. These include medical applications, sports analysis, smart factories, traffic management and even agricultural drones.

According to Omdia Research, over 116 million network cameras were shipped in the professional surveillance market in 2019, with the capability to generate almost 9 petabytes of video each and every day. That’s a lot of cameras and even more data. As video demand and the use of AI increases, these numbers will only continue to grow and it’s forcing the creation of new edge architectures.

The view from the computer

If there’s one thing we’ve learned about AI in the last few years, it’s that it excels at completing narrow tasks. Computer vision isn’t necessarily about teaching computers to see the world as we humans do. Instead, it’s about allowing computers to capture, analyse and learn about the human world. Value can be found in AI when computer intelligence capabilities, such as object recognition, movement detection and tracking, are used within the right applications. It makes sense that the amalgamation of video, artificial intelligence and sensor data is a hotbed for new services across various industries.

A new generation of “smart” use cases has developed. For example, in “Smart Cities” cameras and AI analyse traffic patterns and adjust traffic lights to improve vehicle flow, reduce congestion and pollution and increase pedestrian safety. “Smart factories” can leverage AI to detect flaws or deviations in the production line in real-time, adjusting to reduce errors and implement effective quality assurance measures. As a result, costs can be greatly reduced through automation and earlier fault detection.

Architecting the edge

When it comes to smart video, we are now seeing a trend of data being processed at the edge and there is one main reason for this change in preference: latency.

Latency is an important consideration when trying to carry out real-time pattern recognition. It’s very difficult to act on camera data – 4K security video recorded 24/7 – if it has to travel back to a centralised data centre hundreds of miles away. This analysis needs to happen quickly in order to be timely and applicable to dynamic situations, such as public safety. By storing relevant data at the edge, AI inferencing can happen much faster.

The evolution of smart video is also happening alongside other technological and data infrastructure advancements, such as 5G. As these technologies come together, they are impacting how we architect the edge. And, they are driving a demand for specialised storage. Here are some of the biggest trends we’re seeing:

1. Greater volume means greater quality

The volume and variety of cameras continues to increase with each new advancement bringing new capabilities. Having more cameras allows more to be seen and captured. This could mean having more coverage or more angles. It also means more real-time video can be captured and used to train AI. 

Alongside this strength in depth, quality continues to improve with higher resolutions (4K video and above). This is important because video is rich media. The more detailed the video, the more insights can be extracted from it and the more effective AI algorithms can become. In addition, new cameras transmit not just a main video stream but also additional low-bitrate streams used for low-bandwidth monitoring and AI pattern matching.

Whether for traffic, security or manufacturing, many of these smart cameras operate 24/7, 365 days a year, which poses a unique challenge. Storage technology has to be able to keep up. Storage has evolved to deliver the high transfer and write speeds needed to ensure high-quality video capture, but on-camera storage technology that can deliver longevity and reliability has become even more critical.
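As a rough illustration of why 24/7 recording stresses on-camera storage, a back-of-envelope calculation shows how quickly the data adds up. The 8 Mbps average bitrate assumed here for a 4K H.265 stream is an illustrative figure, not one cited in the article:

```python
# Back-of-envelope storage estimate for a single camera recording 24/7.
# The bitrate below is an assumed figure for a 4K H.265 security stream.
bitrate_mbps = 8                       # assumed average bitrate (megabits/s)
seconds_per_day = 24 * 60 * 60         # camera runs around the clock

# megabits -> megabytes (/8) -> gigabytes (/1000, decimal units)
gb_per_day = bitrate_mbps * seconds_per_day / 8 / 1000

print(f"{gb_per_day:.1f} GB per day")            # ~86.4 GB/day
print(f"{gb_per_day * 365 / 1000:.1f} TB per year")  # ~31.5 TB/year
```

At these rates, a single always-on camera fills tens of terabytes a year before any AI metadata is added, which is why endurance-rated drives matter for this workload.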

Our own WD Purple drives are an example of this adaptation to ‘always on’ systems, having been engineered specifically for the extreme demands of high temperature, 24/7 security video systems.

2. Real world context is vital to understanding endpoints

The recent changes in our world are shifting the needs of video security and cameras are having to adapt with them. For example, the pandemic has given rise to thermal cameras that help identify people with a fever and we have also seen explosion-proof cameras implemented in high environmental risk areas.

Whether for business, scientific research or simply our personal lives – we’re seeing new types of cameras that can capture new types of data. With the potential benefits of utilising and analysing this data, the importance of reliable data storage has never been more apparent.

As we design storage technology, we must take the context into consideration, such as location and form factor. We need to think of the accessibility of cameras (or lack thereof) – are they atop a tall building? Maybe amid a remote jungle? Cameras in such locations might also need to withstand extreme temperature variations. All of these possibilities need to be taken into account to ensure long-lasting, reliable continuous recording of critical video data.

3. Chipsets are improving AI capability

Improved compute capabilities in cameras mean processing happens at the device level, enabling real-time decisions at the edge. We’re seeing new chipsets arrive for cameras that deliver improved AI capability, and more advanced chipsets add deep neural network processing for on-camera deep learning analytics. AI keeps getting smarter and more capable.

According to Omdia Research, shipments of cameras with embedded deep-learning analytics capability will grow at a rate of 64% annually in the next three years. This reflects not only the innovation happening within cameras but also the expectation that deep learning – which requires large video data sets to be effective – will happen on-camera too, driving the need for more primary on-camera storage. 

Even in solutions that employ standard security cameras, AI-enhanced chipsets and discrete GPUs are being utilised in network video recorders (NVRs), video analytics appliances and edge gateways to enable advanced AI functions and deep learning analytics. NVR firmware and OS architecture are evolving to add such capabilities to mainstream recorders, and so storage must also evolve to handle the changing workload that results.

One of the biggest changes is that there is a need to go beyond just storing single and multiple camera streams. Today, metadata from real-time AI and reference data for pattern matching needs to be stored as well. This has greatly altered the workload dynamic and how we tailor storage devices for new types of workloads.
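To illustrate what that changed workload looks like, here is a minimal sketch of the kind of per-event metadata record an on-camera AI pipeline might write alongside its video streams. The field names and structure are assumptions for the example, not a real camera or recorder API:

```python
import json
from dataclasses import dataclass, asdict

# Illustrative only: a per-detection metadata record of the kind that
# real-time AI might store next to the video it was derived from.
# All field names here are hypothetical, chosen for the example.
@dataclass
class DetectionEvent:
    stream_id: str      # which camera stream the event came from
    timestamp_ms: int   # position in the recording, milliseconds
    label: str          # detected class, e.g. "person", "vehicle"
    confidence: float   # model score, 0.0 to 1.0
    bbox: tuple         # (x, y, width, height) in pixels

event = DetectionEvent(
    stream_id="cam-01-main",
    timestamp_ms=1_612_345_678_000,
    label="vehicle",
    confidence=0.92,
    bbox=(120, 80, 300, 180),
)

# Small, frequent JSON records like this produce a very different write
# pattern from large sequential video writes.
print(json.dumps(asdict(event)))
```

The point of the sketch is the workload shape: many small, frequent metadata writes interleaved with large sequential video writes, which is exactly the mix storage devices now have to be tuned for.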

4. The cloud must support deep learning tech

Even as camera and recorder chipsets gain more compute power, most of the video analytics and deep learning in today’s smart video solutions is still done in discrete video analytics appliances or in the cloud. That’s where big data resides. Broader Internet of Things (IoT) applications that use sensor data beyond video are also tapping into the power of the deep learning cloud to create more effective, smarter AI.

To support these new AI workloads, the cloud has gone through some transformation. Neural network processors within the cloud have adopted the use of massive GPU clusters or custom FPGAs. They are being fed thousands of hours of training video and petabytes of data. These workloads depend on the high capacities of enterprise-class hard drives (HDDs) – which can already support 20TB per drive – and high-performance enterprise SSD flash devices, platforms or arrays.

5. Reliance on the Network

Wired and wireless internet have enabled the scalability and ease of installation that has fuelled the explosive adoption of security cameras – but only where LAN and WAN infrastructures already exist.

5G removes many barriers to deployment, allowing expansive options for placement and ease of installation of cameras at a metropolitan level. With this ease of deployment comes greater scalability, which drives new use cases and further advancements in both camera and cloud design.

For example, cameras can now be stand-alone, with direct connectivity to a centralised cloud – they are no longer dependent on a local network. Emerging 5G-ready cameras are being designed to load and run third-party applications that can bring broader capabilities. Really, the sky’s the limit on smart video innovation brought about by 5G.

Yet with greater autonomy, these cameras will need even more dynamic storage. They will require new combinations of endurance, capacity, performance and power efficiency to be able to optimally handle the variability of new app-driven functions and Western Digital is designing solutions with these new capabilities in mind.

Paving the way for the edge storage revolution

Smart security today is about utilising AI and edge computing to deliver an always-on, high-resolution video provision that can help keep people safe 24/7. These trends increase the demands and importance of monitoring, which means the requirements of the supporting data infrastructure must rise to match, including the ability to proactively manage that infrastructure to help ensure reliable operation.

It’s a brave new world for smart video and it is as complex as it is exciting. Architectural changes are being made to handle new workloads and prepare for even more dynamic capabilities at the edge and at endpoints. At the same time, deep learning analytics continue to evolve in the back end and the cloud.

Understanding workload changes – whether at the camera, recorder or cloud level – is critical to ensuring that new architectural changes are augmented by continuous innovation in storage technology. That’s why Western Digital continues to optimise, tune and if needed, even re-architect our storage firmware and interface technology to guarantee storage technology can keep up with growing demands of smart video and offer new capabilities.

Our strong history of innovation, going back to the origins of both hard disk drive technology and flash technology, helps businesses to be at the cutting edge of data storage. We work closely with market and innovation leaders in smart video to develop a deep understanding of current and future advanced AI-enabled architectures, as well as how the changes in video and metadata stream management affect the workload on storage devices.

For more information, visit: www.westerndigital.com

This article was originally published in the February 2021 edition of International Security Journal. Pick up your FREE digital copy here
