Edge analytics is not a new concept, but until now it has been limited by technological constraints. That is changing as processing power improves, the Internet of Things (IoT) becomes increasingly prolific, and deep learning algorithms meld into video analytics. Edge analytics is now growing not only in application, but in accuracy and efficiency too. By shifting analytics processing from backend servers into the cameras themselves, edge analytics provides end users with more accurate, more efficient real-time data analysis.
The edge analytics market is forecast to grow from US$1.9 billion in 2016 to nearly $8 billion by 2021, at a CAGR of 32.6 percent, according to a report by Research and Markets. The report cited the advent of IoT, the proliferation of massive amounts of data through connected devices, and increased adoption of edge analytics due to its scalability and cost optimization as growth drivers for the market. An increase in government initiatives around IoT and cloud technologies is also expected to drive higher adoption of edge analytics.
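As a quick sanity check on those figures (using only the endpoints the report cites), the compound annual growth rate works out roughly as follows:

```python
# Rough check of the cited forecast: US$1.9 billion (2016) growing at a 32.6% CAGR to 2021.
start_value = 1.9        # 2016 market size, US$ billions (from the report)
cagr = 0.326             # compound annual growth rate cited by the report
years = 2021 - 2016      # forecast horizon in years

end_value = start_value * (1 + cagr) ** years
print(f"Implied 2021 market size: ${end_value:.2f} billion")  # ~$7.8 billion, i.e. "nearly $8 billion"
```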
Regionally, APAC is estimated to grow at the highest CAGR during the forecast period. The report attributes this growth to “a tremendous demand for deployment of edge analytics technologies with advanced analytics solutions that provides comprehensive support and specialty in real-time access of data, facilitating enterprises to comprehend business scenario, and take quicker and faster decisions.”
Growth aside, it is the new developments in edge analytics that are driving adoption in smart cameras. Developments in deep learning algorithms and processing power are not only driving growth, but improving results.
Embracing deep learning
Video analytics as a whole has improved tremendously since its inception, and analytics has slowly been moving toward the edge. Now that it’s there, the security industry is applying the latest technology developments to edge analytics in smart cameras.
The most important of these developments is deep learning, a branch of machine learning. By applying deep learning algorithms to edge analytics for security, the results are more efficient and more accurate. In fact, a report by IDC predicts that by 2019, all effective IoT efforts will merge streaming analytics with machine learning trained on data lakes, marts and content stores, accelerated by discrete or integrated processors.
Industry players have also pointed to this trend. “The latest intelligence development is deep learning. The traditional intelligent algorithm still has many flaws, especially in accuracy and false alarm,” said Shell Guo, Product Marketing Manager at Hikvision Digital Technology.
Daniel Chau, Overseas Marketing Director at Dahua Technology, stated, “Edge analysis in security surveillance cameras is currently undergoing a transition from traditional smart algorithms to deep learning algorithms.” He explained, “Using the latest artificial intelligence technology, algorithms integrated into frontend cameras can extract data from human, vehicle and object targets for recognition and incident detection purposes.”
Remi El-Ouazzane, VP of New Technology Group and GM of Movidius at Intel, echoed a similar opinion: “From our point of view, the biggest developments are twofold: introduction of deep neural networks as a way to significantly improve accuracy of video analytics algorithms, and the increasing trend of moving the compute required for these algorithms to the edge through dedicated vision processing units (VPUs).”
By applying deep learning algorithms to edge analytics, devices could be taught to better filter unnecessary data, which in a world of big data could save time, money and manpower.
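As a rough illustration of what that kind of on-camera filtering might look like, the sketch below forwards only detections above a confidence threshold; the detector and the send_to_backend function are hypothetical stand-ins, not any vendor’s actual API.

```python
import time

CONFIDENCE_THRESHOLD = 0.8  # only report detections the model is reasonably sure about

def detect_people(frame):
    """Placeholder for an on-camera deep learning detector.
    Returns a list of (confidence, bounding_box) tuples."""
    return []  # a real camera would run its neural network here

def send_to_backend(event):
    """Placeholder for transmitting a small metadata event instead of raw video."""
    print("event:", event)

def process_frame(frame):
    # Filter at the edge: discard low-confidence detections locally so that only
    # meaningful events consume network bandwidth and operator attention.
    for confidence, box in detect_people(frame):
        if confidence >= CONFIDENCE_THRESHOLD:
            send_to_backend({
                "type": "person_detected",
                "confidence": round(confidence, 2),
                "box": box,
                "timestamp": time.time(),
            })
```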
Challenges with processing power answered
One of the major challenges of edge analytics is processing power. Successful application of edge analytics hinges on whether or not a low-power, high-performance computing platform can be integrated into the camera, according to Chau.
El-Ouazzane stressed the importance of being able to deliver more processing power, within a small power envelope, in order to deploy the new wave of deep neural network-based algorithms.
“We’ve determined that while deep neural networks have shown massive gains in accuracy over traditional approaches, those gains come at a cost in terms of power. Deep neural networks might get you from 90-percent to 99-percent accuracy, but it might cost you in terms of power consumption,” El-Ouazzane said.
As a way to improve performance, Intel’s Movidius group has been developing its VPUs to run these classes of algorithms as efficiently as possible. “Through optimized tensor libraries running on the Myriad 2 VPU, we can deliver an unmatched level of compute for deep neural networks and other vision algorithms, all the while remaining under a 1W power budget,” El-Ouazzane said.

Danny Petkevich, Director of Product Management at Qualcomm Technologies, also cited the ability to provide sufficient processing for robust and accurate analytics as a major challenge for edge analytics.
Qualcomm has developed a comprehensive suite of camera platforms featuring enhanced hardware and capabilities for on-camera deep learning and video analytics. Petkevich explained that the platform is based on Qualcomm’s Snapdragon 625, which is designed to deliver 40 percent more CPU DMIPS of processing, with a GPU and DSP for advanced imaging and deep learning processes. Among its features are eight A53 CPU cores running at up to 2.2 GHz, along with a GPU. “Other camera SoCs provide one or two CPU cores, which limits the processing available for robust analytics,” he said.
Ultimately, “Better accuracy and robustness due to better models, higher fps for tracking high speed objects, and higher resolution to see objects farther away,” according to Petkevich, are just some of the ways better processing power will continue to drive edge analytics.
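For readers who want a concrete sense of how a trained network ends up running on a VPU such as the Myriad 2, the following minimal sketch uses Intel’s OpenVINO runtime and its MYRIAD device plugin; the toolkit choice and the model file are assumptions for illustration, not something the interviewees described.

```python
# Minimal sketch: running a pre-converted detection network on a Myriad-class VPU
# through OpenVINO (assumes a version that still ships the MYRIAD plugin).
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("person-detection.xml")              # hypothetical IR model converted offline
compiled = core.compile_model(model, device_name="MYRIAD")   # compile for the VPU

frame = np.zeros((1, 3, 320, 544), dtype=np.float32)         # stand-in for a preprocessed camera frame
detections = compiled([frame])[compiled.output(0)]           # inference runs on the VPU, not the host CPU
print(detections.shape)
```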
Benefits of edge analytics
The benefits of edge analytics are many. From real-time analysis to better accuracy and more efficiency, edge analytics has a lot to offer.
Chau named a few of the main benefits of edge analytics as follows: “First, the smart recognition capabilities of a single camera are increased, and when combined with other sensing technologies, they surpass the recognition capabilities of humans. Second, camera clustering enables data collision and cloud computing processing. Edge analytics mainly improves surveillance efficiency and reduces manpower requirements for users, and also implements smart surveillance under human supervision.”
Chau explained that edge analytics implements distributed, structured video data processing, taking each moment of recorded data from the camera and performing computation and analysis in real time. The results can then be processed into structured natural language descriptions and sent back to the backend storage and surveillance center. “This enables instant recognition, analysis and alarm triggering during emergency incidents which does not rely on backend servers. This also means that ultra-large-scale video analysis and processing can be achieved for projects such as safe cities where tens of thousands of real-time smart surveillance cameras are involved.”
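To make the idea of structured descriptions concrete, the sketch below shows one plausible shape for such a camera-to-backend message; the field names and schema are illustrative assumptions, not Dahua’s actual format.

```python
import json
from datetime import datetime, timezone

# Hypothetical structured event a camera might emit instead of raw video.
event = {
    "camera_id": "cam-0042",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "event_type": "vehicle_detected",
    "attributes": {"color": "white", "type": "sedan", "plate": "ABC123"},
    "description": "White sedan, plate ABC123, entering zone 3",
}

payload = json.dumps(event).encode("utf-8")
print(f"{len(payload)} bytes")  # a few hundred bytes, versus megabits per second for raw video
```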
Reducing the work on backend servers also reduces bandwidth usage. Guo offered license plate recognition as an example: “The frontend cameras capture vehicle images, the image recognition algorithm is processed in the camera as well, and only the data that conforms to the defined rules is transmitted. If the massive raw data were instead sent to the backend server for processing, the requirement for internet bandwidth would be huge. Compared to edge analytics, the risk of the backend server for analytics being damaged is indeed less; however, once it happens, the aftermath is more serious. Edge analytics disperses such risk to different spots. Besides, edge analytics helps to save transmission bandwidth.”
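A back-of-the-envelope comparison illustrates the scale of those savings; the bit rate and event sizes below are assumptions for the example, not figures from Hikvision.

```python
# Illustrative bandwidth comparison; all figures are assumed for the example.
RAW_STREAM_MBPS = 4.0     # a typical 1080p H.264 stream, megabits per second
EVENT_SIZE_BYTES = 300    # one license plate recognition metadata message
EVENTS_PER_HOUR = 500     # vehicles passing the camera in a busy hour

raw_bytes_per_hour = RAW_STREAM_MBPS * 1_000_000 / 8 * 3600
event_bytes_per_hour = EVENT_SIZE_BYTES * EVENTS_PER_HOUR

print(f"Raw video:     {raw_bytes_per_hour / 1e6:,.0f} MB per hour")
print(f"Metadata only: {event_bytes_per_hour / 1e6:.3f} MB per hour")
print(f"Reduction:     {raw_bytes_per_hour / event_bytes_per_hour:,.0f}x")
```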
El-Ouazzane noted that bandwidth can be reduced by switching from raw transmission to metadata only, or more realistically to temporal or region-specific encoding. He added, “Networks will also become more robust, with decentralized points of failure. On-device intelligence will also yield a host of new autonomous applications in terms of PTZ (pan-tilt-zoom) capabilities and networks of cameras being able to track events of interest as they move or disperse across locations.”
Slow and steady
If we’ve learned anything from video analytics in security, it is that growth and adoption have followed the path of the tortoise rather than the hare. Regardless, analytics has continued to grow and has become an integral part of video surveillance. As technology continues to advance, allowing for better accuracy, and as connectivity becomes unavoidable, edge analytics, with the help of deep learning, will continue to improve and may someday soon live up to the expectations that have long been promised.