Deep learning algorithms on the edge that can identify objects are all set to revolutionize the video surveillance industry.
Artificial intelligence continues to revolutionize the video surveillance industry, opening new opportunities for customers to gain greater return on their investments. Edge technology takes this a step further and may prove to be an even more significant game changer for security and business intelligence purposes.
At present, most edge analytics are limited to detecting an object and its movement. It takes server-side video management software and human interpretation to determine what the object is and what it is doing. With deep learning algorithms, however, cameras themselves can identify what an object is, what it is doing, and what action should be initiated.
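To make that shift concrete, here is a minimal, hypothetical sketch of the camera-side flow. The detector is a stub standing in for an on-camera deep learning model, and the labels, confidence threshold, and actions are illustrative assumptions rather than any vendor's actual API.

```python
# Illustrative sketch (not a vendor API): with a deep learning model on the
# camera, the device can report an object class and decide on an action
# itself, instead of sending compressed video to a server for interpretation.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "person", "car", "cyclist"
    confidence: float
    box: tuple        # (x, y, w, h) in pixels

def run_edge_model(frame) -> list[Detection]:
    """Stand-in for an on-camera deep learning detector.
    A real camera would run a quantized neural network on the raw,
    uncompressed frame; this stub returns canned results for illustration."""
    return [Detection("person", 0.91, (120, 40, 60, 150))]

def handle_frame(frame) -> list[str]:
    """The camera itself decides what was seen and what to do,
    with no round trip to server-side video management software."""
    actions = []
    for det in run_edge_model(frame):
        if det.label == "person" and det.confidence > 0.8:
            actions.append("alert: person detected, notify operator")
        elif det.confidence > 0.8:
            actions.append(f"log: {det.label} observed")
    return actions

if __name__ == "__main__":
    print(handle_frame(frame=None))  # frame omitted in this toy example
```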
The deep learning revolution
According to Andres Virgen, Global Product Manager at Axis Communications, running AI and deep learning algorithms at the camera level is one of the primary benefits of edge technology. There are several advantages to this approach.
“The greater accuracy of edge analytics – and the ability to distinguish between multiple classes of object – immediately reduces the rate of false positives,” Virgen wrote in a blog post recently. “With that comes a related reduction in time and resources to investigate these false positives. More proactively, edge analytics can create a more appropriate and timely response.”
For instance, when AI-enabled cameras are used for traffic management, the analytics at the edge can identify the objects captured in the footage and, if necessary, inform drivers in real time. Deep learning-based algorithms that can differentiate between object classes go a step further, ascertaining the severity of an issue and adjusting the warning level accordingly. If a camera can identify a person on the road, it can alert drivers to slow down.
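A short sketch of how that class-to-severity mapping might look follows. The classes, severity values, and the 0.8 confidence threshold are assumptions made for illustration; a real deployment would tune these per site.

```python
# Illustrative only: mapping detected object classes to driver warning levels.
# With motion-only analytics every detection looks the same; class-aware
# edge analytics lets the severity of the warning differ.

SEVERITY = {
    "person":  3,   # highest: pedestrian on the road, warn drivers to slow down
    "cyclist": 3,
    "animal":  2,
    "car":     1,   # ordinary traffic
    "truck":   1,
}

def warning_level(detections):
    """detections: iterable of (label, confidence) pairs from the on-camera model.
    Returns the warning level implied by the most severe confident detection."""
    levels = [SEVERITY.get(label, 0)
              for label, confidence in detections
              if confidence > 0.8]
    return max(levels, default=0)

print(warning_level([("car", 0.95), ("person", 0.88)]))  # -> 3: slow-down warning
```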
“Over time, developers behind analytics could see trends that would be of use not just for traffic management and planning but also for other agencies with an interest in wildlife behavior and conservation,” Virgen continued. “Being able to differentiate the type of traffic – pedestrians, cyclists, motorists, commercial vehicles – provides valuable trend insights that help civil engineers plan the smart cities of the future.”
These are not the only advantages. When AI runs at the camera level, the algorithms have access to the highest-quality footage. When footage is transmitted to a server, compression codecs inevitably degrade the quality, limiting the information the analytics software can work with. Server-side analytics also makes scaling up an issue: as new cameras are added, servers need additional capacity to process the extra footage. When analytics runs on the edge, no additional server-side infrastructure is needed.
A necessity as cameras increase
The number of surveillance cameras installed worldwide continues to grow as awareness of the need increases and costs decrease. But managing the growing number of cameras is a tedious task that requires tremendous processing power. Edge computing solves this problem.
“For a video surveillance network, this means more actions can be carried out on the cameras themselves,” Virgen says. “The role of artificial intelligence (AI), machine learning, and deep learning in video surveillance is growing, so we’re able to ‘teach’ our cameras to be far more intuitive about what they are filming and analyzing in real-time. For example, is the vehicle in the scene a car, a bus, or a truck? Is that a human or animal by the building? Are those shadows or an object in the road?”
Such insights would reduce the burden on servers and people, increasing efficiency and lowering costs. They would also shorten response times, which are often a critical factor when dealing with events like road accidents.