In the future, AI at the edge will be realized broadly in the areas of autonomous vehicles and security cameras, according to a whitepaper published by technology research firm Tractica.
“The automotive sector is one of the leading adopters of on-device AI processing,” says the whitepaper, “How Will 5G + AI Transform the Wireless Edge.” It is critical for cars to have local, embedded AI inference for immediate actions and critical decisions.
Cloud-based inference is not feasible because it requires an ultra-low-latency network for data transmission. Such a network is costly, and installing it “on a wide scale across all roads and highways” is not feasible, says the report.
Tractica points out that warnings about vehicles in blind spots or cars passing in an unexpected lane, as well as the in-vehicle camera feed, will arrive as sensor inputs. “They will be processed by on-device AI to make critical decisions,” says the report.
In addition, on-device AI can enhance a car’s safety features as well as its driver assistance system. Animal detection, pedestrian detection and brake assistance are among the capabilities that can be enabled by in-vehicle AI.
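As an illustration of the kind of decision such a system must make locally, the sketch below checks whether a detected obstacle lies within the car's stopping distance and, if so, signals automatic braking. This is a minimal sketch, not anything from the Tractica report; the function name, reaction time and deceleration figures are illustrative assumptions.

```python
def brake_assist(distance_m: float, speed_mps: float,
                 reaction_time_s: float = 1.0,
                 decel_mps2: float = 7.0) -> bool:
    """Return True if automatic braking should trigger.

    Stopping distance = reaction distance (v * t_react)
                      + braking distance (v^2 / 2a).
    """
    stopping_m = speed_mps * reaction_time_s + speed_mps ** 2 / (2 * decel_mps2)
    return stopping_m >= distance_m

# Pedestrian detected 30 m ahead while travelling 20 m/s (~72 km/h):
print(brake_assist(30.0, 20.0))   # → True (stopping distance ≈ 48.6 m)

# Same pedestrian 100 m ahead at 10 m/s: no intervention needed.
print(brake_assist(100.0, 10.0))  # → False
```

Because this check runs on every sensor update, keeping it on-device rather than round-tripping to the cloud is what makes the immediate reaction possible.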
Edge AI will also assist with in-vehicle facial recognition, voice assistants, personalized passenger settings and AR heads-up displays.
The use of in-vehicle cameras is set to increase, which will drive broader adoption of deep learning and embedded AI hardware in the automotive industry, Tractica points out.
Security camera application
It also makes sense to apply edge AI to security cameras. As the technology advances, an increasing number of cameras are performing object recognition, facial recognition or even emotion recognition.
As a result, a large volume of image content needs to be processed, stored and transferred, which can be expensive if everything is done through the cloud. An edge solution avoids much of that cost. “With on-device AI processing, cameras can filter out interesting bits of content and pass it over to the cloud for processing, rather than transporting the full camera feed,” Tractica says.
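The filter-at-the-edge pattern described above can be sketched in a few lines: run a lightweight detector on every frame locally, and upload only the frames that contain something of interest. The detector below is a stand-in stub, and all names are hypothetical rather than any vendor's API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Frame:
    frame_id: int
    data: bytes  # raw image bytes from the camera

def filter_for_cloud(frames: List[Frame],
                     detect: Callable[[Frame], bool]) -> List[Frame]:
    """Run the on-device detector on each frame; keep only frames
    with detections, so only those are sent to the cloud."""
    return [f for f in frames if detect(f)]

# Stub detector for illustration: pretend even-numbered frames
# contain a person (a real camera would run an embedded vision model).
detect = lambda f: f.frame_id % 2 == 0

frames = [Frame(i, b"") for i in range(6)]
uploaded = filter_for_cloud(frames, detect)
print([f.frame_id for f in uploaded])  # → [0, 2, 4]
```

The bandwidth saving is simply the fraction of frames the local model rejects; the full feed never leaves the device.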
Qualcomm Technologies and Microsoft have signed up to this vision. The two companies are working together to create solutions like home monitoring cameras, enterprise security cameras and smart home devices that use on-device vision AI in retail, manufacturing, logistics applications and more, according to the report.
Something similar is also happening at China-based Horizon Robotics, which is providing security camera makers with a vision processor that can process AI locally on the device.
Robot application
Many robots today rely on cloud-based AI for many tasks. With cloud processing, however, a household robot takes a few extra seconds to respond to a query, or slightly longer to recognize an individual as he or she enters the house. On-device processing, on the other hand, could minimize response delays and enhance the user experience, Tractica says.
Privacy is another key factor for moving AI to the edge, especially for consumer robots, Tractica says. “These robots are collecting sensitive user information, including household objects, home layout, photos, videos, and speech and voice patterns of children and adults in the house.”
Data privacy and security are critical for enterprise robots too, from logistics and warehouse operations to agriculture and customer service applications.
All in all, it is better to process the sensitive data on edge devices, whether in home or business robots.
Nonetheless, cloud and edge devices each have their unique advantages, and the two will complement each other and frequently be used in combination, since there are tradeoffs among performance, latency, power, cost and other factors, says Tractica.