ISC West interview: Ambarella debuts next-generation edge GenAI technology

Date: 2025/04/22
Source: asmag.com Editorial Team
Ambarella, an edge AI semiconductor company, demonstrated at ISC West 2025 what is possible with generative AI at the edge. As a leading supplier of edge AI systems-on-chip (SoCs), Ambarella recently achieved the milestone of 30 million cumulative units shipped. At ISC West, the company reinforced that business and technology leadership with live demonstrations of its latest cutting-edge GenAI and vision AI capabilities.
 
The new demonstrations highlighted Ambarella’s ability to enable scalable, high-performance reasoning and vision AI applications across its ultra-efficient, edge-inference CVflow 3.0 AI SoC portfolio, which now supports most of the leading GenAI models from 0.5 to 34 billion parameters.
 
“So all of edge AI is now covered with generative AI,” said Amit Badlani, Director of Product for AI/ML at Ambarella, as he demoed the company’s solutions to asmag.com. “The way we are using generative AI is to generate text and automate workflows based on whatever visual data you get. For example, we take an eight-second video, and it provides you a summary of that eight-second video. The image shows a person holding an Amazon package in a room with individuals present, with generative AI detailing the indoor event space, carpeted floor, pattern, design and artificial lighting.”
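To make the described flow concrete, here is a minimal sketch of the general idea of turning a short clip into a text summary. It is not Ambarella’s SDK or demo code: it samples a few frames with OpenCV and captions them with an open-source vision model from Hugging Face as a stand-in, with the model choice, file name, and frame count all being illustrative assumptions.

```python
# Minimal sketch (not Ambarella's SDK): sample frames from a short clip and
# caption them with an open-source vision model, then join the captions into
# a rough text summary. Model and clip path are illustrative assumptions.
import cv2  # pip install opencv-python
from transformers import BlipProcessor, BlipForConditionalGeneration

MODEL_ID = "Salesforce/blip-image-captioning-base"  # stand-in captioning model
processor = BlipProcessor.from_pretrained(MODEL_ID)
model = BlipForConditionalGeneration.from_pretrained(MODEL_ID)

def summarize_clip(path: str, num_frames: int = 4) -> str:
    """Caption a few evenly spaced frames and concatenate the results."""
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    captions = []
    for i in range(num_frames):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * total // max(num_frames, 1))
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV gives BGR
        inputs = processor(images=rgb, return_tensors="pt")
        out = model.generate(**inputs, max_new_tokens=30)
        captions.append(processor.decode(out[0], skip_special_tokens=True))
    cap.release()
    return " / ".join(captions)

print(summarize_clip("clip_8s.mp4"))  # hypothetical eight-second clip
```

In a production edge deployment the heavy lifting would instead run on the SoC’s AI accelerator, but the input-to-summary pipeline is the same shape.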
 
In particular, the company debuted live demonstrations of the DeepSeek GenAI models running on three different price/performance levels of its SoC portfolio. These demos show how Ambarella is pushing the boundaries of real-time, AI-powered security and analytics by running state-of-the-art vision-language models (VLMs) for both on-device and centralized on-premise AI-hub applications with multimodal video intelligence.
 
These demonstrations further illustrate how Ambarella is bringing advanced reasoning capabilities to real-world applications without requiring cloud processing. Additionally, the scalable AI performance across its large portfolio of edge AI SoCs ensures that customers can deploy the same AI models across different product tiers, from high-performance computing to ultra-low-power inference.
 
“We are an ASIC (application-specific integrated circuit) maker. Our chips are purpose-made to process video and AI. At the end of the day, the result is you can record video and do AI at lower power and higher performance than you would with a general-purpose GPU or CPU,” said Jerome Gigot, VP of Marketing for Edge AI Products at Ambarella. “Typically we have a power budget … it can be from three watts to five watts … and the goal that we have is really to deliver as high a video quality, as much encoding and as much AI as we can right inside this kind of power envelope. That’s our design philosophy when we do chips.”
 
Another area of focus for Ambarella is making it easy for edge AI developers to get started. As the latest example of those investments, the company demonstrated at ISC West its complete AI Model Garden — a vital and growing component of its Cooper Developer Platform. Several companies from Ambarella’s developer ecosystem also provided hardware and software demonstrations of what can be achieved with Ambarella’s SoCs, by taking full advantage of their AI performance per watt.
 
“This is Cooper Home, our platform. Cooper is short for cooperation; it’s the branding for our developer platform. Customers’ and developers’ teams can use it to build applications. Some of the applications you see here we developed ourselves, but we encourage our partners to develop others as well,” Badlani said.
 
The following were additional highlights from some of Ambarella’s key demonstrations at ISC West:
 
•    DeepSeek 3-in-1 GenAI Reasoning: This demonstration runs the DeepSeek R1 Qwen 1.5B model on the CV7 SoC family and DeepSeek R1 Qwen 7B on the N1 SoC family, showcasing Ambarella’s seamless scalability for processing reasoning models across its CVflow 3.0 edge AI SoC portfolio.
 
•    Multi-stream, Multi-channel Video Decoding with Visual Analytics Powered by CLIP & LLaVA One-Vision Models on Cooper Kits: As an example of on-premise centralized AI processing, this set of AI-box demonstrations runs real-time CLIP models on multiple video streams in parallel, as well as SOTA VLMs that enable in-depth video analysis via a chat-based interface, allowing users to query specific insights on any of the streams (see the sketch after this list).
 
•    On-Device Generative AI in a Camera, Along with Deeper Insights in an AI Box: A multi-agent, multi-chip demonstration supporting VLMs and reasoning models with up to 1.5B parameters on-device, as well as deeper insights locally on an AI box, providing visual insights and event alerts in real time without needing the cloud, thereby preserving privacy and keeping total cost of ownership (TCO) lower.
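As a rough illustration of the multi-stream CLIP idea in the AI-box demonstration above, the sketch below scores the latest frame from several streams against a free-text query with an open CLIP model, so a chat-style front end could report which stream best matches. It is an assumption-laden stand-in, not Ambarella’s AI-box software; the model ID, stream names, and function are hypothetical.

```python
# Minimal sketch (illustrative, not Ambarella's AI-box software): rank a set
# of camera streams by how well their latest frame matches a text query,
# using an open CLIP model as a stand-in for the on-box analytics.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "openai/clip-vit-base-patch32"  # assumed stand-in model
model = CLIPModel.from_pretrained(MODEL_ID)
processor = CLIPProcessor.from_pretrained(MODEL_ID)

def rank_streams(frames: dict, query: str):
    """Return (stream_id, similarity) pairs sorted by relevance to the query."""
    names = list(frames)
    inputs = processor(text=[query], images=[frames[n] for n in names],
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # logits_per_image: one row per frame, one column per text query
    scores = out.logits_per_image[:, 0].tolist()
    return sorted(zip(names, scores), key=lambda x: x[1], reverse=True)

# Hypothetical usage with one frame grabbed per stream:
# frames = {"lobby": Image.open("lobby.jpg"), "dock": Image.open("dock.jpg")}
# print(rank_streams(frames, "a person carrying a package"))
```

The same ranking loop could run continuously over decoded streams, with the chat interface simply mapping user questions onto text queries like the one shown.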