7 critical challenges of integrating AI agents into VMS

Date: 2025/04/09
Source: Prasanth Aby Thomas, Consultant Editor
The vision of AI agents revolutionizing video management systems (VMS) is captivating: systems that don’t just record and alert but perceive, understand, and act - creating a dynamic, intelligent layer in modern security infrastructure.
 
But behind the promise lies a complex path. For VMS companies, integrating AI agents into their platforms is not just a technical leap - it’s an organizational, ethical, and architectural transformation. As vendors move from traditional logic-based programming to adaptive, autonomous agents, they face a host of challenges that require careful navigation.
 

1. Balancing innovation with responsibility 

For Rahul Yadav, Chief Technology Officer at Milestone Systems, the first challenge is not technical - it’s philosophical.
 
“The primary challenge is balancing innovation with responsibility,” Yadav said. “As systems become more autonomous, maintaining ethical standards and user trust becomes critical.”
 
In an era where security platforms may be empowered to make decisions without human input - locking doors, redirecting security personnel, or escalating incidents - questions about accountability, transparency, and control become unavoidable.
 
“Organizations select security partners based on their track record of responsible innovation,” Yadav continued. “Just as you wouldn't trust a self-driving car from a company with a questionable reputation.”
 
For VMS vendors, building that trust means implementing explainable AI, offering override options, and being transparent about how agents make decisions. It also means ongoing dialogue with customers and regulators to align system behavior with ethical and legal expectations.
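One way to picture what "explainable AI with override options" might look like in practice is a decision record that carries a plain-language rationale and logs any operator override. This is a minimal, hypothetical sketch - the class and field names are illustrative, not part of any vendor's actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentDecision:
    """One autonomous action, kept explainable and overridable."""
    action: str        # e.g. "lock_door" (illustrative action name)
    target: str        # e.g. "entrance_b"
    confidence: float  # model confidence in the range 0.0-1.0
    rationale: str     # plain-language explanation shown to the operator
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    overridden: bool = False

    def override(self, operator_id: str) -> None:
        """Record a manual override so the audit trail stays complete."""
        self.overridden = True
        self.rationale += f" [overridden by {operator_id}]"

decision = AgentDecision(
    action="lock_door",
    target="entrance_b",
    confidence=0.92,
    rationale="Unbadged person detected tailgating through entrance B",
)
decision.override("operator_17")
```

The point of the structure is that every decision ships with its reason and its audit trail, which is the raw material both for operator trust and for the regulatory documentation discussed later in this article.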
 

2. The data dilemma: quality over quantity

 
The next challenge is foundational: data.
 
“Great AI tools require great data,” Yadav noted. “Companies with robust data infrastructure will be able to accelerate their AI initiatives, while those lacking quality data risk falling behind.”
 
AI agents are only as good as the data used to train them. If the data is incomplete, biased, or poorly labeled, the agent's performance will suffer - leading to misjudged contexts, false positives, or, worse, missed threats.
 
This creates a dual task for VMS companies. First, they must build or partner for access to rich, diverse, and ethically sourced datasets. Second, they must develop frameworks for ethical data usage, ensuring compliance with regulations like GDPR and emerging AI acts in the EU and elsewhere.

3. Infrastructure overhaul: from CPU to GPU

From a technical standpoint, the shift to AI agents demands a fundamental change in how video surveillance systems are architected.
 
“We're moving from traditional CPU processing to GPU-focused architectures that require completely different approaches to system design,” Yadav explained. “This shift represents a substantial investment hurdle.”
 
Unlike traditional video analytics, which might process a few frames per second to detect motion or line-crossing, AI agents require real-time analysis of massive video streams and contextual data. That means high-performance GPU hardware - something that doesn't come cheap.
 
“Even mid-sized security operations need to allocate $200,000 to $300,000 for adequate GPU hardware to enable advanced AI capabilities,” Yadav added.

However, the opportunity lies in how companies approach this transition. Modular, scalable solutions and hybrid edge-cloud architectures could help reduce upfront costs while maintaining flexibility. 

4. Solving the right problems

For Florian Matusek, Director of AI Strategy and Managing Director of Genetec Vienna, the starting point must always be the use case.
 
“The first question to ask is which problem AI agents will solve,” Matusek said. “They shouldn't be implemented for their own sake but to address a specific need.”
 
AI agents should not be viewed as one-size-fits-all solutions. Instead, vendors must work with end users to identify pain points - whether it's managing alerts during off-hours, identifying anomalies in crowd behavior, or optimizing access control across multiple sites.
 
Once the need is defined, companies face the “make-or-buy” decision. Should they develop AI agents in-house or integrate off-the-shelf models?
 
“If you choose to use agents, you must answer the make-or-buy question,” Matusek said. “Should you use off-the-shelf models or train them for a specific use case?”
 
Each path has trade-offs. Off-the-shelf models can be quicker to deploy but may lack domain specificity. Custom-trained models offer better performance in niche environments but require time, data, and AI talent.

5. Multi-agent coordination: who’s in charge?

Another emerging question is how AI agents interact with one another. In a typical surveillance setup, multiple agents may be deployed - one focused on perimeter breaches, another on crowd density, and yet another on abnormal behavior.
 
Matusek asks, “Should there be a master agent managing specialized agents?”
 
This “orchestration layer” becomes essential when AI agents work in tandem. Without a hierarchical or cooperative structure, agents may act at cross purposes - escalating incidents redundantly or missing the bigger picture.
 
This brings us back to architectural complexity. VMS platforms must evolve to support not just isolated AI features but intelligent ecosystems where agents collaborate under centralized logic.
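To make the orchestration idea concrete, here is a hypothetical sketch of a master agent that routes events to specialized agents and deduplicates their escalations, so the same incident is not raised twice. All class names, event fields, and the toy matching logic are illustrative assumptions, not any vendor's design:

```python
class SpecializedAgent:
    """A narrow agent that only assesses events in its own domain."""

    def __init__(self, name, event_types):
        self.name = name
        self.event_types = event_types  # event types this agent handles

    def assess(self, event):
        # Toy logic: escalate any event that falls in this agent's domain.
        if event["type"] in self.event_types:
            return {"agent": self.name, "incident": event["id"]}
        return None


class MasterAgent:
    """Orchestration layer: dispatches events, deduplicates escalations."""

    def __init__(self, agents):
        self.agents = agents
        self.seen_incidents = set()

    def dispatch(self, event):
        escalations = []
        for agent in self.agents:
            result = agent.assess(event)
            # Only the first escalation per incident goes out, so two
            # agents reacting to the same event do not double-escalate.
            if result and result["incident"] not in self.seen_incidents:
                self.seen_incidents.add(result["incident"])
                escalations.append(result)
        return escalations


master = MasterAgent([
    SpecializedAgent("perimeter", {"fence_breach"}),
    SpecializedAgent("crowd", {"crowd_density", "fence_breach"}),
])
first = master.dispatch({"id": "evt-1", "type": "fence_breach"})
repeat = master.dispatch({"id": "evt-1", "type": "fence_breach"})
```

Even in this toy form, the design choice is visible: the specialized agents stay simple, and the cross-agent policy (here, deduplication) lives in one place rather than being reimplemented in every agent.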

6. The trust gap: human-in-the-loop or not?

Autonomy may be the buzzword, but trust is the currency.
 
Security operators are used to being in control. The idea of agents making independent decisions can be unsettling - especially in critical infrastructure, healthcare, or government applications.
 
One solution gaining ground is the human-in-the-loop approach, where agents recommend or initiate actions, but humans retain final approval.
 
Over time, as confidence in the agents grows and false positives decrease, more control can be ceded. But this transition must be gradual, transparent, and measurable. 
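That gradual handover of control can be expressed as a simple confidence gate: actions above an operator-set threshold execute automatically, everything else waits for human approval. The function, threshold values, and action names below are a hypothetical sketch of the pattern, not a real product's behavior:

```python
def route_action(action, confidence, auto_threshold=0.95):
    """Return 'auto' to execute immediately, 'pending' to await approval.

    Lowering auto_threshold over time is the measurable knob by which
    operators cede more control as trust in the agent grows.
    """
    if confidence >= auto_threshold:
        return "auto"     # high confidence: execute, log for later review
    return "pending"      # otherwise: queue for operator approval

# Early deployment: a threshold of 0.99 keeps humans in the loop
# for nearly everything.
early = route_action("lock_door", 0.92, auto_threshold=0.99)

# Later, once false positives have demonstrably dropped, the same
# decision can be allowed through automatically.
later = route_action("lock_door", 0.92, auto_threshold=0.90)
```

Because the threshold is a single explicit number, the transition from full human review to partial autonomy stays transparent and auditable, which is exactly the property the human-in-the-loop approach is meant to preserve.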

7. Regulatory readiness

As AI regulation tightens globally, VMS vendors face an additional burden - compliance.

The EU AI Act, for instance, classifies AI systems used in surveillance as high-risk, demanding transparency, risk management, and documentation. In regions like the Middle East and Asia, governments are beginning to follow suit with their own data and AI policies.
 
This means VMS companies must build AI agents with compliance in mind from day one - considering not only how they function but how they’re audited and explained.

A transformational opportunity - if navigated right 

Despite the roadblocks, both Yadav and Matusek agree that the rewards of successful integration far outweigh the challenges.
 
When done right, AI agents can create intelligent environments where security operations are predictive, not reactive.
 
And for integrators, this evolution presents new business opportunities - from consulting on AI infrastructure to deploying and managing AI-powered VMS platforms.

But it will also demand upskilling. Understanding AI agents, training data, and ethical considerations will become as critical to a security integrator’s toolkit as knowing camera placement or networking basics.
 