Smarter surveillance: how AI agents are redefining video monitoring
Date: 2025/04/14
Source: Prasanth Aby Thomas, Consultant Editor
As artificial intelligence agents become increasingly capable, video surveillance systems are entering a transformative era - one marked by intelligent automation, contextual decision-making, and enhanced operator support. These systems are no longer limited to executing static, pre-set rules.
Instead, they are evolving into responsive entities that can dynamically assess and act on real-time information, thereby redefining the role of human operators and the overall efficiency of security operations.
At the heart of this transformation is the concept of autonomy - how much decision-making authority can and should be delegated to machines. According to Florian Matusek, Director of AI Strategy and Managing Director of Genetec Vienna, this question is fundamental to the future of AI in video surveillance.
“AI agents can now automate more tasks by dynamically responding to situations without relying on pre-defined rules,” said Matusek. “Instead of asking how much human intervention will be necessary, we should consider how much should be required.”
This subtle yet important shift in framing reflects a broader trend within the industry: the move from reactive systems to proactive and predictive capabilities. Today’s video management systems (VMS) must operate in complex, mission-critical environments - from airports and power plants to city surveillance networks - where incorrect or delayed decisions can have severe consequences.
“VMS systems are deployed in critical locations where wrong decisions can have a big impact,” Matusek warned. “This is why humans should always be kept in the loop for critical decision-making so that final judgments are made with human oversight. AI systems should augment the users' abilities, not replace them.”
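The principle is easy to make concrete. The sketch below is a hypothetical illustration in Python, not drawn from Genetec's products: an agent triages alerts on its own when the stakes are low, but anything critical is routed to a human operator for the final judgment.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    ROUTINE = 1    # e.g., loitering, camera tamper warning
    ELEVATED = 2   # e.g., perimeter breach after hours
    CRITICAL = 3   # e.g., weapon detected, fire

@dataclass
class Alert:
    camera_id: str
    description: str
    severity: Severity

def triage(alert: Alert) -> str:
    """Route an alert: automate the routine, escalate the critical."""
    if alert.severity is Severity.CRITICAL:
        # Final judgment stays with a person; the agent only gathers context.
        return f"ESCALATE to operator: {alert.description} ({alert.camera_id})"
    if alert.severity is Severity.ELEVATED:
        return f"AUTO-RESPOND and notify operator: {alert.description}"
    return f"AUTO-HANDLE and log: {alert.description}"

print(triage(Alert("cam-12", "person in restricted zone", Severity.CRITICAL)))
```

The design choice mirrors Matusek's framing: the question is not how much human intervention is possible, but where it should be mandatory.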
From analytics to autonomy
At Milestone Systems, this transformation is viewed through the lens of what Chief Technology Officer Rahul Yadav calls “Action Quotient,” or AQ: a measure of how intelligently and autonomously a system can respond to stimuli, much as an autonomous vehicle interprets and acts on ever-changing road conditions.
“This shift represents what we call Action Quotient, or AQ, which is the power to act intelligently and autonomously, similar to how Tesla's self-driving cars don't just process road conditions but navigate complex traffic scenarios in real time,” Yadav explained.
In the context of video security, AQ translates into the ability of AI agents to detect anomalies, identify security threats, coordinate appropriate responses, and even predict future incidents based on patterns and historical data. According to Yadav, these systems improve continually, learning from every incident they process.
“AI agents can handle routine monitoring, identify threats, coordinate responses, and predict incidents,” he said. “Their value comes from learning from each incident and improving over time, creating increasingly effective security operations.”
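What “learning a baseline and flagging deviations” means in practice can be shown with a toy example. The Python sketch below is purely illustrative and assumes a simple z-score test over hourly event counts from one camera; production analytics use far richer models, but the underlying idea is the same.

```python
import statistics

def is_anomalous(history: list[float], current: float, z_threshold: float = 3.0) -> bool:
    """Flag a reading that sits far outside the recent baseline.

    A toy z-score test: learn what "normal" looks like from history,
    then flag values that deviate by more than z_threshold deviations.
    """
    if len(history) < 10:                        # too little data for a baseline
        return False
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1e-9   # avoid divide-by-zero
    return abs(current - mean) / stdev > z_threshold

# Hourly motion-event counts from one camera; the spike should be flagged.
baseline = [12, 9, 11, 14, 10, 13, 12, 11, 10, 12]
print(is_anomalous(baseline, 55))   # True: well beyond normal variation
print(is_anomalous(baseline, 13))   # False: within normal range
```

Each processed incident extends the history, so the baseline itself improves over time, which is the essence of the “learning from each incident” behavior Yadav describes.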
Despite these advances, both experts agree on one non-negotiable principle: human oversight remains indispensable. Technology, no matter how sophisticated, cannot entirely substitute for human judgment - especially in unpredictable or ethically sensitive scenarios.
“The most effective security operations combine technology and human expertise,” Yadav said. “Human operators excel at understanding context, making nuanced judgments, and handling unexpected situations. The key is finding the right balance where technology handles predictable scenarios while humans focus on situations requiring judgment and empathy.”
Privacy and bias: ethical frontiers of AI
As AI systems take on greater autonomy, they inevitably encounter ethical and regulatory challenges - particularly around data privacy, algorithmic fairness, and responsible usage. In video surveillance, where systems are constantly capturing and processing sensitive personal information, the stakes are particularly high.
Matusek cautioned against the risks of blindly trusting data without rigorous vetting and user consent. “AI systems are only as good as the data they have been fed,” he said. “Biased data sets lead to biased decisions and should be avoided.”
He emphasized that organizations developing AI-based video analytics must be diligent about data quality and transparency. “Data within data sets needs to be vetted and customer data should only be used after their explicit consent. This is why, whenever developing AI systems, responsible AI guidelines should be followed.”
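In a training pipeline, such a policy can be expressed as a simple gate. The sketch below is a hypothetical illustration, not Genetec's implementation: only footage that has been vetted and carries explicit consent is admitted to a training set.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: str
    source: str
    consent_on_file: bool
    vetted: bool          # reviewed for quality and labeling errors

def eligible_for_training(clips: list[Clip]) -> list[Clip]:
    """Keep only footage that is both vetted and covered by explicit consent."""
    return [c for c in clips if c.consent_on_file and c.vetted]

clips = [
    Clip("a1", "customer-site-7", consent_on_file=True,  vetted=True),
    Clip("a2", "customer-site-7", consent_on_file=False, vetted=True),   # no consent: excluded
    Clip("a3", "lab-capture",     consent_on_file=True,  vetted=False),  # unvetted: excluded
]
print([c.clip_id for c in eligible_for_training(clips)])   # ['a1']
```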
Yadav echoed these concerns and framed them as both a moral obligation and a strategic advantage. “Responsible technology development has become a crucial competitive advantage,” he said. “Organizations must prioritize ethical frameworks that protect privacy while enabling innovation and build trust with users who select security partners based on their ethical track record.”
To operationalize this, Milestone is building robust governance structures that dictate how data - especially video data - is collected, stored, processed, and used to train machine learning models.
“Privacy considerations are paramount in video security where sensitive information is constantly captured,” Yadav noted. “VMS companies must develop clear governance frameworks for data usage, especially when training AI models. We're exploring how to leverage video data ethically, creating systems trained on responsibly sourced data.”
The issue of bias remains one of the most persistent - and potentially dangerous - challenges in AI development. Biased training data can lead to systems that unfairly target or overlook certain demographic groups, introducing risks of both over-policing and under-detection.
“Bias presents another critical challenge,” Yadav said. “AI systems learn from their training data, and any biases will be reflected in the resulting systems. Great AI requires not just abundant data but ethically sourced, diverse data that covers the full spectrum of scenarios without unfairly favoring certain groups or situations.”
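One basic check behind “diverse data” is simply measuring how training examples are distributed across groups or scenarios. The Python sketch below is illustrative only; a real fairness audit would also compare label quality and error rates per group, but a representation count is a common first step.

```python
from collections import Counter

def audit_representation(labels: list[str], min_share: float = 0.10) -> dict[str, float]:
    """Report each group's share of a training set and flag the thin ones."""
    counts = Counter(labels)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    for group, share in sorted(shares.items(), key=lambda kv: kv[1]):
        flag = "  <-- underrepresented" if share < min_share else ""
        print(f"{group:>10}: {share:.1%}{flag}")
    return shares

# Hypothetical scene labels from a video training set.
audit_representation(["daytime"] * 70 + ["night"] * 22 + ["rain"] * 5 + ["snow"] * 3)
```

Run on this hypothetical set, the audit flags rain and snow scenes as underrepresented: exactly the kind of gap that, left unaddressed, becomes under-detection in the field.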
Implications for integrators and end users
For systems integrators, consultants, and end users, the evolution of AI in video surveillance brings both new opportunities and new responsibilities. On the one hand, AI-powered automation promises to improve operational efficiency, reduce false alarms, and enable faster response times. On the other, it demands a deeper understanding of how these systems work - and what ethical standards they should be held to.
The transition also necessitates a shift in training. Security personnel must be educated not just on how to use AI tools, but on how to supervise them. Integrators need to know how to assess AI offerings for bias, privacy compliance, and operational transparency.
Moreover, buyers are becoming more discerning. Organizations increasingly seek partners who can offer not only technical performance but also a clear commitment to responsible AI development. As Yadav put it, “Users select security partners based on their ethical track record.”
Looking ahead
Both Genetec and Milestone agree that the path forward lies in partnership - between human intelligence and machine learning, between innovation and governance. AI will not replace the human element in video security; instead, it will redefine and elevate it.
“AI systems should augment the users' abilities, not replace them,” said Matusek. This sentiment may well define the next generation of video surveillance: smart, ethical, and above all, human-centric.
As AI agents continue to mature, the challenge for the security industry will be to embrace this potential responsibly - ensuring that smarter surveillance doesn’t come at the cost of trust, fairness, or accountability.