
ONVIF chair shares insights on GenAI benefits for security

Generative AI has become a hot topic across the globe. As a security publication, we’re interested in how physical security can benefit from generative AI. To that end, asmag.com asked Leo Levit, Chairman of the ONVIF Steering Committee, to share his insights on how the technology can play a useful role in security.
 
Generative AI is a branch of AI that can create new content – be it text, images, video, audio or software code – based on user prompts. Generative AI technologies such as ChatGPT have gained momentum worldwide. Then, just before the Chinese New Year, news that the China-developed DeepSeek could produce comparable results with far fewer computing resources jolted the global tech community, raising the prospect that generative AI will become even more widespread and popular.
 
We’ve already covered generative AI applications in security to some extent. In access control, for example, generative AI can further strengthen security by simulating potential bypass attempts and making biometrics more accurate. Tools such as ChatGPT can also help security users conduct product comparisons and cost-benefit analyses. To take a closer look at generative AI in security, we asked Levit to share his views, which are detailed below.
 
Q: Can you explain how generative AI is currently being applied in physical security? What specific problems in physical security can generative AI help solve that regular AI can’t?
 
Levit: Some models that are specific for certain vertical markets, for example in healthcare or education, may suffer from a lack of readily available training data for the AI models because of privacy issues in these settings. Generative AI can be used to create this data – synthetic images of patients and employees in a hospital or small children in a school for people detection – without risk of violating privacy rules such as GDPR or other regional privacy regulations as the images of people who are synthesized are not tied to an individual identity.
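To make the synthetic-data idea concrete, here is a minimal, hypothetical sketch in Python. Real pipelines would use GANs or diffusion models to render photorealistic people; in this toy version a dark rectangle stands in for a person silhouette composited onto a noise background, so each sample carries no real identity yet comes with a free ground-truth bounding box. All function and constant names are illustrative, not from any ONVIF or vendor API.

```python
import random

# Toy dimensions for the synthetic "images" (2D lists of grey values).
IMG_W, IMG_H = 64, 48          # frame size in pixels (assumed)
PERSON_W, PERSON_H = 8, 20     # silhouette size in pixels (assumed)

def make_synthetic_sample(rng: random.Random):
    """Return (image, bbox): a synthetic frame plus its ground-truth
    label (x, y, w, h). No real person is depicted, so no identity
    is tied to the training data."""
    # Random bright background noise, no real-world content.
    image = [[rng.randint(180, 255) for _ in range(IMG_W)]
             for _ in range(IMG_H)]
    # Place the stand-in "person" fully inside the frame.
    x = rng.randint(0, IMG_W - PERSON_W)
    y = rng.randint(0, IMG_H - PERSON_H)
    for row in range(y, y + PERSON_H):
        for col in range(x, x + PERSON_W):
            image[row][col] = 30   # dark silhouette pixels
    return image, (x, y, PERSON_W, PERSON_H)

def make_dataset(n: int, seed: int = 0):
    """Generate n labelled samples reproducibly from a seed."""
    rng = random.Random(seed)
    return [make_synthetic_sample(rng) for _ in range(n)]
```

The point of the sketch is that synthetic generation produces both the image and its annotation in one step, which is exactly what privacy-constrained settings such as hospitals or schools lack.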
 
Q: How can generative AI be used to enhance physical security? Especially, how can generative AI help video security systems better detect abnormalities and irregularities, compared to traditional AI?
 
Levit: Generative AI can be used to produce synthetic training data for use cases where available examples are scarce due to privacy concerns. More generally, AI is making security installations much less rigid in terms of technical parameters – for example, how a camera should be mounted to optimize analytics such as license plate recognition. AI models can now detect and recognize images with much higher performance, and the accuracy of the algorithms is much higher. This also helps significantly reduce false positives and false negatives, which otherwise add considerably to the overall cost of a security system. AI can also greatly expand the range of object types that can be detected, making it more useful in niche markets and specialty applications.

Related article: Generative AI in security: How video surveillance can benefit from it
 
Q: Can generative AI do a good job detecting deep fake video content?
 
Levit: It’s very similar to cybersecurity – your tools to defend and fortify your systems need to be as good as the tools used by hackers to breach these systems. As generative AI technology continues to advance, creating more sophisticated tools to manipulate video, so too must the tools advance to be able to detect it. At ONVIF, our approach is not to detect the use of generative AI in an image but to certify that the image is authentic, using a digital signature generated by the specific camera. This sets a baseline of trust and authenticity for the video, regardless of which generative AI tools might be used to alter it, and without having to spot specific video tampering techniques.
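The signing approach Levit describes can be sketched in a few lines. Note this is a simplified illustration, not ONVIF's actual media-signing mechanism, which involves per-device certificates and asymmetric signatures carried in the video stream; here an HMAC with a shared per-camera secret (Python stdlib only) stands in to show the principle: the camera signs each frame, and any later alteration invalidates the signature.

```python
import hashlib
import hmac

def sign_frame(camera_key: bytes, frame: bytes) -> bytes:
    """Camera side: produce a signature bound to this exact frame.
    (Simplified: real systems sign with the camera's private key.)"""
    return hmac.new(camera_key, frame, hashlib.sha256).digest()

def verify_frame(camera_key: bytes, frame: bytes, signature: bytes) -> bool:
    """Verifier side: recompute and compare in constant time.
    Any change to the frame bytes makes verification fail."""
    expected = hmac.new(camera_key, frame, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)
```

Usage: a frame signed at capture time verifies as authentic later, while a frame altered by any tool – generative AI included – fails verification, with no need to recognize the specific tampering technique.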
 
Q: What are the major challenges in implementing generative AI in physical security systems?
 
Levit: Many of the primary challenges are not technical but can be attributed more to concerns about privacy, ethics and transparency. End user organizations are weighing these considerations against the potential benefits. For example, the European AI Act does not prevent you from using AI, but it requires transparency about the intent and about who might be impacted by the use of AI.
 
On the technical side, we are still grappling with the computing power required for edge installations that need to run deep learning models, versus using the wider resources available in the cloud.
 
Q: How do you see generative AI in security in the near term? Will it gain wide acceptance in security?
 
Levit: I think generative AI is already here and will be widely used once we come to terms with some of the fears about this technology. We can’t stop innovating because of these fears, but we also need to put in place best practices and regulations to govern the use of these types of technologies. Transparency in these areas and continuing education on AI use cases are crucial to overall acceptance in security and beyond.

