Deepfakes are emerging threats. What’s being done to counter them?

Deepfake video has emerged as a new threat in video surveillance. End user organizations can suffer significantly if targeted by hostile actors using deepfakes. This article explains how some providers have already rolled out solutions that protect end users.
 
Anyone who spends time on social media will be familiar with deepfake videos: clips of world leaders singing together or celebrities doing strange things. These are not, of course, the people themselves, but deep learning-generated likenesses of them, hence the name "deepfake."
 
Such videos may be fun to watch, but deepfakes can also have serious consequences for end user organizations. In particular, attackers can use deepfake technology for blackmail or identity theft.
 
“An attacker can use an ad-hoc deepfake (video and/or audio) to blackmail a company associate into sharing confidential information, granting access to internal systems, or more. The deepfake material will have a compromising, intimate/sexual nature to be usable as leverage for blackmail,” said Giorgio Patrini, CEO and Chief Scientist at Sensity.
 
He added: “An attacker can target an online authentication system with deepfakes and create a new account for online financial services, for example online banking. The attack starts by obtaining a passport and identifying the victim on social media or other websites where the target has shared videos or photos of their face.”
 
“This material can then be used to train a system for real-time deepfake face swapping,” he continued. “Finally, the attacker goes to the online banking website and can open a new account in the name of the target by presenting the stolen passport, successfully bypassing biometric liveness detection by hijacking the webcam feed with the real-time deepfake, or successfully bypassing face matching between the camera input and the stolen passport photo.”
 

Video systems being compromised

 
There is also the possibility that the end user’s own video surveillance system is compromised with deepfake video. “Malicious actors can now weaponize an organization’s own recorded footage against it. Every interaction and incident recorded by a video security camera on a site can now easily be altered, if the integrity of that footage is not protected with the right technology and it falls into the wrong hands,” said Peter Kim, Global Technical Consultant at IDIS.
 
With video credibility at stake, prosecutors and defense teams may find it harder to present their cases in court.
 
“Many cases rely heavily on video footage from cellphones, body worn cameras, and dashcams, as well as surveillance systems. In some situations, the video reinforces the prosecution’s case, while in others it exonerates an innocent party,” Kim said. “All this value is potentially at risk of being undermined by deepfakes.”
 

What anti-deepfake tools are there?

 
If not dealt with, deepfakes can threaten the reputation, if not the very existence, of an end user entity. Fortunately, many suppliers have rolled out solutions to detect deepfakes and protect end users against them.
 
Some providers use AI-based software engines to detect AI-generated deepfakes. Estonia-based Sentinel, for example, has an AI-based platform modeled after cybersecurity’s standard of Defense in Depth (DiD). The user uploads the video in question to the platform, which then analyzes it for signs of AI forgery and determines whether it is a deepfake.
 
IDIS video recorders, meanwhile, use IDIS’s Chained Fingerprint technology to ensure the integrity of the recorded and exported video data. If any part of the image frame is tampered with, the fingerprint chain will be broken and will not match the chain value calculated at the time of video export, prompting a flag.
 
“It works like this: each frame is assigned a unique ‘fingerprint,’ calculated by relating its own pixel value to the fingerprint of the previous frame. This means that every single image frame of the video is linked by an encryption ‘chain’ with its neighboring image frames,” Kim explained.
 
“The chained encryption value is stored as part of the video data when the video is recorded, or exported as a video clip using the IDIS ClipPlayer. Before playback, the ClipPlayer scans the video, recalculates the fingerprint chains of the video data, and compares the recalculated fingerprint value to the stored value to confirm whether there has been a change,” he added.
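The mechanism Kim describes can be sketched as a simple hash chain, where each frame's fingerprint covers both the frame's contents and the previous fingerprint. This is a minimal illustration of the principle, not IDIS's actual implementation; the use of SHA-256 and the seed value are assumptions.

```python
import hashlib

def chain_fingerprints(frames, seed=b"stream-seed"):
    """Compute a chained fingerprint per frame: each hash covers the
    frame's bytes plus the previous frame's fingerprint, linking every
    frame to its neighbors."""
    fingerprints = []
    prev = seed
    for frame in frames:
        fp = hashlib.sha256(prev + frame).digest()
        fingerprints.append(fp)
        prev = fp
    return fingerprints

def verify(frames, fingerprints, seed=b"stream-seed"):
    """Recompute the chain and compare to the stored values. Tampering
    with any frame breaks every fingerprint from that point onward."""
    return chain_fingerprints(frames, seed) == fingerprints
```

Because each fingerprint depends on all earlier frames, an attacker cannot replace a single frame without recomputing (and somehow re-storing) the rest of the chain, which is what makes tampering detectable at export or playback time.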
 
Another issue is whether the video the NVR is receiving is the actual video from the actual camera. To this end, IDIS has a solution as well.
 
“From DirectIP v2.0 onwards IDIS has used certificate-based mutual authentication between all IDIS network cameras and IDIS NVRs to prevent spoofing from happening. Certificate information is exchanged during the camera registration to the NVR, which must be authenticated when the NVR communicates with the camera for the system to operate. In turn, this gives both systems integrators and end-users the assurance that IDIS technology is not only cybersecure but protected from the threat of spoofing,” Kim said.
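IDIS's scheme is certificate-based; as a loose illustration of the underlying mutual-authentication principle only, here is a minimal challenge-response sketch in which both sides prove knowledge of a secret exchanged at camera registration. HMAC over a shared secret stands in for the certificate exchange, and all names are hypothetical.

```python
import hashlib
import hmac
import os

def prove(secret: bytes, challenge: bytes) -> bytes:
    """Answer a challenge by keyed-hashing it with the registered secret."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def mutual_auth(nvr_copy: bytes, camera_copy: bytes) -> bool:
    """Each side challenges the other; video is accepted only if both
    prove knowledge of the secret stored at registration time."""
    # NVR challenges the camera
    c1 = os.urandom(16)
    if not hmac.compare_digest(prove(camera_copy, c1), prove(nvr_copy, c1)):
        return False
    # Camera challenges the NVR in turn
    c2 = os.urandom(16)
    return hmac.compare_digest(prove(nvr_copy, c2), prove(camera_copy, c2))
```

The point of mutuality is that a spoofed camera fails the NVR's challenge, and a rogue recorder fails the camera's, so an injected feed is rejected before any video is trusted.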

