With deepfake and spoofing attacks becoming more widespread, being able to detect whether an identity is fake or real has become critical. In this regard, AI can play a key role. This article explores how AI can be used as a countermeasure against deepfake and spoofing attacks.
Increasingly, access control systems, especially those based on biometrics such as face and voice, have become targets of deepfake and spoofing attacks, which fool the system with faked identities. Such attacks are now easier to execute thanks to generative AI and related technologies. Against this backdrop, it has become imperative for biometrics solutions providers to harden their systems against these attacks.
“As deepfake and spoofing threats grow, solutions providers have a responsibility to integrate anti-spoofing and deepfake detection capabilities directly into their systems. Failing to do so undermines user trust and system integrity. Detection mechanisms should no longer be optional add-ons – they must become standard components of modern access control technologies,” said Jeanne August, Business Development Executive at BioID.
How AI can help
Against this backdrop, detecting whether an identity is real or a deepfake/spoofing attempt has become all the more important. Fortunately, AI can help in this regard. According to August, AI can play a central role in spoofing and deepfake detection in the following ways:
- Liveness detection: AI checks for real-time biological indicators to ensure the subject is a live person and not a video replay or a mask;
- Deepfake artifact detection: AI models (often convolutional neural networks or transformers) are trained on datasets of real versus deepfake samples to spot telltale signs such as unnatural lighting and shadows, inconsistent eye movement, pixel-level anomalies, frame jitter and lip-sync inconsistencies;
- Voice anti-spoofing: AI detects anomalies in vocal patterns, frequency artifacts, or lack of natural breath/noise, which are often found in synthesized audio;
- Challenge-response systems: The system asks users to perform randomized actions (for example smile, turn head, say a phrase), which are difficult to fake in real time using deepfake media.
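The challenge-response idea above can be sketched in a few lines. The following is a minimal toy illustration, not any vendor's implementation; all function names and the three-second window are invented for the example. The key properties are that the challenge is randomized at request time and the response must both match and arrive within a tight deadline, which is difficult for an offline deepfake-rendering pipeline to satisfy:

```python
import random
import time

# Hypothetical pool of randomized actions the system can request.
CHALLENGES = ["smile", "turn_head_left", "turn_head_right", "blink_twice", "say_phrase"]

def issue_challenge(rng=random):
    """Pick a random action the user must perform, plus a response deadline."""
    action = rng.choice(CHALLENGES)
    deadline = time.monotonic() + 3.0  # assumed 3-second response window
    return action, deadline

def verify_response(expected_action, deadline, observed_action):
    """Accept only a matching action observed before the deadline.

    In a real system, `observed_action` would come from a video/audio
    classifier watching the user; here it is just a string for illustration.
    """
    if time.monotonic() > deadline:
        return False  # too slow: consistent with offline deepfake generation
    return observed_action == expected_action

# A live user performing the requested action in time passes...
action, deadline = issue_challenge()
assert verify_response(action, deadline, action) is True
# ...while a replayed clip of the wrong gesture fails.
assert verify_response(action, deadline, "wrong_action") is False
```

Because the challenge is drawn at random per session, an attacker cannot pre-render a matching deepfake clip; they would have to synthesize the requested action live, within the deadline.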
Countering deepfakes with AI
Recognizing how powerful AI can be in countering deepfakes and spoofing attacks, solutions providers have rolled out AI technologies to protect users against these threats.
“At CYFIRMA, we’ve developed an AI-powered detection engine that combines liveness analysis, behavioral biometrics, and multi-modal verification to identify and block deepfakes and spoofing attempts. Our solution leverages advanced AI techniques – CNNs for facial and fingerprint spoof detection, RNNs for voice anomaly analysis, and GANs for adversarial training. The engine continuously learns and adapts to new attack patterns,” said Kumar Ritesh, Founder and CEO of CYFIRMA.
“At Sensity, we build advanced deepfake and spoofing detection technology that helps organizations verify the authenticity of visual and audio data. Our system uses an AI-powered multi-layered forensic approach, combining pixel-level forensic analysis, audio forensics, temporal consistency checks and file metadata forensics,” said Francesco Cavalli, Co-Founder, COO and Head of Threat Intelligence at Sensity AI.
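To make the "temporal consistency check" concrete, here is a deliberately simplified sketch; it is an illustration of the general idea, not Sensity's actual forensics. It measures the mean pixel change between consecutive grayscale frames and flags a sequence whose frame-to-frame differences jump erratically, a crude proxy for the frame jitter that generation pipelines can leave behind:

```python
def frame_diff(a, b):
    """Mean absolute pixel difference between two equally sized grayscale frames."""
    n = len(a) * len(a[0])
    return sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb)) / n

def jitter_score(frames):
    """Largest jump between consecutive frame-to-frame differences.

    A smooth, real video changes gradually, so successive diffs stay similar;
    an abrupt spike suggests a dropped, duplicated or regenerated frame.
    """
    diffs = [frame_diff(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]
    return max(abs(diffs[i + 1] - diffs[i]) for i in range(len(diffs) - 1))

def looks_jittery(frames, threshold=10.0):
    """Flag a sequence as suspicious; the threshold is an arbitrary toy value."""
    return jitter_score(frames) > threshold

# Smooth synthetic "video": 4x4 frames whose brightness rises by 1 per frame.
smooth = [[[t] * 4 for _ in range(4)] for t in range(6)]
# The same video with one frame replaced by a much brighter outlier.
jittery = [f if i != 3 else [[100] * 4 for _ in range(4)] for i, f in enumerate(smooth)]

assert not looks_jittery(smooth)
assert looks_jittery(jittery)
```

Production systems operate on real decoded video and combine many such signals (pixel-level forensics, audio forensics, metadata checks); this toy only shows why temporal consistency is a useful axis to examine.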
Biometrics solutions providers, meanwhile, have also enhanced their systems with AI-based deepfake/spoofing detection features. IDEMIA, for example, employs advanced liveness detection and AI-based presentation attack detection (PAD) engines as part of their biometric authentication algorithm and framework. “These systems analyze multimodal cues – such as cross-modal consistency checks, 3D facial topology, reflectance properties, and the various digital traces left by deepfake generation processes – and detect inconsistencies that are hard for synthetic media to mimic, helping to differentiate genuine biometric traits from spoofed inputs,” said Alex Tan, Region Sales Head for ASEAN and East Asia at IDEMIA.
Suprema also employs cutting-edge spoofing-detection technologies in their biometrics solutions. “Suprema facial authentication technology has passed the ISO/IEC 30107-3 PAD test, administered by iBeta, confirming its resilience against facial spoofing attempts,” said Hanchul Kim, CEO of Suprema, adding their fingerprint recognition technology also incorporates deep learning and advanced anti-spoofing technology to detect fake fingerprints.
“The Live Finger Detection (LFD) technology effectively identifies fake fingerprints made from materials like rubber, paper, film, clay, and silicone by analyzing unnatural features and inconsistencies. Additionally, Suprema's fingerprint recognition technology has received Grade 1 certification in the "Fingerprint Recognition Performance Test" conducted by the Korea National Biometric Test Center (K-NBTC) … which issues certificates only when the performance test standards and procedures recommended by international organizations such as ISO/IEC and JTC1/SC37 are met,” Kim said.