Biometric systems are now deployed across a wide range of end user organizations. Increasingly, however, biometric solutions are becoming vulnerable to deepfake and spoofing attacks, which are now easier to execute thanks in part to generative AI and related technologies. Making products more secure and robust against these attacks, therefore, has become imperative for biometrics solutions providers.
Biometrics are the “what you are” factor in access control. Because biometric solutions authenticate users by their biological traits, they were once considered more secure than other means of access control. More and more, however, biological identities can be faked, rendering biometric solutions vulnerable to manipulative techniques such as deepfakes (synthetic multimedia files that appear to be real) and spoofing (the act of faking someone’s identity to gain unauthorized access).
“In access control, where the integrity of identity verification is critical, this poses significant risk. We’re seeing a marked increase in attempted presentation attacks – ranging from high-resolution printouts to sophisticated video injections – targeting face and voice modalities in particular,” said Alex Tan, Region Sales Head for ASEAN and East Asia at IDEMIA.
Notably, deepfake and spoofing attacks have become easier and less costly to execute, thanks to technologies such as generative AI.
“What’s changed in the last two years is the democratization of generative AI. Anyone with a laptop and minimal technical knowledge can now generate convincing fake videos or clone someone’s voice,” said Francesco Cavalli, Co-Founder, COO and Head of Threat Intelligence at Sensity AI.
“With generative AI tools now widely accessible, attackers can create synthetic faces, voices, and fingerprints that bypass traditional biometric systems. Security teams must understand: this is no longer theory – it’s reality. Attackers no longer need technical sophistication. They can download tools for free or purchase spoofing kits for as little as US$5,” said Kumar Ritesh, Founder and CEO of CYFIRMA.
Modalities that are vulnerable
Experts agree that face and voice are the two biometric modalities most vulnerable to deepfake and spoofing attacks, due to the wide availability of facial and voice data on the Internet.
“Face and voice can be remotely captured from publicly available data – photos, videos, social media posts, Zoom calls – and replicated using AI tools. For example, facial recognition systems can be spoofed with high-quality deepfake videos displayed on screens or 3D masks, while voice authentication systems are vulnerable to AI-generated synthetic speech mimicking the tone, pitch, and cadence of a specific individual,” Cavalli said.
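To make the screen-replay scenario concrete, the sketch below illustrates one classic passive cue: a recaptured screen often leaves periodic moire patterns, which stand out as isolated off-center peaks in an image’s Fourier spectrum. This is a toy Python illustration; the function name, the masking radius, and the synthetic test frames are all invented for this example, and production presentation attack detection relies on trained models combining many cues (texture, depth, reflection, motion).

```python
import numpy as np

def moire_score(gray: np.ndarray) -> float:
    """Toy screen-replay cue: a recaptured screen often leaves periodic
    moire patterns, which show up as isolated off-center peaks in the
    2-D Fourier spectrum. Returns the peak-to-median spectral ratio;
    a large value hints at periodic structure. Uncalibrated and
    illustrative only; not a production PAD algorithm."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(np.float64))))
    h, w = spec.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    # Zero out the DC / low-frequency disc, which dominates any natural image.
    spec[(yy - cy) ** 2 + (xx - cx) ** 2 < (min(h, w) // 16) ** 2] = 0.0
    return float(spec.max() / (np.median(spec) + 1e-9))

# Synthetic check: a frame with a periodic overlay (moire-like grating)
# scores far higher than plain noise.
rng = np.random.default_rng(0)
noise = rng.random((256, 256))
grating = noise + 0.5 * np.sin(0.9 * np.arange(256))[None, :]
print(moire_score(noise))    # modest ratio
print(moire_score(grating))  # much larger ratio
```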
According to Jeanne August, Business Development Executive at BioID, biometric modalities can be ranked by vulnerability as follows:
- Face recognition – Highly vulnerable: Attackers can use deepfake videos, 3D masks, or even AI-generated avatars to bypass systems, especially if liveness detection is weak;
- Voice recognition – Also very vulnerable: AI can clone voices with just a few seconds of recorded audio. These deepfake voices can fool voice authentication systems if they lack robust verification layers (see the sketch after this list);
- Fingerprint scanning – Moderately vulnerable: Physical spoofing using silicone molds or lifted prints remains a concern, but fingerprints are less exposed to deepfakes since attacks require physical access;
- Iris recognition – More secure, though not immune: High-resolution images of irises or contact lenses with iris patterns can be used for spoofing, but such attacks are harder to execute.
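The “robust verification layers” August refers to can start with a simple principle: never rely on a match score alone. The Python sketch below, in which every name and threshold is a hypothetical placeholder, illustrates layered acceptance: each modality must pass both a match check and a liveness check before access is granted.

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    """Hypothetical per-attempt scores, each normalized to [0, 1]."""
    face_match: float      # similarity to the enrolled face template
    voice_match: float     # similarity to the enrolled voiceprint
    face_liveness: float   # passive PAD score (1.0 = likely live)
    voice_liveness: float  # synthetic-speech detector score (1.0 = likely live)

def accept(sig: VerificationSignals) -> bool:
    """Toy layered decision: a strong match score alone is not enough;
    each modality must ALSO pass its liveness layer. Thresholds are
    illustrative placeholders, not recommendations."""
    face_ok = sig.face_match >= 0.90 and sig.face_liveness >= 0.80
    voice_ok = sig.voice_match >= 0.90 and sig.voice_liveness >= 0.80
    # Require both modalities so a single spoofed trait cannot pass alone.
    return face_ok and voice_ok

# A cloned voice may match well yet fail its liveness layer:
print(accept(VerificationSignals(0.97, 0.95, 0.91, 0.22)))  # False
```

The design point is that a cloned voice or a deepfake face may achieve a high match score yet still fail its liveness layer.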
Advanced security features imperative
Amid increased deepfake and spoofing attacks, adding advanced security features to biometric products has become imperative for solutions providers.
“The exponential advancement in generative AI and deepfake synthesis technologies necessitates proactive security posturing. Organizations must anticipate the convergence of digital and physical attack vectors and implement adaptive security frameworks capable of evolving alongside emerging threat landscapes. The development of reliable deepfake detection technologies will likely become essential to ensure the future security of access control systems,” said Hanchul Kim, CEO of Suprema.
“In today's environment, anti-spoofing capabilities are not optional – they’re mission-critical. Solutions providers must integrate proactive liveness and spoof detection measures that can dynamically adapt to evolving attack vectors. At IDEMIA, we believe this is a core responsibility, especially when dealing with national enterprise-grade security infrastructure, ID programs and border control,” Tan said.
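One way to make liveness detection “proactive,” as Tan puts it, is challenge-response: the system requests a random action and only trusts biometric captures bound to a fresh, server-authenticated challenge, which blunts replayed or injected video. The Python sketch below is a minimal illustration of that freshness idea, not any vendor’s implementation; the challenge list, key handling, and expiry window are all illustrative assumptions.

```python
import hashlib
import hmac
import secrets
import time

SERVER_KEY = secrets.token_bytes(32)  # hypothetical server-side secret
CHALLENGES = ["turn head left", "blink twice", "say the digits 4 7 2 9"]

def issue_challenge() -> tuple[str, str]:
    """Pick a random liveness action and bind it to a signed, expiring
    token so a pre-recorded or injected video cannot satisfy the request."""
    action = secrets.choice(CHALLENGES)
    nonce = secrets.token_hex(16)
    issued = time.time()
    tag = hmac.new(SERVER_KEY, f"{action}|{nonce}|{issued}".encode(),
                   hashlib.sha256).hexdigest()
    return action, f"{nonce}|{issued}|{tag}"

def verify_token(action: str, token: str, max_age_s: float = 30.0) -> bool:
    """Check the token is authentic and fresh before trusting any
    downstream face/voice match for this attempt."""
    nonce, issued, tag = token.split("|")
    expected = hmac.new(SERVER_KEY, f"{action}|{nonce}|{issued}".encode(),
                        hashlib.sha256).hexdigest()
    fresh = (time.time() - float(issued)) <= max_age_s
    return hmac.compare_digest(tag, expected) and fresh

action, token = issue_challenge()
print(action, verify_token(action, token))  # e.g. "blink twice" True
```

In a real deployment, the user’s response (the blink, head turn, or spoken digits) would also be verified against the issued action before the token is honored.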