Deepfakes have introduced new identity management challenges for financial institutions. In particular, deepfakes can be used to bypass the biometric verification that forms part of a bank's know-your-customer (KYC) vetting process. This article looks at deepfakes as a form of identity theft, how they can abet financial fraud, and how the situation can be remedied, based on Sensity's State of Deepfakes 2024 report.
What are deepfakes?
Deepfakes are AI-generated video clips or images depicting individuals saying or doing things they never actually said or did. Deepfakes are often used in scams, propaganda, political subversion and similar activities. In 2022, a video emerged featuring a somber-looking Volodymyr Zelenskyy asking Ukrainian forces to surrender. In 2023, a video of Hillary Clinton endorsing Ron DeSantis for president appeared. Both proved to be deepfakes.
In fact, according to Sumsub, there has been a tenfold increase in the number of deepfake incidents detected globally across all industries from 2022 to 2023, with a 1,740 percent surge in North America, 1,530 percent in APAC and 780 percent in Europe. At the same time, the world is seeing a proliferation of deepfake creation tools, which have become more accessible and easier to use.
Deepfake financial fraud
Today, deepfakes are used in KYC fraud by hostile actors intent on depositing or laundering money from illicit online activities. And the situation is likely to get worse. Deloitte's Center for Financial Services predicts that generative AI, the technology used to create deepfakes, could enable fraud losses in the United States to reach US$40 billion by 2027, up from US$12.3 billion in 2023, a compound annual growth rate of 32 percent.
The hostile actors in deepfake identity theft cases are often tech-savvy and know banking systems well. “The attackers, often part of organized cybercrime syndicates, possess in-depth knowledge of both artificial intelligence technologies and the security systems they aim to compromise. These tools are not just advanced in their technical capabilities but also increasingly accessible, making it easier for fraudsters to execute these operations seamlessly,” the Sensity report said.
According to the report, deepfake KYC fraud involves the use of altered or manipulated video/images to bypass biometric verification processes used in online banking, digital payment systems, and other services that use facial recognition for identity verification. The report further details the anatomy of a deepfake scam as follows.
First, the bad actor collects the potential target’s personal information and photo ID, such as a passport. The attacker then searches the internet and social media for selfie images of the target to build the deepfake model. The prepared deepfake is then deployed during the KYC process: a virtual camera or emulator feeds the generated images or video directly into the bank’s biometric verification system. According to the report, if the attack succeeds, the system is deceived into recognizing the deepfake as the legitimate user, granting unauthorized access or approval.
Countering deepfake financial fraud
Indeed, deepfakes bring new identity management challenges to various entities, including banks and financial institutions. According to the report, dealing with these challenges will require concerted efforts from policymakers, technology developers and the public to safeguard against the potential misuse of deepfakes. Deepfake detection solutions, such as those offered by Sensity, have also become vital for end-user entities.
“We equip companies and government agencies with the most advanced multilayer detection technology for AI-manipulated and fully synthesized video, images and audio. Our pixel level analysis focuses on the content to spot signs of manipulation and synthesis such as: face swap, lip sync, face reenactment, face morphing and AI-powered human avatars,” the report said.
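Sensity's actual detection stack is proprietary, but one family of pixel-level checks it alludes to can be illustrated with a toy example. The sketch below (an assumption for illustration, not Sensity's method) uses the observation that GAN-synthesized frames often show anomalous high-frequency spectra: it computes the fraction of an image's spectral energy above a radial frequency cutoff, a signal a detector might combine with many others before flagging a frame.

```python
import numpy as np

def high_freq_energy_ratio(image, cutoff=0.25):
    """Fraction of spectral energy above a radial frequency cutoff.

    `image` is a 2-D grayscale array. Synthesized or heavily
    manipulated frames often have spectral statistics that differ
    from camera-captured ones; a ratio far from a known-good
    baseline can flag a frame for closer inspection.
    """
    # Power spectrum, shifted so DC sits at the center.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    # Normalized radial distance from the spectrum center.
    radius = np.sqrt(((yy - h / 2) / h) ** 2 + ((xx - w / 2) / w) ** 2)
    return float(spectrum[radius > cutoff].sum() / spectrum.sum())

# A smooth gradient concentrates energy at low frequencies;
# adding noise spreads energy outward, raising the ratio.
smooth = np.tile(np.linspace(0, 1, 64), (64, 1))
noisy = smooth + np.random.default_rng(0).normal(0, 0.3, (64, 64))
print(high_freq_energy_ratio(smooth) < high_freq_energy_ratio(noisy))
```

A production detector would fuse scores like this with face-landmark consistency, blink and lip-sync analysis, and learned classifiers; no single statistic is reliable on its own.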
Other security solution providers, meanwhile, have also announced anti-deepfake and related offerings. For example, to verify that a network video recorder (NVR) is receiving genuine video from a genuine camera, IDIS offers a solution.
“From DirectIP v2.0 onwards IDIS has used certificate-based mutual authentication between all IDIS network cameras and IDIS NVRs to prevent spoofing from happening. Certificate information is exchanged during the camera registration to the NVR, which must be authenticated when the NVR communicates with the camera for the system to operate. In turn, this gives both systems integrators and end-users the assurance that IDIS technology is not only cybersecure but protected from the threat of spoofing,” said Peter Kim, Global Technical Consultant at IDIS, in a previous
interview with asmag.com.
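IDIS's DirectIP implementation is proprietary, but the principle Kim describes, certificate-based mutual authentication, is the same one behind mutual TLS: each side must present a certificate the other can validate before any video flows. The sketch below is a generic illustration using Python's standard `ssl` module, not IDIS's actual protocol; the file paths are hypothetical placeholders.

```python
import ssl

def make_nvr_context(ca_file=None, cert_file=None, key_file=None):
    """Server-side TLS context for an NVR that enforces mutual
    authentication: the camera must present a certificate chaining
    to the trusted device CA, and the NVR presents its own in turn.
    Paths are illustrative; a real deployment provisions these
    credentials during camera registration."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # Reject any peer (camera) that fails to present a valid certificate.
    ctx.verify_mode = ssl.CERT_REQUIRED
    if ca_file:
        ctx.load_verify_locations(ca_file)       # trust only the device CA
    if cert_file and key_file:
        ctx.load_cert_chain(cert_file, key_file)  # the NVR's own identity
    return ctx

# Hypothetical usage with certs exchanged at camera registration:
# ctx = make_nvr_context("device_ca.pem", "nvr_cert.pem", "nvr_key.pem")
# then wrap the camera's socket with ctx.wrap_socket(sock, server_side=True)
```

Because `verify_mode` is `CERT_REQUIRED`, the TLS handshake itself fails for an attacker injecting a synthetic stream without the registered camera's private key, which is what gives integrators assurance the feed originates from authenticated hardware.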