Deepfake threat to face biometrics prompts enterprise doubt by 2026
By 2026, 30% of enterprises will question the reliability of identity verification and authentication solutions due to the rise in AI-generated deepfake attacks on face biometrics, Gartner predicts. These artificially generated images, known as deepfakes, mimic real people's faces and have significantly disrupted biometric authentication, because they make it difficult to determine whether the face presented for verification belongs to a live person or is a deepfake, according to Akif Khan, VP Analyst at Gartner.
"Current standards and testing processes to define and assess presentation attack detection (PAD) mechanisms do not cover digital injection attacks using AI-generated deepfakes," Khan adds. Presentation attacks remain the most common attack vector, but injection attacks, which 'inject' artificial deepfake videos directly into the biometric verification stream, surged by 200% in 2023, per Gartner's research. The analyst firm advises that preventing such attacks will require a combination of PAD, injection attack detection (IAD) and image inspection to strengthen the reliability of face biometrics.
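The layered approach Gartner describes can be illustrated with a minimal sketch. Everything here is hypothetical: the detector names, score ranges and threshold are illustrative assumptions, not any vendor's actual API, but the structure shows why the layers are combined conjunctively rather than averaged, so an attacker must defeat PAD, IAD and image inspection simultaneously.

```python
from dataclasses import dataclass

@dataclass
class FrameChecks:
    """Per-frame scores from three independent detectors (all hypothetical).

    Each score is assumed to be in [0, 1], where higher means more suspicious.
    """
    pad_score: float    # presentation attack detection: screen replays, printed photos
    iad_score: float    # injection attack detection: video fed past the camera
    image_score: float  # image inspection: artefacts typical of generated imagery

def verify_face(checks: FrameChecks, threshold: float = 0.5) -> bool:
    """Accept the verification only if every layer independently passes.

    Requiring all three checks to pass (rather than averaging them) means
    a deepfake that evades one detector is still caught by the others.
    """
    return all(score < threshold
               for score in (checks.pad_score, checks.iad_score, checks.image_score))
```

For example, a frame that looks live to the PAD layer but is flagged by injection attack detection would still be rejected.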
To counter AI-generated deepfakes beyond face biometrics, Gartner recommends that chief information security officers (CISOs) and risk management leaders choose vendors that can demonstrate capabilities and a roadmap that stay ahead of current standards and keep these emerging threats in check. Khan suggests, "Organisations should start defining a minimum baseline of controls by working with vendors that have specifically invested in mitigating the latest deepfake-based threats using IAD coupled with image inspection."
Once these strategies and baselines are set, CISOs and risk management leaders should incorporate additional risk and recognition signals, such as device identification and behavioural analytics, to improve the chances of detecting attacks on their identity verification processes. Leaders responsible for identity and access management are encouraged to choose technologies that can prove genuine human presence, and to implement additional measures to prevent account takeover, in order to mitigate the risks of AI-driven deepfake attacks.
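One way to picture how auxiliary signals augment the biometric result is a simple risk-fusion sketch. The weights and thresholds below are illustrative assumptions only; real deployments would tune them from data, but the shape shows how a weak device or behaviour signal can push a borderline verification into step-up authentication or rejection.

```python
def risk_score(face_match: float,
               device_known: bool,
               behaviour_anomaly: float) -> float:
    """Fuse the biometric result with auxiliary signals into one risk score.

    face_match        - similarity from the face-biometric engine (0..1, higher = better)
    device_known      - whether this device fingerprint has been seen for the account
    behaviour_anomaly - anomaly score from behavioural analytics (0..1, higher = riskier)

    All weights are illustrative, not calibrated values.
    """
    risk = 1.0 - face_match           # a weak biometric match raises risk
    if not device_known:
        risk += 0.3                   # unrecognised device adds a fixed penalty
    risk += 0.5 * behaviour_anomaly   # behavioural anomalies weighted in
    return min(risk, 1.0)

def decide(risk: float) -> str:
    """Map the fused risk score to an action (illustrative thresholds)."""
    if risk < 0.3:
        return "accept"
    if risk < 0.7:
        return "step-up"              # e.g. require an additional factor
    return "reject"
```

A strong face match from a known device is accepted outright, while the same match from a new device with anomalous behaviour is escalated or refused.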
This broader landscape of AI-centric cyber threats, and strategies for addressing them, is the focus of an upcoming Gartner report, "Predicts 2024: AI & Cybersecurity — Turning Disruption into an Opportunity". The topics will also be discussed at the Gartner Security & Risk Management Summit, taking place on 18-19 March in Sydney, where senior Gartner analysts will present the latest research and advice for security and risk management leaders.