Face morphing and identity fraud: understanding and combating a growing threat
Face morphing has evolved from an experimental image editing curiosity to a growing threat to digital identity systems, border control, and KYC processes. Let's start with the basics!
What is face morphing?
A face morph blends the facial features of two or more individuals into one convincingly realistic image. It can fool both humans and automated face recognition systems!
Why it’s a real security risk
An individual denied entry to a country could combine their image with that of a legitimate citizen who resembles them. The resulting photo could be used to obtain an identity document, such as a passport, that passes both human border inspection and automated facial recognition.
These attacks exploit the fact that recognition algorithms and trained officers are not foolproof. When both human and automated systems are deceived, fraudulent identities can be registered or used undetected.
How face morphing is used in fraud
The threat extends beyond entry points!
Bhanu Shrestha, a PhD researcher at NTNU Gjøvik, notes that morph attacks increasingly target Know Your Customer (KYC) procedures and remote onboarding in banking and other regulated industries.
- Attackers use morphed images to create synthetic identities that pass identity checks. These false identities enable money laundering, fraud, or other illicit activities.
- Such cases have grown as generative AI and synthetic media tools have advanced. Off-the-shelf diffusion or GAN (Generative Adversarial Network) models now produce high-quality, photo-realistic faces that evade older morph detection techniques.
How morphing techniques have evolved
- Earlier landmark-based algorithms used geometrical markers on facial features like eyes, nose, and mouth to blend pixel data from multiple faces. Early results were often crude.
- To improve realism, print-scan attacks were introduced, where the morphed image is printed, scanned, and re-uploaded. This hides digital artifacts and mimics noise patterns of real photos.
- Today, AI-driven diffusion models have replaced older methods. These models generate complex, high-resolution, near-perfectly aligned images in seconds, often indistinguishable from authentic portraits.
Shrestha notes that realistic fakes can now be generated from simple text prompts using online tools.
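To make the earliest, landmark-based approach concrete, here is a minimal sketch in NumPy. The landmark coordinates and images are toy placeholders (real pipelines detect dozens of landmarks and warp each face to the averaged positions, e.g. via Delaunay triangulation, before blending); this only illustrates the two core steps of averaging landmarks and cross-fading pixel data.

```python
import numpy as np

def morph_landmarks(lm_a, lm_b, alpha=0.5):
    """Average the two subjects' landmark positions (step 1 of a landmark morph)."""
    return alpha * lm_a + (1 - alpha) * lm_b

def blend_pixels(img_a, img_b, alpha=0.5):
    """Cross-fade pixel data once both faces are warped onto shared landmarks (step 2)."""
    blended = alpha * img_a.astype(float) + (1 - alpha) * img_b.astype(float)
    return blended.astype(np.uint8)

# Toy stand-ins: three (x, y) landmarks (eyes + mouth) and flat 100x100 RGB crops.
lm_a = np.array([[30.0, 40.0], [70.0, 40.0], [50.0, 70.0]])  # subject A
lm_b = np.array([[32.0, 42.0], [68.0, 41.0], [50.0, 74.0]])  # subject B
target = morph_landmarks(lm_a, lm_b)

img_a = np.full((100, 100, 3), 80, dtype=np.uint8)
img_b = np.full((100, 100, 3), 160, dtype=np.uint8)
morph = blend_pixels(img_a, img_b)
print(target[0], morph[0, 0])  # → [31. 41.] [120 120 120]
```

The crude artifacts of early morphs came largely from step 2: naive cross-fading leaves ghosting around hairlines and glasses, which is exactly what print-scan attacks were later used to hide.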

Why face morphs fool both humans and AI
Humans rely on holistic perception to recognize faces as unified patterns rather than discrete features. A morphed face, balancing between identities, exploits this cognitive process.
AI systems are similarly vulnerable. Face comparison algorithms look for shared biometric features, and a well-constructed morph retains valid traits from both subjects, causing false matches for each individual.
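The false-match effect can be sketched with embeddings. In the toy example below, two random vectors stand in for the face embeddings of subjects A and B (real systems produce embeddings with a trained network, and the 0.5 threshold is purely illustrative); a morph whose embedding lies between them scores above the acceptance threshold against both identities.

```python
import numpy as np

def cosine_similarity(a, b):
    """Standard cosine similarity used by many face comparison systems."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
emb_a = rng.normal(size=128)      # toy embedding for subject A
emb_b = rng.normal(size=128)      # toy embedding for subject B
emb_morph = (emb_a + emb_b) / 2   # a good morph lands between the two identities

threshold = 0.5  # illustrative decision threshold, not a real system's setting
for name, emb in [("A", emb_a), ("B", emb_b)]:
    score = cosine_similarity(emb_morph, emb)
    print(name, round(score, 3), score > threshold)  # matches BOTH subjects
```

For roughly orthogonal embeddings of equal length, the midpoint scores about 0.71 against each subject, which is why a single morphed passport photo can verify two different travelers.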
Interestingly, Shrestha’s experiments suggest younger people may outperform older observers in spotting AI-generated or synthetic faces, possibly because of their familiarity with digital filters and social media imagery.
Policy and standards are catching up
Regulatory and standardization bodies increasingly recognize the problem.
Under European standards for remote identity proofing, face morph detection became a mandatory requirement this year.
Countries relying on user-supplied document photos remain especially at risk. Systems that perform live image capture during enrollment (as Norway does for passports) face significantly lower exposure to morphing attacks.
Human cognition and training: improving detection
One recurring theme in Shrestha’s research is the potential of human training. Experiments show that individuals with face comparison training detect morphs significantly better than untrained observers. Combining judgments from several trained evaluators further improves reliability.
However, as AI synthesis improves, training data and educational materials must evolve to stay effective.
The explainability challenge in AI detection
- Explainability is essential in fraud detection systems. When AI flags an image as fraudulent, regulators and legal authorities must understand why.
- AI systems that show which facial areas or image features triggered the decision provide transparency and trust.
- Explainable models are crucial for operational performance, scalability, and legal defensibility in regulated sectors like finance and public security.
Balancing policy, technology, and human awareness
Technology advances continuously, offering opportunity and risk. Commercial innovation in visual AI is rapid, and fraudsters adapt those tools for malicious purposes.
For policymakers and biometric developers, the challenge is to stay ahead. Stronger regulation, better human training, and resilient biometric pipelines, including presentation attack detection and morph analysis, are essential!
Moving forward
Face morphing threatens digital identity systems, KYC processes, and border management. Combining human expertise, cognitive insights, and advanced AI countermeasures is the way forward.
At Mobai, we design biometric verification solutions integrating secure identity proofing and anti-spoofing technology, including tools to detect synthetic images, deepfakes, and morphing attacks. By combining explainable AI models with liveness and image integrity checks, these solutions help organizations maintain trust and security throughout the digital identity lifecycle.
If your organization is exploring ways to strengthen biometric verification against face morphing and other presentation attacks, contact us for a demo or consultation. Our experts can help you integrate secure, compliant, and resilient authentication workflows built for the next generation of digital threats.
In a recent episode of Face Matters, Brage Strand from Mobai spoke with Bhanu Shrestha, a PhD researcher at NTNU Gjøvik, about how face morphs are created, how they are misused in identity fraud, and how they can be detected and prevented.
This discussion highlights the intersection of human cognition, artificial intelligence (AI), and biometric verification, emphasizing the urgent need for robust defenses against morph-based attacks.

