
Know Your Criminal: Testing Out Biometric Onboarding
How biometric identity checks may not be enough to stop fraud
As biometric technology becomes a standard part of identity verification for banks and fintechs, we’re often told that it’s the future of fraud prevention. But what if those systems aren’t as secure as we think? A groundbreaking research initiative led by Dr Nicola Harding, in partnership with We Fight Fraud, Lancaster University, and biometric verification firm APLYiD, has revealed just how easily today’s Know Your Customer (KYC) processes can be bypassed by criminals. The whitepaper, Rapid Response to Fraud and Financial Crime, exposes vulnerabilities in digital onboarding, particularly for challenger banks, and highlights the urgent need for layered, adaptive approaches to identity verification.
What Was the Research About?
The study focused on testing the robustness of digital identity verification, specifically the biometric checks used in app-based account openings. With the rise of mobile-only “challenger” banks, customers are often onboarded without ever meeting a staff member or visiting a branch. Instead, they submit photos, videos and personal data via an app, often verified automatically by AI-powered systems.
The research aimed to explore:
How fraudsters bypass these digital KYC checks
What kinds of vulnerabilities exist in current onboarding processes
How these gaps could be exploited for financial crime, such as money laundering or terrorist financing
This wasn’t theoretical work. It was a live, 72-hour field test using real-world tactics that criminals employ—including social engineering and deepfake technology.
How the Research Was Done
The project was a “Rapid Response” model, designed to quickly identify emerging fraud threats and report findings within a month. The testing team included academics, ethical hackers, former law enforcement officers, and individuals with lived experience in financial crime.
Over a concentrated three-day period, the team attempted to open UK bank accounts using various techniques, including:
Fake or modified ID documents
Deepfake videos
Socially engineered data from members of the public
API injection attacks to manipulate app behaviour
Importantly, all testing was conducted ethically and within legal boundaries, without exploiting real users or systems.
What Did the Research Find?
The findings were striking. The team successfully opened three bank accounts using compromised data or biometrics, and uncovered four key attack methods:
Legitimate ID – Wrong Biometrics: Using real ID documents with someone else’s face.
Real Biometrics – Fake ID: Applying with legitimate facial data but falsified documentation.
Legitimate ID – Deepfake Biometrics: Creating synthetic video content to impersonate real people.
API Injection: Intercepting app processes to feed in pre-recorded or deepfake content.
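The API-injection weakness in the last item comes down to a verification endpoint trusting whatever video payload the client submits. The toy Python sketch below illustrates the idea only: every function and name here is hypothetical, not any vendor’s real API. A naive check accepts a pre-recorded deepfake, while binding the submission to a fresh server-issued nonce defeats simple replay.

```python
import secrets

# Illustrative sketch, not a real verification API. The point: a server
# that accepts any client-supplied video is replayable; a server that
# issues a fresh challenge per session is not.

def issue_challenge() -> str:
    """Server issues a random nonce the client must embed in its live capture."""
    return secrets.token_hex(16)

def naive_verify(submitted_video: bytes) -> bool:
    # Accepts any plausible payload -- an injected pre-recorded deepfake passes.
    return len(submitted_video) > 0

def challenge_verify(submitted_video: bytes, embedded_nonce: str, issued_nonce: str) -> bool:
    # A clip recorded before the session cannot contain a nonce issued after it.
    return len(submitted_video) > 0 and embedded_nonce == issued_nonce

prerecorded = b"deepfake-frames"
print(naive_verify(prerecorded))                              # True: injection succeeds
nonce = issue_challenge()
print(challenge_verify(prerecorded, "stale-or-missing", nonce))  # False: replay rejected
```

Real liveness systems use richer challenge–response mechanisms (randomised prompts, cryptographic attestation of the capture device), but the replay-resistance principle is the same.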
They also showed just how easily personal data could be harvested. In one example, researchers posed as a TV crew and quickly convinced members of the public, who believed they were auditioning for a show, to hand over biometric footage and passport copies.
The testing team noted a disturbing level of public willingness to share sensitive data without questioning how it would be used.
What Are the Implications?
These results are a wake-up call for the digital banking and fintech sector. Despite advances in AI and biometrics, many KYC systems are predictable, easy to manipulate, and inconsistently applied.
The report suggests several key takeaways for anyone working in fraud prevention or identity security:
Fraud Prevention Must Be Built In: Security can’t be an afterthought. Fraud controls must be designed into systems from the start, not layered on later.
Biometrics Alone Aren’t Enough: A layered approach is essential. Randomised KYC steps, manual review points, and additional checks can increase friction for fraudsters while maintaining ease for genuine users.
Public Education is Critical: The public’s lack of awareness about personal data misuse is a major vulnerability. Clearer communication and education on data protection are needed.
Think Like a Criminal: Effective fraud prevention requires an understanding of how real criminals operate, not just a checklist of compliance protocols.
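The “randomised KYC steps” recommendation above can be sketched in a few lines of Python. This is a minimal illustration under assumed names (the check identifiers are invented, not from the report): every applicant gets a fixed baseline, plus extra checks drawn at random, so a fraudster cannot pre-record a deepfake for a known, predictable sequence.

```python
import random

# Hypothetical check names for illustration; a real pipeline would invoke
# document, biometric, and database verification services.
BASELINE_CHECKS = ["document_scan", "facial_match"]
RANDOM_POOL = ["liveness_blink", "liveness_head_turn", "voice_phrase", "manual_review"]

def build_kyc_flow(rng: random.Random, extra_steps: int = 2) -> list[str]:
    """Return the ordered checks for one onboarding session.

    The baseline is always applied; the extra steps are sampled at random,
    so the full sequence differs between sessions and cannot be scripted
    in advance by an attacker replaying pre-recorded content.
    """
    extras = rng.sample(RANDOM_POOL, k=extra_steps)
    return BASELINE_CHECKS + extras

flow = build_kyc_flow(random.Random())
print(flow)
```

The design choice mirrors the report’s framing: randomisation adds friction for fraudsters, who must respond live to an unpredictable sequence, while genuine users only see one or two extra prompts.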
Why It Matters
The findings couldn’t be more timely. With fraud now considered a national security issue in the UK and identity fraud costing the economy billions each year, the stakes are high. Criminals don’t need to be master hackers, just skilled manipulators of broken systems and human behaviour.
This whitepaper shows that smart fraud prevention isn’t just about tech. It’s about awareness, design, and collaboration between sectors. At .. for good, we’re proud to spotlight this research as an example of what’s possible when lived experience, academic insight, and technical expertise come together.
If you would like to read the full report, you can download it HERE.