In 2023, the Federal Trade Commission received more than 31,000 reports of employment-related identity theft, an 18% increase over the previous year. Many of these cases involved stolen identities used to secure jobs, and most would have passed a standard background check.
But the problem has evolved. Candidates are now using AI tools to create fake faces and voices, and some run deepfake software during live interviews. In one widely shared incident, an interviewer asked a remote candidate to move his hand in front of his face, the candidate refused, and the glitching video exposed a filter meant to hide his real identity.
Unfortunately, this is not rare. A 2025 survey found that 17% of hiring managers have encountered deepfake interference during interviews, and the problem is expected to grow: Gartner predicts that by 2028, one in four job candidates could be AI-generated, complete with fake resumes and synthetic voices.
Deepfakes and AI Deception Are Already Here
The idea of a fake person interviewing for a job used to sound like science fiction. Today, it’s a weekly reality for many recruiters. In 2025, job candidates are using tools that can generate not just faces, but full digital identities, complete with AI-simulated voices and backgrounds that mimic real environments.
A recent wave of incidents has made the problem more visible.
In one documented case, a tech company cofounder was interviewing a candidate remotely when the person’s face glitched mid-call. The interviewer asked a simple question: “Can you move your hand in front of your face?” The candidate refused. That triggered suspicion, and the video feed briefly distorted, exposing the use of real-time deepfake software.
The implications are serious. Executives are now reporting that applicants are applying for roles with completely fabricated identities, including fake work histories and synthetic visuals. In high-risk industries like finance, defense, and healthcare, these tactics could create compliance violations or even insider threats.
The technology is advancing quickly. Candidates are now using AI not just to appear credible, but to actively avoid detection. Some recruiters are halting interviews altogether when a candidate cannot verify their physical presence. Others are revising their protocols to include pre-screening video identity checks that incorporate live biometric verification.
Deepfake impersonation isn’t a future threat. It’s already part of today’s hiring process.
Why Background Checks Alone Can’t Keep Up
Background checks have long been the foundation of hiring risk management. They confirm employment history, education, and criminal records. Manual identity verification, which often involves scanning a driver’s license or passport, has also been standard. But in 2025, this identity verification process stops short of verifying the person behind the documents.
These checks rely on static data. If someone uses a stolen ID or a synthetic profile built from fake documents, that information may appear clean. Employers might not detect anything wrong until much later. And with remote hiring becoming more common, the risk is even greater.
AI now plays a role on both sides. Candidates use it to create more believable digital personas, while background screening systems work to uncover them. But most checks still stop at the documents.
Industry leaders now recommend continuous and holistic screening, which starts with real-time identity verification and continues throughout employment. This includes not only checking credentials but also confirming who is actually behind them.
A resume may be accurate. A clean background report may come back. But without confirming the person in front of the screen, employers are missing the most important piece.
The New Screening Standard: Confirming the Person, Not Just the Paper
Employers are rethinking identity verification. Traditional checks focus on documents, but in 2025, that’s no longer enough. Verifying who a candidate is, not just what they’ve submitted, is becoming a basic requirement.
The most effective tools today combine ID document authentication with live biometric verification. How does it work? A government-issued photo ID is compared to a video selfie taken by the user, using liveness detection to confirm the person is physically there.
Speed is another motivator: employers often can’t afford to delay offers while they wait on manual reviews. This is where AI supports faster decisions, automatically matching faces, scanning documents, and flagging problems in real time. Systems like these reduce the risk of identity theft or fraud and improve hiring efficiency.
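The matching step described above can be sketched in a few lines. This is a simplified illustration, not any vendor's actual implementation: the threshold value is hypothetical, and in a real system the embeddings would come from a trained face-recognition model rather than toy lists.

```python
from dataclasses import dataclass

# Hypothetical threshold; real systems tune this against
# false-accept and false-reject rates on labeled data.
MATCH_THRESHOLD = 0.80

@dataclass
class CheckResult:
    face_match: bool
    liveness_passed: bool
    document_valid: bool

    @property
    def verified(self) -> bool:
        # All three checks must pass for a "verified" outcome.
        return self.face_match and self.liveness_passed and self.document_valid

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Compare two face embeddings; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def verify_candidate(id_embedding, selfie_embedding,
                     liveness_passed: bool, document_valid: bool) -> CheckResult:
    """Combine the face match with liveness and document checks."""
    similarity = cosine_similarity(id_embedding, selfie_embedding)
    return CheckResult(
        face_match=similarity >= MATCH_THRESHOLD,
        liveness_passed=liveness_passed,
        document_valid=document_valid,
    )

# Toy embeddings standing in for model output:
result = verify_candidate([0.9, 0.1, 0.4], [0.88, 0.12, 0.41], True, True)
print(result.verified)  # True: faces match and both other checks passed
```

The key design point is that the three signals are independent: a strong face match means little if liveness fails, which is exactly why document-only checks fall short.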
As identity fraud increases, document checks alone fall short. A candidate may have valid paperwork, but that doesn’t prove they’re who they claim to be. The new standard is confirming the person.
A Measured Solution for Modern Risk
Identity fraud is no longer just a background check problem. It’s a front-end issue. Employers need to know who they’re speaking to before making a hiring decision.
KRESS has launched Global ID to address this.
Global ID is an AI-powered identity verification system built on two inputs: a government-issued ID and a live biometric check. The candidate submits an ID and records a selfie video; the system compares the two and uses liveness detection to confirm the person is physically present.
If everything lines up, the result is marked verified. If there’s an issue, like poor image quality or mismatched faces, the system flags it. Employers can act on the result immediately. The process takes minutes, with a two-hour max, and is available seven days a week.
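The verify-or-flag handling described above might be modeled as follows. The status names and reason strings here are illustrative assumptions for the sketch, not Global ID's actual output format.

```python
def screen_result(face_match: bool, liveness_passed: bool,
                  image_quality_ok: bool) -> tuple[str, list[str]]:
    """Map individual check outcomes to a single status an employer can act on."""
    reasons = []
    if not image_quality_ok:
        reasons.append("poor image quality - request a retake")
    if not face_match:
        reasons.append("selfie does not match ID photo")
    if not liveness_passed:
        reasons.append("liveness check failed - possible replay or deepfake")
    # No reasons collected means every check passed.
    return ("verified", []) if not reasons else ("flagged", reasons)

status, reasons = screen_result(face_match=True, liveness_passed=True,
                                image_quality_ok=True)
print(status)  # "verified"
```

Returning reason codes rather than a bare pass/fail matters in practice: a quality flag can prompt a retake, while a liveness flag warrants escalation.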
Tools like this are becoming essential.
We've found many employers are moving toward real-time digital identity verification to reduce hiring delays, prevent fraud, and streamline the overall onboarding process. Without it, candidates using fake documents or AI filters may still pass through standard screening.
Final Word: Stay Ahead Without Falling Behind
Fraud tactics have changed. Some candidates are using AI to build entire digital identities. Others are passing traditional background checks using fake documents. And in remote hiring, the risk is even higher.
Verifying submitted information is no longer enough. Employers need to confirm the person. That means adding real-time identity checks at the start of the hiring process. It’s not about doing more. It’s about closing a critical gap.
Tools like Global ID are part of that shift. They help employers confirm who they’re hiring without slowing down the process. And they support compliance with standards like ISO 27001, which require stronger data security and identity controls.
Screening is not just about checking the past. It’s about verifying who is in the room, before they’re in the system.
Add Global ID to your screening toolkit
The risks are real, and they’re already here. AI-generated candidates, deepfake interviews, and stolen credentials are testing the limits of old systems. If you're hiring remotely, quickly, or at scale, it's time to take identity seriously.
Learn how Global ID helps you verify identity before day one.