
We wrote about fake candidates a few years ago. At the time, it felt like a fringe concern, something to be aware of, and mostly a curiosity for tech-adjacent companies.

That was then.

Today, the fake candidate problem has become one of the most pressing risks in technical hiring, particularly for engineering roles. The scale has exploded, the tools have become frighteningly convincing, and the motivations range from individual fraud to state-sponsored infiltration. If you’re hiring for technical roles, especially remotely, this is not a risk you can afford to wave off.

At Creative Alignments, we see this firsthand. We’ve taken over searches where candidates already in a client’s pipeline turned out to be fraudulent. We’ve developed a sharp eye for the tells. And we want to share what we know, because good recruiters, even experienced internal ones, are getting fooled.


Why Technical Roles, and Why Now

You likely won’t encounter this problem when hiring a project manager or a sales rep. Fake candidates overwhelmingly target software engineering positions, and especially roles involving AI, machine learning, and cloud infrastructure. The reasons are straightforward:

  • These roles are often remote, removing the natural friction of in-person interactions that would quickly expose fraud.
  • They offer access to sensitive systems including source code, proprietary data, and internal infrastructure.
  • The pay is high, making even a few weeks of employment financially worthwhile at scale.
  • Getting through just a few interview and onboarding sessions can yield an expensive new laptop or other company equipment, which is easy money.

The fraudsters’ motivations fall into two broad categories. Some are purely economic: get hired, collect a paycheck, disappear before anyone catches on. Others are far more concerning: gaining a foothold inside a company to steal trade secrets, intellectual property, or sensitive customer data. In some documented cases, the threat is geopolitical. U.S. authorities have now prosecuted hundreds of cases involving North Korean state-sponsored workers who used stolen American identities to infiltrate tech companies. According to CrowdStrike’s 2025 Threat Hunting Report, the number of companies that unknowingly hired fake workers from this scheme grew 220% in a single year. Mandiant’s chief technology officer stated publicly that virtually every Fortune 500 company has received applications from North Korean IT workers, and most CIOs he’s spoken with have admitted hiring at least one.

This isn’t a theoretical risk. It’s happening right now, including to companies far smaller than the Fortune 500.


What a Fake Candidate Looks Like on Paper

The most common entry point is still the application, a resume submitted to an open role. But fake candidates also show up in proactive sourcing, where fabricated profiles on LinkedIn or job boards look polished and convincing.

What you’re looking for:

No LinkedIn presence, or a suspiciously thin one. A senior engineer with 10 years of experience and no LinkedIn profile is a red flag, as is a profile with no photo (or a small, distant, or AI-generated one), no connections, no endorsements, or an employment history that doesn’t check out. Real professionals leave digital footprints. A profile with 500+ connections but no comments, no endorsements, and no mutual connections at the companies they claim to have worked for is another high-risk signal.

GitHub activity that doesn’t add up. For engineering candidates, look at contribution history over time. A sudden burst of commits in the last 30 days on a five-year-old account often indicates a purchased or borrowed profile. When you get to the interview stage, pick a specific commit from six or more months ago and ask the candidate to walk you through the logic. A fraudster cannot navigate a codebase they didn’t write.
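The “sudden burst” pattern above can be expressed as a simple heuristic. This is a minimal sketch, not a production fraud detector: the 30-day window and 80% concentration threshold are illustrative assumptions, and in practice the commit timestamps would come from a source such as the GitHub API rather than being supplied by hand.

```python
from datetime import datetime, timedelta

def looks_like_purchased_account(commit_dates, account_created, now=None):
    """Flag an old account whose activity is concentrated in a recent burst.

    commit_dates: list of datetime objects for the candidate's commits.
    account_created: datetime the account was opened.
    The 30-day window and 0.8 threshold are illustrative assumptions,
    not established fraud-detection constants.
    """
    now = now or datetime.now()
    account_age_days = (now - account_created).days
    # Only apply this heuristic to accounts old enough to have a history.
    if account_age_days < 365 or not commit_dates:
        return False
    recent = [d for d in commit_dates if (now - d) <= timedelta(days=30)]
    # An account years old whose commits mostly landed in the last month
    # matches the purchased/borrowed-profile pattern described above.
    return len(recent) / len(commit_dates) > 0.8
```

A steady multi-year contribution history passes this check; a five-year-old account whose commits all landed in the last few weeks does not. Either way, the heuristic only prioritizes scrutiny; the interview walkthrough of an old commit remains the decisive test.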

A resume that’s almost too good. AI tools are now routinely used to optimize fake resumes against applicant tracking systems, eliminating grammatical errors and tailoring keywords precisely to job descriptions. A resume that reads like it was written directly in response to your posting, hitting every requirement in order flawlessly, deserves extra scrutiny. A suspiciously seamless career through unusual periods (like the COVID era, when many professionals had gaps or pivots) can also be a tell.

Geographic inconsistencies. When we were vetting a candidate for a client, we noted an address in Santa Fe Springs, California. A simple, casual question (“Is that in northern or southern California?”) revealed the candidate had no idea. (It’s near Los Angeles.) This kind of geographic check costs nothing and can expose a fraud in seconds.

The same face across multiple profiles. Fraudsters often reuse AI-generated photos. A reverse image search on a profile picture sometimes reveals the same face appearing across multiple fake identities. And always verify that the photo on a LinkedIn profile actually matches the person on your video screen.


The Interview: Where It Gets Sophisticated

Resume red flags are relatively easy to learn. The harder challenge is the video interview, where technology has dramatically raised the stakes. There are now three distinct types of fraud your interviewers need to recognize.

Deepfake video (AI-generated faces and digital twin technology). Using widely available, inexpensive software, a fraudster can superimpose a different face onto their live video feed in real time, complete with lip-syncing and voice cloning. Deepfake detection during a live interview is imperfect, but the tells are learnable. (Studies suggest untrained humans catch deepfakes at only slightly better than chance.) Look for blurring, flickering, or pixel distortion around the edges of the face and hair, especially when the candidate moves. Watch for static, unnatural eye contact, or eyes that seem to slide rather than move organically. If something feels visually “off,” it probably is.

To test it directly, ask the candidate to wave their hand in front of their face, or to turn their head fully left and right. Real-time AI face mapping frequently glitches when an object passes in front of the face (fingers may appear to “melt” into the cheek) or when the face moves to a side profile that breaks the tracking. Frame it professionally: “We do a quick security calibration at the start of all our video interviews. Could you wave your hand across your face and turn your head side to side?” Most real candidates won’t blink. A fraudster may suddenly develop technical difficulties.

Proxy interviews (someone else speaking for the candidate). In this scheme, the person on screen is real but someone else (more skilled, often off-camera) is providing the answers, either by speaking for them or feeding answers via earpiece or text. Tells include audio that is perfectly clear while the mouth movements look slightly behind or “mushy,” excessive coughing or hand-covering of the mouth (tactics to mask lip-sync gaps), and the sound of typing or whispered prompting that doesn’t match what the candidate’s hands are doing. A consistent, unnatural pause of three to five seconds before every answer, even simple icebreakers, is a strong signal that someone is waiting on a prompt.

AI-assisted interviews (the candidate is reading real-time AI answers). This is the most common and hardest to detect. Products designed for exactly this purpose listen to interview questions and generate answers fast enough for the candidate to read back naturally. The visual tell is a gaze that keeps drifting slightly above or beyond the camera, while scanning text on a second screen. The conversational tell is a cadence that sounds polished but hollow (perfect industry jargon, clean structure, zero messiness). These candidates fall apart when you interrupt the flow.

How to Break the System in Real Time

If something feels off, the best move is to disrupt the candidate’s rhythm rather than confront them directly. Here’s how our recruiters do it.

Interrupt with a specific tangential question. AI-generated answers and proxy scripts rely on momentum. If a candidate is giving a suspiciously polished, long-winded answer, cut in with something specific and unscripted: “Sorry to jump in… on that project, who was the person who pushed back on the idea the most, and how did you handle them specifically?” A real candidate pivots instantly. A fraud loses their place.

Ask for the messy details. AI handles “what” and “how” reasonably well. It struggles badly with the texture of actual human experience. Try: “Tell me about a day on that project where everything went wrong. What was the very first thing you did?” or “What’s the exact feedback your manager gave you that you disagreed with?” Genuine candidates remember the stress and the specific details. Fraudulent candidates retreat to safe, high-level language.

Force a screen share. Ask the candidate to open a blank document and sketch out a system architecture, or pull up a GitHub repo they mentioned and walk through the logic of a specific function. This is the environment where proxy and AI-assisted fraud is hardest to sustain. Sudden “permissions issues,” “internet lag,” or a camera that stops working the moment you ask to see their screen is a critical red flag.

Don’t confront during the call. If you suspect fraud, don’t tip your hand. Continue the conversation while documenting the specific anomalies you’re observing: timing of pauses, visual artifacts, eye movement patterns. Confirm your suspicions after the call, and escalate through your internal process or legal team as appropriate.


What Good Vetting Actually Looks Like

Catching fake candidates isn’t about one quick fix. It’s about building layers of verification that make fraud progressively harder to sustain.

Set expectations before the interview starts. Let candidates know in advance that interviews are conducted on video with cameras on throughout, and that you perform a brief security calibration at the start of each call. Legitimate candidates appreciate the transparency. Fraudsters sometimes self-select out before you even meet them.

Verify identity before interviews go deep. Requiring government-issued ID, submitted through a secure channel, before a final-stage interview is becoming standard practice in tech hiring. Compare the ID to the live video appearance. This step alone stops a significant portion of fraud.

Use live, unscripted technical assessments. Ask candidates to solve real problems in real time, with their screen shared and camera on. Watching someone think, troubleshoot, and explain their reasoning makes proxy-based schemes extremely difficult to sustain. Take-home assessments are easy to outsource; live problem-solving is not.

Call references directly and don’t just accept the numbers provided. Find the company’s main number independently and route to the manager listed as a reference rather than using the contact information the candidate supplies. Ask references specific questions that would be hard to fake: What project did you work on together? What was a weakness you coached them on?

Require some form of in-person contact. Even a single in-person touchpoint, an office visit or a meeting at a co-working space, is a powerful deterrent. Fraudsters rely entirely on the invisibility of remote hiring. Simply mentioning that you require in-person verification at some stage will cause many fake candidates to withdraw.

Consider identity verification services. A growing category of tools, including Jumio, Socure, and iDenfy, provides automated identity verification that goes beyond a recruiter’s manual review. For high-sensitivity technical roles, these services are increasingly worth the modest cost.

Pay attention to the digital background. Fake candidates almost never present themselves against a natural background. A virtual background is not a problem in and of itself; many legitimate candidates use them. But when it appears alongside other suspicious signals, give it weight.

Trust the discomfort. Experienced recruiters at Creative Alignments consistently describe a specific feeling when something is off: a stiffness to the answers, a too-perfect resume, a reluctance to go deeper on any one topic. That instinct is real and worth following. The cost of an extra verification step is trivial compared to the cost of a bad hire, or worse, a security breach.


Why a Good Recruiting Partner Changes This Equation

Even excellent internal recruiters are struggling with this problem, and not for lack of skill. Spotting fraud is a rapidly evolving specialty that requires pattern recognition built across hundreds of candidate interactions, not dozens. When you’re hiring one or two engineers a year, you don’t see enough to develop calibrated instincts.

We have. Our recruiting team has been building and refining fraud-detection protocols specifically for high-risk technical roles, including the field playbook that informed much of this article, developed by one of our senior recruiters, Courtney Dagayev. When clients have come to us mid-search, with candidates already in process that they weren’t sure about, we’ve been able to identify the issues they couldn’t quite put their finger on.

In the age of AI, the human judgment at the center of great recruiting has never mattered more.