No Fake Applicants. No Applications.

Authenticity is an architecture, not an arms race. Here's how we built it.

May 14, 2026 · 10 min read · Hiring Fraud, Fake Applicants, AI Recruitment

Google has reintroduced mandatory in-person interview rounds for remote roles. That single fact tells you the AI arms race in hiring is being lost by the company best positioned to win it. Behind it sits an industry now drowning in AI-generated CVs, deepfake interviews, fake LinkedIn profiles, and an organised North Korean fraud scheme that has funnelled $2.8 billion out of Western employers in the last two years. The response from most of the market is better detection. Our response is different: there is nothing to detect on describe.me, because there is no application funnel to attack.

The 2026 Authenticity Crisis, in Numbers

$2.8B

Funnelled to North Korea via fake IT workers in two years

59%

Of hiring managers suspect candidates of using AI to misrepresent themselves

1 in 3

Of hiring managers have caught a fake identity or proxy candidate during an interview

From FBI and DOJ announcements to the security press, the picture is now public and growing. North Korean operatives have placed workers, using AI-generated identities and deepfake interview tooling, at hundreds of US firms including Fortune 500 companies. A US facilitator was sentenced to nine years in April for placing operatives at more than 100 employers. The FBI describes the underlying activity as the industrialisation of professional identity manipulation.

The same pattern is showing up on LinkedIn at platform scale. Security researchers and journalists now estimate that a significant share of remote-friendly roles on the platform never existed, while AI-generated profiles, AI-generated CVs, and deepfake video interviews compound to produce a candidate the recruiter has no realistic way of verifying before the hiring decision is made.

The signal no one expected: Google has reintroduced mandatory in-person interview rounds for remote and hybrid roles. The company with the most sophisticated detection tooling in the world is no longer willing to rely on it.

The AI Arms Race Is Architecturally Unwinnable

Look at what is now standard in a recruitment pipeline:

An AI helps a candidate write a CV. A different AI screens it. A third AI conducts a first-round video interview. A fourth AI tries to detect whether the candidate on screen is a deepfake. A fifth AI scores the answers. A sixth AI tries to detect whether those answers were ghost-written in real time. Each side gets better. Each round of detection trains the next generation of fakes. Volume goes up. Signal goes down.

The numbers reflect it. 89% of HR professionals in a recent Robert Half survey reported that AI-generated CVs have increased their workload, with 61% saying they have lengthened time-to-hire. 77% of employers now actively screen for AI content in applications, and 62% reject any CV that pattern-matches as machine-written. The "perfect" application has become an instant red flag, which only encourages the next generation of generators to be deliberately imperfect.

Meanwhile the volume tools at the candidate end are failing on their own terms. A documented LazyApply user sent 5,000 applications and received 20 interviews — a 0.5% response rate. One Sonara user on the top plan landed a single screening interview from 700 automated applications, while 200 manual applications from the same person produced three. The arms race is consuming attention on both sides and producing fewer real conversations every quarter.

This is not a problem better detection solves. It is a problem the funnel itself creates.

The Application Is the Attack Surface

Every form of recruitment fraud being indicted, sued, or breathlessly written about right now depends on one architectural decision: that hiring begins when a candidate submits an application against a job advert. That decision creates a target. Targets attract attackers. The target's defenders build screens. The attackers build screen-busters. The cycle never stops because the target never moves.

describe.me does not have that architecture. There is no application funnel on describe.me. Candidates do not apply. They build a profile describing their experience, their skills, and the role they actually want next, and recruiters then search the platform for the skills and aspirations they are looking for. The recruiter reaches out. The recruiter pays per real contact made. The candidate decides whether to engage.

The whole class of fraud that depends on submitting plausible applications against a job advert has no foothold in that flow. There is no advert to apply to. There is no shortlist to gatecrash. There is no automated screen to spoof.
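
To make the contrast concrete, here is a minimal sketch of that contact-initiated flow in TypeScript. The names and shapes (Profile, ContactRequest, searchProfiles, requestContact) are illustrative assumptions for the sketch, not the actual describe.me schema or API.

```typescript
// Illustrative sketch only: names and shapes are assumptions, not the real describe.me API.

interface Profile {
  id: string;
  headline: string;
  desiredNextRole: string;
  skills: string[];
}

interface SearchCriteria {
  skills: string[];
  desiredNextRole?: string;
}

type ContactDecision = "pending" | "accepted" | "declined";

interface ContactRequest {
  recruiterId: string;
  profileId: string;
  paid: true;                // a contact only exists once the recruiter has paid for it
  decision: ContactDecision; // the candidate decides whether to engage
}

// Recruiters search profiles; candidates never submit anything against a job advert.
function searchProfiles(profiles: Profile[], criteria: SearchCriteria): Profile[] {
  return profiles.filter(
    (p) =>
      criteria.skills.every((s) => p.skills.includes(s)) &&
      (!criteria.desiredNextRole || p.desiredNextRole === criteria.desiredNextRole)
  );
}

// The only way a conversation starts: a paid, per-candidate contact request.
function requestContact(recruiterId: string, profile: Profile, paymentConfirmed: boolean): ContactRequest {
  if (!paymentConfirmed) {
    throw new Error("Contact requires payment: there is no free mass-blast path.");
  }
  return { recruiterId, profileId: profile.id, paid: true, decision: "pending" };
}
```

The point of the sketch is what it leaves out: there is no submitApplication() anywhere, so there is nothing for an auto-apply tool to call.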

Three Architectural Reasons Authenticity Is Built In

1. No applications, no target for spam

On describe.me, candidates do not push themselves at roles. Roles, via recruiters, come to candidates whose profiles match. The auto-apply category, the AI-generated CV category, and the "5,000 applications in a weekend" category are all built to exploit an architecture that does not exist here. You cannot spam-apply to nothing.

This is also why the recruiter-side problem of triaging AI-flood applications disappears. The Smart Matching engine ranks real profiles for real recruiter searches. Every candidate the recruiter sees built that profile themselves, willingly, over time, and can be contacted directly to confirm. There is no haystack to filter, because there is no haystack.

2. Aspiration-led profiles can't be reverse-engineered

A North Korean operative or a generative-AI tool can clone a CV against a job advert. That is now a trivially solved problem on the attacker side. What they cannot do is authentically synthesise the rest of what a describe.me profile contains: a candidate's specific career trajectory, their self-assessed skill confidence, their stated aspirations, the role they actually want next, their salary floor, their geographic constraints, their working pattern preferences. None of those signals are present in a job advert, so none of them can be optimised against one.

You can fake a CV that matches the words in a job description. You cannot fake an aspiration you have never articulated. You cannot fake a five-minute self-portrait of a career you have never lived. The describe.me profile is not a document the candidate writes in response to a target. It is a description of the candidate themselves.
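
As a sketch of why those signals are hard to synthesise, imagine the profile as a data structure. The field names below are assumptions made for illustration, not the real describe.me schema, but they show how much of the profile describes the candidate rather than responds to an advert.

```typescript
// Illustrative shape for an aspiration-led profile. Field names are assumptions.

type WorkingPattern = "remote" | "hybrid" | "on-site";

interface SkillConfidence {
  skill: string;
  confidence: 1 | 2 | 3 | 4 | 5; // self-assessed by the candidate, built up over time
}

interface AspirationProfile {
  careerHistory: { role: string; from: string; to?: string }[]; // the trajectory actually lived
  skills: SkillConfidence[];
  desiredNextRole: string;   // the role the candidate actually wants next
  aspirations: string;       // stated in the candidate's own words
  salaryFloor: number;       // the candidate's minimum, not a number lifted from an advert
  locations: string[];       // geographic constraints
  workingPatterns: WorkingPattern[];
}
```

None of these fields exist in a job advert, which is why a generator optimising against an advert has nothing to copy from.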

3. Pay-per-contact breaks the spam economics on both sides

The recruitment fraud ecosystem depends on it being free or near-free to push spam through the funnel. On describe.me, every recruiter contact costs money. That single design choice fundamentally changes recruiter behaviour: recruiters reach out to candidates they have read carefully and believe will engage. The candidates they hear from are not the by-product of a mass-blast. They are people a real person has decided are worth a real conversation.

In the other direction, the same economics rule out the attacker. There is no business model for an operative who has to be paid for, in real money, by a real recruiter, in order to reach a candidate. The cost asymmetry between free-to-attack and free-to-defend is what powered the AI arms race in the first place. We have removed it.
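
A back-of-envelope comparison shows how the economics flip. The per-contact fee below is a placeholder number chosen for illustration, not describe.me pricing.

```typescript
// Illustrative cost asymmetry only; the fee is a placeholder, not real pricing.

const targets = 5_000;             // the "5,000 applications in a weekend" volume
const costPerAutoApplication = 0;  // effectively free to generate and submit at scale
const costPerContact = 10;         // assumed per-contact fee, paid by a real recruiter

const spamFunnelCost = targets * costPerAutoApplication; // 0 — the funnel invites volume
const payPerContactCost = targets * costPerContact;      // 50,000 — mass-blast has no business model

console.log({ spamFunnelCost, payPerContactCost });
```

When every contact costs real money, the only rational strategy is the one described above: read carefully, contact selectively.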

You can fake a CV against a job advert.
You cannot fake an aspiration you have never articulated.

What This Means in Practice

For recruiters

Every candidate surfaced to you on describe.me is a real person who built their profile themselves. The Smart Matching engine ranks them by genuine match quality against your role, with the score visible and explainable. You decide who to contact. The recruiter time spent on identity verification, CV authenticity checks, and deepfake-resistant interview design simply does not need to happen here. That time goes back to actual hiring.
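
As an illustration of what "visible and explainable" can mean in practice, here is a sketch of a per-criterion score breakdown. The criteria and weights are assumptions for the example, not the actual Smart Matching model.

```typescript
// Sketch of an explainable match score: a per-criterion breakdown, not an opaque number.
// Criteria and weights below are illustrative assumptions.

interface CriterionScore {
  criterion: string; // e.g. "skills overlap", "desired next role", "working pattern"
  weight: number;    // how much this criterion contributes to the total
  score: number;     // 0..1 match on this criterion
}

interface MatchExplanation {
  total: number;               // weighted sum, shown to the recruiter
  breakdown: CriterionScore[]; // every component visible to everyone in the loop
}

function explainMatch(breakdown: CriterionScore[]): MatchExplanation {
  const total = breakdown.reduce((sum, c) => sum + c.weight * c.score, 0);
  return { total, breakdown };
}

const example = explainMatch([
  { criterion: "skills overlap", weight: 0.5, score: 0.9 },
  { criterion: "desired next role", weight: 0.3, score: 1.0 },
  { criterion: "working pattern", weight: 0.2, score: 0.5 },
]);
console.log(example.total.toFixed(2)); // "0.85" — and the recruiter can see why
```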

From a legal and reputational standpoint, this also means the architecture is defensible. The recruitment lawsuits of 2026 (Workday's Mobley class action, Eightfold's Kistler complaint, and the wave behind them) are challenging hidden auto-screening of applications against opaque criteria. We do not run auto-screening of applications because we do not have applications.

For candidates

The deepfake fraud crisis affects you whether you are committing fraud or not. When recruiters cannot tell real from fake, every real candidate gets treated as suspect by default. Process slows. Trust collapses. Genuine professionals are forced to compete with synthetic ones for recruiter attention, which is precisely the failure mode that doomjobbing is a verdict on.

On describe.me, your profile is one of a finite set of real profiles. You are not stacked against AI-generated competitors who exist only to game the funnel. The recruiters you hear from have already engaged with the version of you that you describe, not a synthetic version someone else fabricated to compete with you.

For the industry

Detection is a losing game. Every measurable trend confirms it. The companies investing most heavily in hiring-AI defence are also the ones reintroducing in-person interviews and slowing time-to-hire. That is not a technology problem. It is an architecture problem. And an architecture problem cannot be solved by buying more technology. It can only be solved by changing the architecture.

You don't out-run the AI arms race. You step out of it. The entire class of recruitment fraud requires an application funnel to operate inside. Remove the funnel and the fraud has nowhere to live.

What We're Not Saying

We are not claiming describe.me eliminates every conceivable form of fraud forever. Anybody who tells you that about any platform is selling something. What we are claiming is more specific and more durable: the dominant fraud patterns of 2025 and 2026, the ones generating headlines, lawsuits, indictments, and recruiter exhaustion, all require an application funnel to function. The describe.me architecture does not have one, and that removes the surface those patterns attack.

We are also not saying applications themselves are evil. They worked, more or less, in a world where the cost of submitting one was high enough to act as a filter. AI has collapsed that cost to roughly zero. In a world where filling out an application is free, asynchronous, and scalable to thousands per night, the application as a hiring primitive does not survive. We have built around that reality.

The Architecture Is the Argument

When people ask why describe.me does not have a fake applicant problem, the answer is not that we have better detection. The answer is that we have a different architecture. We do not have applicants in the legacy sense at all. We have professionals with profiles, searched by recruiters who pay to reach them, with scores and criteria visible to everyone in the loop. The closer you look at that design, the harder it is to find the place a fake would slot in.

Authenticity isn't a feature. It's not a checkbox. It's not a verification badge bolted onto a broken funnel. It is the architecture, or it isn't there.

A platform with no fake applicants. Because no applications.

Build a profile that describes the real you. Recruiters search the platform and reach out directly. No CV flood. No deepfake interviews to detect. No black holes to apply into.

Create Your Profile · For Recruiters

describe.me: authenticity as architecture.