CurriculoATS — AI applicant tracking system

AI-Generated Resumes: How to Detect and Evaluate Them

An AI-generated resume is a job application document written partly or entirely by large language models like ChatGPT, Claude, or Gemini. Between 40% and 80% of job seekers now use AI tools to draft or polish their resumes, and 7 in 10 report getting higher response rates as a result. For employers, this creates a new challenge: how do you evaluate candidates when the writing no longer reflects the person?

How Common Are AI Resumes?

  • 40–80% of job seekers use AI writing tools
  • 7 in 10 got higher response rates with AI help
  • 77% of hiring teams encounter AI resumes

These numbers come from SHRM surveys, Canva hiring reports, and Resume Builder research conducted between 2024 and 2026. The trend is clear and accelerating. AI resume assistance is not a fringe behavior — it is the new default.

The question is no longer whether candidates are using AI. They are. The question is what you do about it.

How People Try to Detect AI Writing

AI Detection Tools

The Approach

Tools like GPTZero, ZeroGPT, and Originality.ai analyze text for statistical patterns that suggest machine generation — perplexity scores, burstiness, and token probability distributions.

The Problem

These tools were built for detecting essays and articles, not resumes. Resumes are naturally formulaic — short bullet points, standardized phrasing, industry jargon. This means they produce high false-positive rates on perfectly human-written resumes, especially from non-native English speakers who tend to use simpler, more predictable sentence structures.

Detection accuracy also drops as AI models improve and as candidates learn to edit AI output. A resume that was 90% AI-generated but lightly edited by a human will often pass detection tools.
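The false-positive mechanism is easy to see in a toy sketch. One proxy detectors use for "burstiness" is the variability of sentence lengths: uniform lengths read as machine-like. Resume bullets are uniform by design, so even fully human writing scores low. Everything below is an illustrative simplification, not any real detector's implementation.

```python
# Toy burstiness proxy: coefficient of variation of sentence lengths.
# Low values look "AI-like" to statistical detectors -- but resume
# bullets are naturally uniform, so humans get flagged too.
from statistics import mean, pstdev

def burstiness(sentences: list[str]) -> float:
    """Std. deviation of sentence lengths (in words), normalized by the mean."""
    lengths = [len(s.split()) for s in sentences]
    return pstdev(lengths) / mean(lengths)

resume_bullets = [
    "Led migration of payment service to AWS",
    "Managed team of 6 engineers across 2 offices",
    "Reduced deployment time by 40% with CI pipeline",
]
essay_sentences = [
    "I started programming when I was twelve.",
    "What began as a hobby of modifying game configuration files slowly "
    "turned into a genuine fascination with how software systems are built.",
    "Today I lead a team.",
]

print(round(burstiness(resume_bullets), 2))   # uniform bullets: low score
print(round(burstiness(essay_sentences), 2))  # varied prose: much higher score
```

A human-written resume and an AI-written one both land in the "low burstiness" bucket, which is exactly why style statistics cannot separate them.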

Pattern Recognition

Common AI Tells

Some recruiters look for manual red flags: overly polished language, generic action verbs (“spearheaded,” “leveraged,” “orchestrated”), suspiciously perfect formatting, and content that reads more like a job posting than a personal history.

The Problem

These patterns also describe well-written human resumes. Candidates who attended writing workshops, hired resume coaches, or simply write well produce the same “tells.” Using these signals as rejection criteria penalizes good writers and benefits bad ones.

Why Detection Is a Losing Strategy

You Are Solving the Wrong Problem

Detection assumes that AI-written resumes are inherently dishonest. But using AI to write a resume is not fundamentally different from hiring a professional resume writer, using a template, or asking a friend to proofread. The candidate’s actual qualifications do not change based on who or what helped them describe those qualifications.

The real concern is not AI writing — it is fabricated content. A candidate who invents achievements, inflates numbers, or claims experience they do not have is a problem regardless of whether they used ChatGPT or a typewriter.

Spending resources on detecting AI writing style distracts from the question that actually matters: can this person do the job?

Signal-Based Scoring: Evaluate Outcomes, Not Prose

Signal-based scoring sidesteps the AI detection problem entirely. Instead of analyzing writing quality or style, it evaluates the substance of what a candidate claims: quantified achievements, scope of responsibility, career trajectory, and role fit.

Whether a candidate wrote “increased quarterly revenue by 35% through restructured sales process” by hand or with ChatGPT’s help, the underlying signal is the same: they claim a 35% revenue increase. The Impact Score evaluates that claim in context — does it align with their role level, company size, and industry norms?

This approach is inherently AI-resume-proof. Better writing does not inflate the score. Worse writing does not deflate it. The scoring evaluates what happened, not how it was described.
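As a rough sketch of the idea (the regex, function name, and output shape here are illustrative assumptions, not CurriculoATS's actual pipeline), stripping a bullet down to its quantified claims makes plain and florid phrasings of the same achievement indistinguishable:

```python
# Hypothetical signal extraction: pull the quantified claims out of a
# resume bullet and discard the prose around them.
import re

METRIC_PATTERN = re.compile(r"(\d+(?:\.\d+)?)\s*(%|percent|[kKmM]\b)?")

def extract_signals(bullet: str) -> dict:
    """Return the quantified figures a bullet claims, ignoring wording."""
    matches = METRIC_PATTERN.findall(bullet)
    return {
        "quantified": bool(matches),
        "figures": [float(value) for value, _unit in matches],
    }

# Identical claim, different prose -- the extracted signal is the same.
plain = "increased quarterly revenue by 35% through restructured sales process"
florid = "spearheaded a transformative initiative driving a 35% uplift in quarterly revenue"

print(extract_signals(plain))   # quantified=True, figures=[35.0]
print(extract_signals(florid))  # same signals as the plain version
```

Because the score is computed from the extracted figures and their context rather than from the sentence itself, polishing the sentence with ChatGPT changes nothing.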

Detection vs Signal-Based Evaluation

Dimension | AI Detection Approach | Signal-Based Scoring
What it evaluates | Writing style and patterns | Achievements and outcomes
False positive risk | High (good writers flagged) | Low (substance-focused)
Affected by AI improvements | Yes (accuracy declines over time) | No (outcomes do not change)
Bias toward native speakers | Yes (non-native flagged more) | No (language-agnostic)
Catches fabrication | No (only detects AI style) | Yes (contextual plausibility)
Candidate fairness | Penalizes AI users | Evaluates everyone equally

What to Actually Do

  • Stop trying to detect AI writing — it is unreliable, getting less reliable over time, and penalizes candidates who write well or are non-native speakers
  • Focus on fabrication instead — look for vague achievements without numbers, impossible timelines, and titles that do not match described responsibilities
  • Use signal-based screening — evaluate measurable outcomes and career trajectory rather than writing quality. CurriculoATS AI screening does this automatically.
  • Verify in interviews — ask candidates to walk through specific achievements with details. Someone who fabricated “grew revenue 35%” cannot explain the sales restructuring behind it.
  • Update your resume policies — if you reject AI-assisted resumes, be prepared to also reject resumes from professional writing services. If that seems unreasonable, reconsider the AI policy too.
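The fabrication checks above can be sketched as simple structural rules. The field names and thresholds below are hypothetical illustrations for a single resume entry, not a real screening API:

```python
# Illustrative fabrication checks on a structured resume entry.
# All keys and rules are invented for this sketch.
import re

def fabrication_flags(entry: dict) -> list[str]:
    """Return human-readable red flags for one work-history entry."""
    flags = []
    # Vague achievements: no bullet contains a single number.
    if not any(re.search(r"\d", b) for b in entry["achievements"]):
        flags.append("no quantified achievements")
    # Impossible scope: claims a team larger than the whole company.
    if entry.get("team_size", 0) > entry.get("company_headcount", float("inf")):
        flags.append("team larger than company")
    return flags

entry = {
    "title": "Engineering Manager",
    "company_headcount": 10,
    "team_size": 50,  # "led a 50-person team at a 10-person startup"
    "achievements": ["drove synergy across the org"],
}
print(fabrication_flags(entry))
# -> ['no quantified achievements', 'team larger than company']
```

Rules like these fire on implausible substance regardless of who, or what, wrote the sentences.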

AI Resume Questions

How common are AI-generated resumes in 2026?

Research from SHRM and multiple hiring surveys indicates that 40–80% of job seekers now use AI tools to help write or polish their resumes. A 2025 Canva survey found 45% of job seekers used AI for resume writing, while other studies put the number higher. 7 in 10 candidates who used AI tools reported higher response rates from employers.

Can AI detection tools reliably identify AI-written resumes?

No. Current AI detection tools like GPTZero and ZeroGPT have significant accuracy limitations. They produce false positives on human-written text, especially from non-native English speakers, and miss AI-generated content that has been lightly edited. Detection accuracy drops further as AI writing models improve.

Should employers reject AI-written resumes?

Blanket rejection of AI-written resumes is not a practical strategy. With 40–80% of candidates using AI assistance, rejecting all AI-assisted resumes would eliminate a large portion of your talent pool — including strong candidates who used AI as a writing tool, not a fabrication tool.

What is signal-based scoring and how does it handle AI resumes?

Signal-based scoring evaluates measurable outcomes and achievements rather than writing quality or keyword density. It looks for quantified results (revenue generated, teams managed, projects delivered), scope of responsibility, and career trajectory. Whether a candidate wrote their resume by hand or used ChatGPT, the underlying achievements remain the same.

How does CurriculoATS evaluate AI-generated resumes?

CurriculoATS uses signal-based impact scoring that focuses on what candidates actually accomplished rather than how they wrote about it. Each candidate receives an Impact Score (0–100) based on quantified achievements, role fit, and career trajectory. This approach is AI-resume-proof because it evaluates outcomes, not prose quality.

What are the red flags for fabricated content vs AI-assisted writing?

The real concern is not AI writing style but fabricated content. Red flags include: vague achievements without specific numbers, job titles that do not match the described responsibilities, impossible timelines (led a 50-person team at a 10-person startup), and skills lists that read like a job posting copy-paste. These red flags exist regardless of whether AI was used.

Raise the standard of hiring.

Screen resumes faster and reduce hiring time with AI-powered candidate screening tools.
Explore CurriculoATS today.