Hiring AI is no longer an unregulated frontier. New York City Local Law 144 requires independent bias audits for automated employment decision tools. The EU AI Act classifies employment AI as high-risk and demands transparency, human oversight, and auditability. California, Illinois, and Colorado have all introduced related legislation.
Companies using legacy ATS platforms are increasingly exposed. Not because AI in hiring is inherently unsafe, but because most legacy tools weren’t designed to be auditable.
Why keyword matching fails modern compliance
Every legacy ATS (Greenhouse, Lever, Workable, Ashby, Manatal) uses some form of keyword-based matching under the hood, whether Boolean token extraction, TF-IDF scoring, or embedding-based semantic similarity. All three methods share a fundamental problem: the decision is opaque.
When a keyword-based ATS tells you a candidate scored 72 and another scored 58, the reasoning lives in floating-point vectors and token weights. Recruiters can’t explain the difference in plain English. Auditors can only sample it statistically. Candidates can’t be told why they were rejected.
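To make that opacity concrete, here is a minimal sketch of how a keyword-style relevance score is typically produced, assuming scikit-learn and invented job and resume text; it does not reflect any particular vendor's pipeline. The output is a bare number, with nothing behind it that a recruiter could repeat to a candidate.

```python
# Minimal sketch: TF-IDF relevance scoring of resumes against a job description.
# Job text and resumes are invented; real ATS pipelines are more elaborate, but
# the result has the same shape -- a single similarity number per candidate.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job = "Senior backend engineer: Python, Kafka, fraud detection pipelines, team leadership"
resumes = {
    "candidate_a": "Led fraud detection pipeline in Python and Kafka, managed 3 engineers",
    "candidate_b": "Python developer, built internal dashboards, some Kafka exposure",
}

# Fit TF-IDF over the job text plus both resumes, then score each resume by
# cosine similarity to the job -- the whole decision collapses into one float.
matrix = TfidfVectorizer().fit_transform([job] + list(resumes.values()))
for i, name in enumerate(resumes, start=1):
    score = cosine_similarity(matrix[0], matrix[i])[0, 0]
    print(name, round(score * 100))  # a number like 72 or 58, with no stated reasoning
```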
Modern compliance expects explainability. NYC Local Law 144 requires that employers notify candidates and offer an alternative process on request. The EU AI Act calls out transparency requirements for high-risk employment AI. Black-box keyword filters can’t meet these requirements structurally.
The adversarial vulnerability
Keyword matching has a second compliance problem: it’s structurally vulnerable to adversarial resume poisoning. Candidates who know the filter rules inject keywords in hidden white text, repeated bullet points, or fake skill sections to game the score. This biases outcomes toward candidates who have learned the system, not candidates who have done the work.
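A toy illustration of the mechanic, with an invented keyword list and a naive overlap filter standing in for a real ATS rule; the point is that hidden text that survives PDF extraction is indistinguishable from real experience to a keyword counter.

```python
# Toy example of resume keyword poisoning against a naive keyword filter.
# The required-keyword list and scoring rule are invented for illustration.
import re

REQUIRED_KEYWORDS = {"python", "kafka", "kubernetes", "terraform", "fraud"}

def keyword_score(resume_text: str) -> float:
    """Share of required keywords found anywhere in the extracted resume text."""
    tokens = set(re.findall(r"[a-z]+", resume_text.lower()))
    return len(REQUIRED_KEYWORDS & tokens) / len(REQUIRED_KEYWORDS)

honest = "Led fraud detection pipeline in Python, managed 3 engineers for 18 months"
poisoned = honest + " kafka kubernetes terraform"  # e.g. white 1pt text hidden in the PDF

print(keyword_score(honest))    # 0.4 -- only the keywords backed by actual work
print(keyword_score(poisoned))  # 1.0 -- the hidden dump alone moves the candidate to the top
```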
When a protected-class analysis runs on a keyword-poisoned candidate pool, you get misleading results. The system looks fair because everyone has the same opportunity to game it, but the candidates who succeed are the ones who figured out the game. This is a different, subtler unfairness.
How outcome-based AI changes the compliance picture
Outcome-based AI reads resumes for measurable outcomes rather than keyword presence. Curriculo’s model evaluates four categories of signal: revenue impact, team scope, systems shipped, and problems solved.
For every candidate, CurriculoATS produces two artifacts: a 0-100 fit score and a full written reasoning paragraph. The paragraph explicitly references which outcomes matched the job requirements and where the candidate fell short.
Sample: “Candidate scored 84. Strong alignment on ‘systems shipped’ signal: led migration of fraud detection pipeline handling 2M events/sec in production. Partial alignment on ‘team scope’: managed 3 direct reports for 18 months. No clear evidence of revenue impact. Resume lacks specificity on recent role, which limited the score.”
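As a sketch of what those two artifacts can look like when stored together as a single audit record (the field names below are illustrative, not CurriculoATS’s actual schema):

```python
# Illustrative record shape for an outcome-based screening result.
# Field names are assumptions for the example, not CurriculoATS's real API;
# the point is that the score and its written reasoning travel together.
from dataclasses import dataclass, field

@dataclass
class ScreeningResult:
    candidate_id: str
    fit_score: int                 # 0-100 fit score
    reasoning: str                 # plain-English paragraph behind the score
    signals_cited: list[str] = field(default_factory=list)  # e.g. ["systems", "team"]

result = ScreeningResult(
    candidate_id="cand-0142",
    fit_score=84,
    reasoning=(
        "Strong alignment on 'systems shipped': led migration of fraud detection "
        "pipeline handling 2M events/sec in production. Partial alignment on 'team "
        "scope': managed 3 direct reports for 18 months. No clear evidence of revenue impact."
    ),
    signals_cited=["systems", "team"],
)
```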
This architecture satisfies three structural requirements that black-box keyword matching fails:
1. Explainability
Every decision has a written explanation. Recruiters can read it. Auditors can sample it. Candidates can be told why they were scored the way they were. When NYC Local Law 144 requires that candidates be informed about AI screening and offered alternatives, the written reasoning makes that conversation possible.
2. Auditability
A statistical bias audit on an outcome-based system can examine both the scores AND the reasoning. Auditors can read 100 reasoning paragraphs across a protected-class population and verify the AI is citing the same signal categories for each group. Bias that shows up as “unexplained score variance” in a keyword-based audit becomes visible as “reasoning that references different signals for different groups” in an outcome-based audit.
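A sketch of what that reasoning-level sampling could look like in practice; the category keyword lists and the record format are assumptions for illustration, not an auditor’s actual methodology.

```python
# Sketch of a reasoning-level audit: sample reasoning paragraphs, group them by a
# protected characteristic, and tally which signal categories the model cited per group.
# Keyword lists are illustrative only; a real audit would use a vetted methodology.
from collections import Counter, defaultdict

SIGNAL_KEYWORDS = {
    "revenue": ["revenue", "arr", "sales"],
    "team": ["direct reports", "managed", "team"],
    "systems": ["shipped", "production", "pipeline"],
    "problems": ["solved", "reduced", "incident"],
}

def cited_signals(reasoning: str) -> set[str]:
    text = reasoning.lower()
    return {cat for cat, words in SIGNAL_KEYWORDS.items() if any(w in text for w in words)}

def signal_rates_by_group(records):
    """records: iterable of (group_label, reasoning_text) pairs from the audit sample."""
    counts, totals = defaultdict(Counter), Counter()
    for group, reasoning in records:
        totals[group] += 1
        counts[group].update(cited_signals(reasoning))
    # Fraction of each group's reasoning paragraphs that cite each signal category.
    return {g: {cat: counts[g][cat] / totals[g] for cat in SIGNAL_KEYWORDS} for g in totals}

# Large gaps between groups in how often a category is cited are a flag for review.
```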
3. Adversarial resistance
Hidden white-text keyword dumps add zero signal to outcome-based models. Candidates who write specific, accurate descriptions of their real outcomes score well. Candidates who try to game the filter don’t. This reduces the specific kind of unfairness that benefits candidates who learned to game legacy ATS filters.
The specific laws you should know
NYC Local Law 144 (2023)
Applies to any employer using automated employment decision tools (AEDTs) to screen candidates for NYC jobs. Requires:
- Annual bias audits by an independent auditor
- Public disclosure of the audit summary
- Candidate notification that AI is being used (minimum 10 business days before AEDT use)
- Alternative processes for candidates who request them
Enforcement: NYC Department of Consumer and Worker Protection. Fines start at $500 per violation and rise to as much as $1,500 for repeat violations.
EU AI Act (2024)
Applies to any AI system used for hiring decisions affecting EU candidates or employers operating in the EU. Classifies employment AI as high-risk, requiring:
- Risk management systems throughout the lifecycle
- High-quality training data
- Transparency and explainability to users and affected persons
- Human oversight of decisions
- Robustness, accuracy, and cybersecurity
Enforcement: obligations phase in through 2025-2027. Fines reach €35 million or 7% of global annual turnover (whichever is higher) for the most serious violations; breaches of the high-risk obligations carry fines up to €15 million or 3% of turnover.
We don’t claim specific regulatory certifications. Compliance always requires an independent audit in the context of each deployment, renewed annually. What we can say is the architecture is designed to pass an audit rather than fight one.
How CurriculoATS is designed for auditability
- Written reasoning for every score. Every 0-100 fit score comes with a plain-English paragraph explaining it. Creates an audit trail by default.
- Outcome-based signal categories. The model is trained to evaluate four structured categories (revenue, team, systems, problems) rather than arbitrary token weights. Auditors can verify the model is using expected signals.
- Human oversight built in. CurriculoATS augments recruiter decisions, not replaces them. Recruiters see the score and the reasoning, and can override either. The ATS never makes a hiring decision autonomously.
- Candidate notification support. The written reasoning can be shared with candidates who ask why they were scored the way they were.
What you should actually do
If you hire in NYC, California, Illinois, Maryland, Colorado, or for EU candidates, you need an AI-in-hiring compliance plan. Core steps:
- Inventory every AI-enabled hiring tool you use. Most ATS platforms use some form of automated scoring.
- Arrange an independent bias audit through a third-party compliance firm; NYC Local Law 144 requires one annually. A simplified sketch of the impact-ratio math these audits center on follows this list.
- Publish the audit summary where required.
- Notify candidates about AI screening and provide alternative processes when requested.
- Ensure human oversight is built into decisions. Never rely solely on AI scores to reject candidates.
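For orientation, here is a simplified sketch of the selection-rate and impact-ratio arithmetic that NYC Local Law 144 bias audits center on. The numbers are invented, and an actual audit must follow the DCWP rules and be conducted by an independent auditor.

```python
# Simplified impact-ratio sketch for an automated screening tool.
# Counts are invented; real audits follow the DCWP methodology and categories.
selected = {"group_a": 120, "group_b": 45}   # candidates advanced by the tool, per category
total    = {"group_a": 400, "group_b": 200}  # candidates screened, per category

selection_rate = {g: selected[g] / total[g] for g in total}
best_rate = max(selection_rate.values())

# Impact ratio = each category's selection rate divided by the highest category's rate.
impact_ratio = {g: rate / best_rate for g, rate in selection_rate.items()}
print(impact_ratio)  # e.g. {'group_a': 1.0, 'group_b': 0.75}
```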
For new ATS decisions, an outcome-based system with written reasoning is structurally easier to audit than a keyword-based one. CurriculoATS was built for this. The free plan is available with no credit card, so you can evaluate the AI reasoning on your own candidates before making any commitment.