Regulators have noticed that applicant tracking systems make decisions that affect people’s lives. In the last three years, two major laws reshaped the landscape. NYC Local Law 144 requires independent bias audits for automated employment decision tools. The EU AI Act classifies employment AI as high-risk and demands transparency and human oversight.
If you hire in New York City, California, Illinois, Maryland, or Colorado, or hire EU candidates, you already have a compliance problem, or you will soon. The question is whether your ATS is architecturally ready for it.
Both NYC Local Law 144 and the EU AI Act expect the AI’s decision to be explainable and auditable. You need to be able to say, for any specific candidate, why the AI scored them the way it did.
Why legacy ATS keeps failing the explainability test
Most legacy ATS platforms, including Greenhouse, Lever, Workable, and Ashby, use some form of keyword-based matching under the hood: Boolean token extraction, TF-IDF scoring, or embedding-based semantic similarity. These methods are computationally straightforward. They’re also hard to audit.
If Greenhouse tells you Candidate A scored 82 and Candidate B scored 47, it cannot tell you why in plain English. The reasoning lives in floating-point vectors and keyword weights. When a candidate asks “why was I screened out,” there’s no human-readable answer. When an auditor asks “show me the decision logic for this population,” you can only sample statistically.
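To make the opacity concrete, here is a minimal sketch of TF-IDF-style matching in Python. This is illustrative only, not any vendor’s actual code: the point is that the output is a bare similarity number with nothing human-readable behind it.

```python
# Minimal sketch of keyword/TF-IDF matching, the general style of
# scoring many legacy ATS platforms use. Illustrative, not vendor code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Senior backend engineer, Kubernetes, fraud detection, Python"
resumes = [
    "Led Kubernetes migration for a fraud detection service in Python",
    "Marketing manager with SEO and content strategy experience",
]

# Fit one vocabulary over the job description and all resumes.
matrix = TfidfVectorizer().fit_transform([job_description] + resumes)

# Each candidate's "fit" is a single cosine-similarity float.
scores = cosine_similarity(matrix[0:1], matrix[1:])[0]
for resume, score in zip(resumes, scores):
    print(f"{score:.2f}  {resume[:40]}")

# The output is one number per candidate. There is no reasoning
# attached: the "why" lives in term weights, not in plain English.
```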
What the laws actually require
NYC Local Law 144 (effective 2023). Applies to any employer using automated employment decision tools to screen candidates for NYC jobs. Requires annual bias audits by an independent auditor, public disclosure of audit results, candidate notification that AI is in use, and alternative processes for candidates who request them. Fines start at $500 per violation.
EU AI Act (2024). Classifies employment AI as high-risk. Requires transparency and explainability, human oversight, and robustness and accuracy standards. Obligations phase in through 2025-2027. Fines reach €35 million or 7% of global annual turnover, whichever is higher, at the Act’s top tier.
California, Illinois, Maryland, and Colorado all have AI-in-hiring legislation either passed or proposed. The direction is consistent: transparency, human oversight, and auditability are becoming the legal floor.
What outcome-based AI changes
CurriculoATS is designed with structural transparency from day one. For every candidate, the AI produces two things: a 0-100 fit score, and a full written reasoning paragraph explaining why the candidate received that score.
A sample reasoning paragraph reads like this:
Candidate scored 84 out of 100. Strong alignment on the “systems shipped” signal — led migration of fraud detection service to Kubernetes handling 2M events/sec in production. Partial alignment on “team scope” — managed 3 direct reports for 18 months. No evidence for “revenue impact.” Resume lacks specificity on recent achievements at Company B, which limited the score.
When a recruiter asks why the score is 84, the paragraph tells them. When an auditor reviews a population of decisions, they can read the reasoning behind every decision in a protected-class group, not just a statistical sample. When a candidate asks for an explanation, the structured reasoning already exists and can be shared.
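In data terms, a decision record like this might take the following shape. The field names here are hypothetical, a sketch of the idea rather than CurriculoATS’s actual schema:

```python
# Hypothetical shape of an auditable decision record: a score plus the
# written reasoning and per-signal evidence behind it. Field names are
# illustrative, not CurriculoATS's actual schema.
from dataclasses import dataclass, field


@dataclass
class SignalFinding:
    signal: str          # e.g. "systems shipped", "team scope"
    alignment: str       # "strong" | "partial" | "none"
    evidence: str        # quote or paraphrase from the resume


@dataclass
class DecisionRecord:
    candidate_id: str
    fit_score: int                      # 0-100
    reasoning: str                      # full written paragraph
    findings: list[SignalFinding] = field(default_factory=list)


record = DecisionRecord(
    candidate_id="cand_123",
    fit_score=84,
    reasoning="Candidate scored 84 out of 100. Strong alignment on ...",
    findings=[
        SignalFinding("systems shipped", "strong",
                      "led migration of fraud detection service to Kubernetes"),
        SignalFinding("revenue impact", "none", ""),
    ],
)
```

Because every decision carries a record like this, an auditor can query the decisions directly instead of reverse-engineering model weights.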
What this means structurally
We’re not claiming CurriculoATS is “certified compliant” with any specific regulation. Compliance requires an independent audit in the context of each specific deployment, and that audit has to be redone annually under NYC Local Law 144. What we can say is that the architecture is designed to pass an audit rather than fight one.
A system that writes reasoning for every decision can be:
- Audited statistically for bias across protected populations (see the sketch after this list)
- Reviewed individually by recruiters for decision quality
- Explained to rejected candidates in plain English
- Corrected when it gets things wrong, because recruiters can see the reasoning and override
A keyword-based system can do none of these things natively.
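As an example of the first point, here is a simplified sketch of the core statistical check an LL144-style bias audit performs on a scoring tool: the scoring rate per demographic category (the share scored above the overall median) and the impact ratio against the highest-rated category. The data is made up, and the four-fifths cutoff is the common adverse-impact rule of thumb, not a threshold the law itself sets.

```python
# Simplified sketch of the statistical check in an LL144-style bias
# audit for a scoring tool. Data is illustrative; the 0.8 cutoff is
# the four-fifths rule of thumb, not a legal threshold.
from statistics import median

# (category, fit_score) pairs; a real audit uses historical data.
decisions = [
    ("group_a", 84), ("group_a", 71), ("group_a", 45),
    ("group_b", 62), ("group_b", 40), ("group_b", 38),
]

cutoff = median(score for _, score in decisions)


def scoring_rate(category: str) -> float:
    """Share of a category's candidates scored above the overall median."""
    scores = [s for c, s in decisions if c == category]
    return sum(s > cutoff for s in scores) / len(scores)


rates = {c: scoring_rate(c) for c in {c for c, _ in decisions}}
best = max(rates.values())

for category, rate in sorted(rates.items()):
    ratio = rate / best
    flag = "  <- below four-fifths rule of thumb" if ratio < 0.8 else ""
    print(f"{category}: scoring rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```

With written reasoning attached to each decision, the same query can also pull the paragraphs behind any category whose ratio looks off, which is where the individual review and candidate explanation follow from.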
Human oversight is still critical
Written AI reasoning doesn’t replace human review. Both NYC Local Law 144 and the EU AI Act emphasize that AI should be one input into hiring, not the decision-maker. CurriculoATS is designed to work the same way: the AI scores and reasons, the recruiter reviews the ranked inbox and makes the call. The AI is a force multiplier for human judgment, not a substitute.
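One way to make that oversight auditable in turn is to log the human’s call alongside the AI’s. A hypothetical sketch, again not CurriculoATS’s actual data model:

```python
# Hypothetical audit-trail entry pairing the AI's recommendation with
# the recruiter's final call, so human oversight is itself auditable.
# Names are illustrative, not CurriculoATS's actual data model.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class ReviewEntry:
    candidate_id: str
    ai_score: int                 # the AI's 0-100 fit score
    ai_reasoning: str             # the written reasoning paragraph
    recruiter_id: str
    final_decision: str           # "advance" | "reject" | "hold"
    override_note: Optional[str]  # filled in when the human disagrees
    reviewed_at: datetime


entry = ReviewEntry(
    candidate_id="cand_123",
    ai_score=47,
    ai_reasoning="Limited evidence for 'systems shipped' ...",
    recruiter_id="rec_9",
    final_decision="advance",
    override_note="Portfolio shows shipped systems the resume omits.",
    reviewed_at=datetime.now(timezone.utc),
)
```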
What you should actually do
If you hire in NYC and use automated screening, you need to comply with Local Law 144 regardless of which ATS you use. The typical process:
- Check whether your ATS uses automated scoring. Almost all of them do.
- Arrange an independent bias audit through a third-party compliance firm.
- Publish the audit summary and notify candidates.
- Offer alternative screening paths for candidates who request them.
If you’re setting up new screening and care about auditability from day one, an outcome-based system with written reasoning is structurally easier to audit than a keyword-based one. That’s the entire case for CurriculoATS from a compliance angle.
Related reading: the AI Resume Screening methodology, the Impact Scoring engine, our guide to fair AI hiring, or the free plan to evaluate it yourself.