If you use AI to screen, shortlist, or rank candidates, the question is no longer whether your tools work efficiently. It is whether they hold up under scrutiny. Vendors call them neutral. Regulators are starting to ask for proof.
In 2026, using AI in hiring is a compliance decision before it is an operations decision. New laws are in effect in California, New York, and Illinois. A federal court has allowed a class action against an AI hiring platform to move forward. A second lawsuit is testing whether AI-generated candidate scores qualify as consumer reports under the Fair Credit Reporting Act. A federal attempt to preempt state AI laws was stripped from last year's budget reconciliation bill before it became law. The state laws are fully enforceable.
If your AI tool screens out protected candidates at higher rates, the legal risk falls on you. Even if you didn't build the tool. Even if you didn't design the model. Even if you weren't trying to discriminate.
KRESS clients are asking more questions about how these changes affect background checks, screening policies, and legal documentation. This guide covers what has changed, what employers need to do, and where the next round of risk is coming from.

New Laws Make AI Bias a Legal Liability
States are moving faster than federal agencies. California's regulations are in force. Illinois requires written consent for AI video assessments. New York City's law has been on the books since 2023. Colorado's AI Act takes effect this June. The state-by-state compliance landscape is shifting fast, and the rules differ enough that getting one state right does not guarantee compliance in another.
California: Live Since October 2025
California's Civil Rights Council regulations on automated decision systems took effect on October 1, 2025. They are among the most detailed in the country, and they apply to any employer with five or more employees that uses an automated decision system in hiring or employment decisions.
The rules require:
- Meaningful human oversight, with at least one person trained and empowered to override the AI.
- Proactive bias testing, not just outcome reviews after a complaint lands.
- Four-year recordkeeping covering inputs, outputs, criteria, and audit results.
- Reasonable accommodations where an AI system could disadvantage applicants based on protected traits.
If you hire in California and use any automated screening or scoring tool, your record-keeping needs to be airtight. Documented bias testing also strengthens your defense if a discrimination claim is ever brought.
New York City: Local Law 144
NYC Local Law 144 has been in force since 2023. The requirements:
- An independent bias audit before any automated employment decision tool is used, and annually thereafter.
- Notice to candidates at least 10 business days before the tool is used, including what attributes it evaluates and how to request a reasonable accommodation or alternative process.
- Audit summaries and a plain-language explanation of how the tool works, made publicly available.
A December 2025 audit by the State Comptroller found enforcement gaps significant enough that the city's enforcement arm has shifted to proactive investigations in 2026. If you have been treating Local Law 144 as loosely enforced, that window is closing. Penalties run from $500 to $1,500 per day per violation; a single missed audit can produce a $45,000 penalty over 30 days.
Illinois: Written Consent for AI Video Assessments
Illinois rules on AI in hiring require written, informed consent from candidates before using AI-based video assessments. Employers must explain how the data is being used and stored, and audit results must be made available on request. Complaints can lead to penalties or lawsuits, and the Illinois Human Rights Act amendments add civil penalties of $5,000 per violation, with each day of noncompliance treated as a separate offense.
Colorado: Coming June 30, 2026
Colorado's AI Act is the broadest state law on the books. It takes effect on June 30, 2026, after a delay from its original February date, and it covers any high-risk AI used in employment decisions. The law requires impact assessments before deployment and annually after, applicant notifications, an appeal process for adverse decisions, public disclosure of AI systems in use, and prompt reporting of algorithmic discrimination risks to the Attorney General.
If you hire in Colorado, the timeline is short and the requirements are substantial. There is also active legislative work on a repeal-and-replace framework, so the final shape of the law could shift before June 30. Designate someone to track it.
Legal Standards Are Now About Impact
Your hiring AI does not need to be intentionally biased to break the law. If certain applicants are being rejected at higher rates, that result alone can trigger a complaint or a lawsuit.
"We picked a well-reviewed tool" no longer functions as a defense. The legal focus has shifted from how your system works to what it does. Courts and agencies want evidence that your tools perform fairly in practice.
It is more important than ever for your team to keep accurate records, especially when AI plays any role in screening decisions.
For state-specific employment screening compliance beyond AI hiring, KRESS state guides cover California, New York, Illinois, and Colorado in detail. Texas is also covered for employers tracking TRAIGA exposure.
Courts Are Catching Up to AI Discrimination
Two cases are worth knowing about. They use different legal theories. Both expand the legal exposure attached to AI hiring tools.
Mobley v. Workday: Now an Active Class Action
In Mobley v. Workday, applicants claimed an AI screening tool rejected them based on race, age, and disability. The case has moved considerably since the original filing.
In May 2025, a federal court granted preliminary nationwide collective certification under the Age Discrimination in Employment Act, potentially covering applicants aged 40 and over going back to September 2020. Workday itself disclosed that 1.1 billion applications were rejected through its software during the relevant period, so the collective could run into the hundreds of millions. In March 2026, a federal judge allowed the disparate impact age discrimination claims to proceed, rejecting Workday's argument that the ADEA's disparate impact protections apply only to employees and not to applicants.
This is now an active, advancing case with real stakes, not a cautionary footnote. The framework it establishes, if it holds at trial, applies to every employer using a third-party AI screening tool.
Eightfold AI: A Different Theory, a New Risk
In January 2026, two job applicants filed a class action against Eightfold AI on a theory that has nothing to do with disparate impact. The case, brought by Outten & Golden partner and former EEOC chair Jenny Yang, alleges that Eightfold scraped data on more than one billion workers, scored candidates on a hidden zero-to-five scale, and discarded low-ranked applicants before any human reviewed their files, all without the disclosures the Fair Credit Reporting Act requires for consumer reports.
The argument: AI-generated candidate scores qualify as consumer reports under the FCRA, which means candidates have the right to know about them, see them, and dispute them. If that argument holds, every AI hiring tool that produces a quantitative output and influences a hiring decision sits inside FCRA's disclosure regime.
This expands the risk surface beyond discrimination outcomes. It is no longer enough to ask whether your AI tool produces fair results. You also have to ask whether candidates can see what the tool is producing about them. Our note on when an AI hiring score becomes a consumer report covers the FCRA framework in more detail. If you want to check whether your current screening process holds up against the Act, our FCRA compliance walkthrough is a good starting point.
Vendor Risk Is Employer Risk
Workday was not the only defendant in Mobley. The employers who used the AI tool were also named.
If your provider's system leads to discrimination, you share the legal exposure. The practical takeaways:
- Ask for audit records.
- Clarify contract terms on data handling, scoring methodology, and tool design.
- Reserve the right to review or suspend AI tool use.
KRESS stands for human verification in background checks. Make sure your screening vendors take the same care. The flip side of the AI debate, how hiring teams are moving toward digital identity verification, is also worth tracking.

The Federal Preemption Question
Some employers have been holding off on AI compliance work in the belief that federal preemption is coming. As of mid-2026, it is not.
A 10-year moratorium on state AI laws was attached to last year's federal budget reconciliation bill. The Senate stripped it in a 99-1 vote before passage, and the bill became law in July 2025 without the preemption language. State laws are fully enforceable today. Even if a future preemption attempt succeeds, any litigation around it will take years to resolve. Build your compliance program around the strictest state requirements you are subject to. If preemption ever narrows the field, you will already be ahead. If it does not, you will be protected.
Why AI Bias Happens, and Why It's Hard to Fix
AI hiring problems rarely come from bad actors. Even well-intentioned companies often don't see the risks until a lawsuit or audit forces them to.
Biased Data Goes In, Biased Results Come Out
AI systems learn from past hiring data. If older hiring patterns favored certain groups (men in leadership roles, for example), AI may simply repeat those patterns.
There is no quick fix. You need to review both input data and outcomes regularly to spot patterns before complaints arise.
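One widely used outcome review is the four-fifths rule: compare selection rates across groups and flag any group whose rate falls below 80 percent of the highest-selected group's rate. The sketch below shows the basic arithmetic in Python; the file and column names are illustrative assumptions, and a flag is a signal to investigate, not a legal finding on its own.

```python
# Minimal adverse-impact check (four-fifths rule), assuming a hypothetical
# export of screening outcomes with columns "group" and "advanced"
# (1 if the candidate passed the automated screen, 0 if rejected).
import pandas as pd

outcomes = pd.read_csv("screening_outcomes.csv")  # hypothetical export

# Selection rate per group: share of candidates the tool advanced.
rates = outcomes.groupby("group")["advanced"].mean()

# Impact ratio: each group's rate relative to the most-selected group.
impact_ratio = rates / rates.max()

for group, ratio in impact_ratio.items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} [{flag}]")
```

A ratio below 0.8 does not prove discrimination by itself, but it is exactly the kind of pattern regulators expect you to catch and investigate before a complaint does.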
Explainability Is Not Guaranteed
Many systems operate as black boxes. They produce a score or a rank, but you cannot always trace why the decision came out the way it did.
That becomes a problem when records are subpoenaed or regulators ask how your hiring system works. Your team must be able to explain why candidate A was rejected and candidate B was not.
Privacy and Consent Still Matter
AI systems often pull from resumes, assessments, third-party data, and videos. In states like Illinois, that raises three direct questions:
- Was the applicant informed?
- Did they actively agree?
- How long is the data stored?
The Eightfold case adds another dimension. Even when an applicant has consented to the hiring process broadly, they may have a separate right under the FCRA to see the AI-generated score that influenced the decision.
Documentation Breakdowns Are Common
Many HR systems are not built to log how automated decisions are made. That is a problem. You may need to show regulators:
- What input data you used.
- How decisions were calculated.
- Who reviewed the results.
What Employers Must Start Doing Now
Compliance does not wait for the next lawsuit. Your hiring process needs fixes today.
Audit All Automated Decision Tools
Review your AI tools quarterly. That includes anything that ranks, scores, filters, or sorts candidates. Better yet:
- Use outside reviewers trained in EEOC standards.
- Compare outputs across age, sex, race, and disability.
- Track false negatives, that is, qualified candidates rejected by mistake (a simple check is sketched after this list).
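Tracking false negatives is the companion check: among candidates a human later judged qualified, how often did the tool screen them out, and does that rate differ by group? A hedged sketch, assuming the same hypothetical export plus a human_qualified flag:

```python
# Sketch of a false-negative review: among candidates a human later judged
# qualified, how often did the tool reject them, and does that rate differ
# across groups? File and column names are assumptions, not a standard.
import pandas as pd

df = pd.read_csv("screening_outcomes.csv")  # hypothetical export

qualified = df[df["human_qualified"] == 1]

# False-negative rate per group: qualified candidates the tool rejected.
fn_rate = 1 - qualified.groupby("group")["advanced"].mean()

print(fn_rate.sort_values(ascending=False))
```

If one group's qualified candidates are screened out noticeably more often than another's, that is the pattern the audit exists to surface.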
Keep Humans in Final Decisions
You should never let AI make final hiring calls on its own. Add a human decision layer:
- Require a staff member to approve all rejections.
- Record what information they reviewed.
- Keep time-stamped logs for every action.
AI can support hiring. It should never replace sound judgment.
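What that decision layer looks like will depend on your ATS, but the core rule is simple: no rejection is final until a named person signs off, and the sign-off is recorded. A minimal sketch, with field names as illustrative assumptions rather than a required schema:

```python
# Minimal sketch of a human sign-off gate: an AI recommendation cannot
# become a final rejection until a named reviewer approves it, and the
# approval is recorded with a timestamp. Field names are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class RejectionDecision:
    candidate_id: str
    ai_recommendation: str          # e.g. "reject", plus the score if available
    reviewed_materials: list[str]   # what the human actually looked at
    approved_by: str | None = None
    approved_at: datetime | None = None

    def approve(self, reviewer: str) -> None:
        """Record the human approval; only then is the rejection final."""
        self.approved_by = reviewer
        self.approved_at = datetime.now(timezone.utc)

    @property
    def is_final(self) -> bool:
        return self.approved_by is not None
```

The exact fields matter less than the guarantee behind them: when a regulator asks who approved a rejection, when, and what they reviewed, your records answer instead of your memory.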
Strengthen Contracts With AI Vendors
Do not accept a "black box" vendor contract. You need safeguards written into every agreement:
- Require routine bias testing.
- Reserve the right to audit and pause tool use.
- Demand clear reports on how decisions are made.
Avoid blind trust. How to ace your next compliance audit depends on the protections you put in writing.
Log and Store Everything
You must be able to defend your decisions long after an offer is accepted or rejected:
- Save the version of the AI tool used for each hire.
- Log what data went into the tool and how it weighed outcomes.
- Record all human reviews.
Documented human verification reduces compliance risk significantly. AI cannot explain itself in court. Your logs need to.
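One practical way to meet all three points is an append-only log entry written at the moment each screening decision happens, capturing the tool version, the inputs, the output, and the human review. A sketch, with every field and file name an assumption about what your own systems can supply:

```python
# Sketch of an append-only decision log: one JSON line per screening
# decision, capturing tool version, inputs, output, and human review.
# File name and field names are assumptions, not a prescribed format.
import json
from datetime import datetime, timezone

def log_screening_decision(candidate_id: str, tool_version: str,
                           inputs: dict, tool_output: dict,
                           reviewer: str, review_note: str) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "tool_version": tool_version,   # exact version used for this decision
        "inputs": inputs,               # what data went into the tool
        "tool_output": tool_output,     # score, rank, or recommendation
        "reviewed_by": reviewer,        # the human who looked at it
        "review_note": review_note,     # what they considered and decided
    }
    with open("screening_decisions.jsonl", "a") as log_file:
        log_file.write(json.dumps(entry) + "\n")
```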

How KRESS Helps You Stay Compliant With AI in Hiring
We treat background screening and technology compliance as parts of the same hiring decision, because that is how courts and regulators are treating them.
KRESS puts real people into complex decisions:
- No automatic rejections based on any one input.
- Reviewers check every flagged concern.
- Clients receive guidance on notices, consent, and disputes through our automated adverse action workflow.
Background checks are more than criminal records. They are legal decisions that, when handled poorly by AI, become liability triggers.
Frequently Asked Questions
What is AI bias in hiring?
AI hiring bias happens when automated systems screen out candidates unfairly based on age, race, gender, disability, or other legally protected traits.
Who is responsible if AI bias leads to discrimination?
Both the software vendor and the employer can be held responsible. 2026 regulations and court rulings treat AI outcomes just like traditional employment decisions.
What laws apply to AI hiring tools?
Laws in California, New York City (Local Law 144), Illinois (HB 3773), and, from June 30, 2026, Colorado all apply directly. Texas has its own AI governance act with a lighter touch. Federal laws such as Title VII and the ADEA still regulate discrimination, regardless of whether a human or a machine produced the decision.
How can companies reduce their risk?
- Audit bias outcomes quarterly, independently if possible.
- Keep full documentation trails.
- Maintain human decision-makers at every stage.
- Vet vendors carefully.
What is an AEDT?
An AEDT, or automated employment decision tool, is any tool that scores, filters, or ranks candidates automatically. The term comes from NYC Local Law 144, and these tools are subject to audit and disclosure requirements under the new state and local laws.








