
When Does an AI Hiring Score Become a Consumer Report?


If an AI platform gathers outside data, builds a candidate profile, and assigns a ranking that filters applicants out, would that qualify as a consumer report?

A proposed class action, Kistler v. Eightfold AI, filed on January 20, 2026, tests that exact question.



What the Lawsuit Alleges & Why It Matters

The complaint alleges that Eightfold AI:

  • Gathered outside data from sources like LinkedIn, GitHub, Stack Overflow, and public databases
  • Built candidate profiles
  • Assigned candidates match scores on a 0-to-5 scale predicting their “likelihood of success”
  • Generated rankings that were used to filter candidates out

The complaint describes these rankings as functioning like consumer reports and states that the company skipped the disclosure, authorization, and adverse action steps required by the FCRA.

This raises the central issue: if a candidate is unaware of what information was used, or whether it was accurate, how can they correct it?


Exposure

If a third party gathers information about a candidate, compiles that information into a score or ranking, and you use that output to make hiring decisions, the line between "recruiting software" and "consumer report" blurs.

It doesn’t matter what the product is called; it matters what the product does. Even if a human reviews a shortlist later, the candidates filtered out by the algorithm never reach that review.

Under the FCRA, consumers have the right to dispute inaccurate information in consumer reports. Broadly, a “consumer report” is information about a consumer collected by a third party and used for employment decisions. FCRA violations carry statutory damages of $100 to $1,000 per violation, and state law can add more: the Kistler plaintiffs also seek damages under California's Investigative Consumer Reporting Agencies Act (ICRAA), which allows statutory damages of up to $10,000 per violation. When thousands of applicants go through the same workflow, exposure adds up fast.
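To make "exposure adds up fast" concrete, here is a back-of-the-envelope calculation. The statutory figures are the ones cited above; the applicant count is a purely hypothetical assumption, not a number from the lawsuit:

```python
# Illustrative exposure math only; not legal advice.
# Statutory ranges are from the article; the class size is hypothetical.
FCRA_MIN, FCRA_MAX = 100, 1_000   # FCRA statutory damages per violation ($)
ICRAA_MAX = 10_000                # California ICRAA cap per violation ($)

applicants = 5_000                # hypothetical number of affected applicants

fcra_low = applicants * FCRA_MIN
fcra_high = applicants * FCRA_MAX
icraa_high = applicants * ICRAA_MAX

print(f"FCRA exposure:  ${fcra_low:,} to ${fcra_high:,}")
print(f"ICRAA ceiling:  ${icraa_high:,}")
```

At 5,000 applicants, FCRA exposure alone spans $500,000 to $5 million, before any state-law claims are added on top.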

AI tools can assist with sourcing and engagement, but the operational question is narrower: at what point does the tool influence an employment decision?


The Defensibility Test

If you use AI ranking tools, run this operational test:

  1. Does the tool pull data from third-party sources outside your applicant tracking system and the candidate's own application?
  2. Does it create a score, rank, or recommendation that is used to screen candidates out?
  3. Do you provide applicants with a disclosure and collect written authorization before that screening happens?
  4. If someone is rejected based on the output, do you send them:
    1. Pre-adverse action and adverse action notices?
    2. A copy of the report?
    3. A summary of their rights?
  5. If a candidate questions the score, rank, or recommendation, is there:
    1. A defined dispute intake process?
    2. A documented investigation?
    3. A recorded resolution?

If you answered "no" or "I'm not sure" to any of these, your workflow will be harder to defend.
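For teams that want to track this internally, the test above can be sketched as a simple checklist evaluation. This is an illustrative Python sketch, not legal advice; the question keys and answer values are hypothetical names invented for this example:

```python
# Hypothetical sketch of the five-question defensibility test.
# Questions 1-2 describe what the tool does; questions 3-5 are the
# safeguards, where any answer other than "yes" is a gap to address.
QUESTIONS = [
    "pulls_third_party_data",        # Q1: data beyond the ATS/application?
    "score_used_to_screen_out",      # Q2: score/rank filters candidates out?
    "disclosure_and_authorization",  # Q3: disclosure + written authorization?
    "adverse_action_notices",        # Q4: notices, report copy, rights summary?
    "dispute_process",               # Q5: intake, investigation, resolution?
]

def defensibility_gaps(answers: dict) -> list:
    """Return the safeguard questions (Q3-Q5) not answered 'yes'."""
    safeguards = QUESTIONS[2:]
    return [q for q in safeguards if answers.get(q, "unsure") != "yes"]

answers = {  # example answers for a hypothetical workflow
    "pulls_third_party_data": "yes",
    "score_used_to_screen_out": "yes",
    "disclosure_and_authorization": "no",
    "adverse_action_notices": "unsure",
    "dispute_process": "yes",
}
print(defensibility_gaps(answers))
# → ['disclosure_and_authorization', 'adverse_action_notices']
```

An empty list means every safeguard is in place; anything returned is a gap that makes the workflow harder to defend.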


Where KRESS Fits

When you order background checks through KRESS:

  • Disclosure delivery is documented
  • Authorization is timestamped and retained
  • Searches are verified and quality-checked
  • Data sources are recorded
  • Results are logged
  • Status changes are tracked
  • Exception decisions are documented
  • Pre-adverse and adverse action steps are recorded

There is always an audit trail. No assumptions. No unclear procedures.


AI Scoring Tools vs. CRAs like KRESS

What you get back
  • AI scoring/ranking tools: A score, match rating, rank, shortlist, or recommendation.
  • CRAs like KRESS: A documented screening result with searches run, source references, results returned, timestamps, and status history.

Where the information comes from
  • AI scoring/ranking tools: Varies by vendor and setup.
  • CRAs like KRESS: Primary sources (courts, law enforcement, state agencies, employers, schools, etc.).

Can you explain it later?
  • AI scoring/ranking tools: Not always. It can be hard to list every input and why a score changed.
  • CRAs like KRESS: Yes. Sources and searches are documented: what was checked, when, and what came back.

Did the candidate authorize it?
  • AI scoring/ranking tools: Varies.
  • CRAs like KRESS: Yes. Authorization is captured and timestamped before screening, and records are retained.

Where the “screen out” happens
  • AI scoring/ranking tools: Can be blurry: scores influence who gets seen, but may be treated as “just a tool.”
  • CRAs like KRESS: Clear decision checkpoints: what is run, what is returned, what happens next.

What you can show in an audit
  • AI scoring/ranking tools: Often limited to platform logs and score history; not always structured for screening scrutiny.
  • CRAs like KRESS: An audit-ready trail: authorization, searches, results, status changes, and exception handling.

If someone is rejected, what triggers the notices
  • AI scoring/ranking tools: Often manual. HR has to recognize the trigger and run the process outside the tool.
  • CRAs like KRESS: Pre-adverse and adverse action steps are automated as part of the screening process when applicable.

If a candidate disputes it, what happens
  • AI scoring/ranking tools: The dispute path is often unclear, especially for inferred traits or outside signals.
  • CRAs like KRESS: A defined dispute path: intake, investigation, corrections, and documented resolution.

Who takes the candidate calls
  • AI scoring/ranking tools: Often HR and recruiting ops, when candidates ask “why” or “what is missing.”
  • CRAs like KRESS: KRESS supports applicants, so HR is not the call center.

When something looks off, who owns it
  • AI scoring/ranking tools: Shared and sometimes unclear; escalation can feel like ticket roulette.
  • CRAs like KRESS: Named ownership and an escalation path. Issues are flagged, routed, and tracked to resolution.

What disappears from HR’s plate
  • AI scoring/ranking tools: Some sourcing work, though transparency requests and disputes can create extra work.
  • CRAs like KRESS: Fewer follow-ups, fewer re-requests, and less status chasing, because steps are owned and logged.
