If you're hiring in New York, in any state with an AI hiring rule, or with any AI tool in your screening stack, the compliance map looks very different in 2026 than it did when the New York social media law took effect in March 2024.
Two years on, three things have changed:
- The first major class action against an AI hiring platform is now in court. The lawsuit alleges the platform compiled hidden reports on more than a billion workers without the FCRA disclosures the law has required for decades.
- The New York State Comptroller has audited New York City's enforcement of its bias-audit law (Local Law 144) and found it ineffective. The city has agreed to fix it, which means more enforcement is coming.
- Colorado and Illinois have both passed broad AI-in-employment laws, joining New York's existing rules. Colorado's law is currently in court but scheduled to take effect 30 June 2026. Illinois's amendment to the Human Rights Act took effect on 1 January 2026.
This guide brings the original New York social media explainer up to date with all of that, lays out what is and isn't allowed when you screen candidates online, and explains why the AI vendor you choose may now be your single largest hiring-compliance risk.

What the New York social media law says
New York's Social Media Privacy Law applies to private and public sector employers. It covers anyone hiring or supervising someone who lives or works in the state, with no job-role exemption, from entry-level hires to C-suite candidates.
| Prohibited action | What it looks like in practice |
|---|---|
| Asking for social media usernames or passwords | Login requests during interviews, offers, or onboarding |
| Watching a candidate or employee access a private account | Asking them to open Instagram during an interview, or any form of coerced access |
| Requesting they connect with you to view protected profiles | Friend requests, follow requests, or any workaround that gets you past the privacy setting |
| Saving, copying, or redistributing private posts | Screenshotting or sharing content from a non-public account |
| Retaliating against anyone who refuses | Rescinding an offer, demoting, or firing because the person said no |
| Acting on private material you weren't entitled to see | Letting something you saw informally influence the hiring decision |
What is a "personal account"?
The law protects any account used primarily for non-work, personal communication. Think:
- Threads
- Snapchat
- TikTok
- X (previously Twitter)
Even if someone casually mentions their job there, a personal page that isn't tied to your organization's branding or systems is still protected. Business accounts or employer-managed tools (like a team-run customer support profile) are not.
What you can still do (legally and effectively)
The law creates limits; it doesn't put social media off limits entirely. Public content remains reviewable; it just requires care.
Public profiles are still accessible. You're allowed to:
- Check fully public posts and comments
- View responses, reposts, and interactions shared freely
- Search candidate names in search engines
But keep these filters in place:
- Is the post relevant to job duties?
- Am I applying the same process to all candidates?
- Could this lead to assumptions based on protected traits?
Your screening policy should be equal, consistent, and job-related. For more on what defensible public-content review looks like in practice, see Social Media Screening Done Right.

Company or work-related accounts are still reviewable. You're within your rights to request access to and review any account:
- Created for a work function or employer-managed effort
- Hosted on systems you control (for example, Slack or Notion)
- Representing your brand in customer-facing ways
Just be sure these policies are documented and disclosed upfront.
What changed in 2026: AI is now part of the conversation
When the New York social media law was written, the assumption was that a human screener might look at a candidate's public posts and apply judgment. In 2026, that assumption is wrong in a lot of organizations. The screening is happening through an AI tool, often before any human in your company sees the candidate at all.
That changes the legal exposure significantly. And it is now being tested in court.
The Eightfold AI lawsuit: AI hiring tools as "consumer reports"
In January 2026, a class action was filed against Eightfold AI in federal court in California, brought by Outten & Golden LLP and led by Jenny R. Yang, a partner at the firm and former Chair of the U.S. Equal Employment Opportunity Commission.
The complaint alleges that Eightfold:
- Scraped personal data on more than one billion workers, including social media profiles, location data, internet activity, and tracking data well beyond what candidates submitted
- Funneled the data through a proprietary large language model that scored candidates on a hidden zero-to-five scale
- Discarded lower-scoring candidates before any human at the employer ever saw their application
- Did none of the FCRA basics: no disclosure that a report existed, no copy of the report to the candidate, no way to dispute errors
The legal theory is that an AI-generated candidate score functions as a consumer report under the federal Fair Credit Reporting Act. KRESS has covered this directly in When Does an AI Hiring Score Become a Consumer Report?, which is worth a read alongside this article.
If the court agrees, every employer using an AI hiring tool that scores candidates from third-party data is potentially in scope, not just the vendor. Eightfold has denied the allegations and says its platform "operates on data intentionally shared by candidates or provided by our customers" (HR Dive coverage).
For employers, the practical takeaway doesn't depend on how the case ultimately resolves. The lawsuit makes the question current: if the AI tool you use was found to be acting as a consumer reporting agency, would your screening process hold up under FCRA? That now needs an answer.
AI vendor liability is no longer theoretical
Courts and regulators are increasingly treating AI vendors as agents of the employer. The plain-English version: the vendor's algorithm is the employer's legal problem.
Most AI hiring tools share a common shape:
- The data is scraped or pulled from sources you can't fully audit
- The scoring runs through a model whose logic isn't fully explainable
- The output influences who gets to the human review stage and who doesn't
You are responsible for the outcome, but you can't see the data, can't interrogate the logic, and often can't even produce the report the candidate is entitled to under FCRA. That gap is where the lawsuits are landing.
This is structurally different from a screening provider that runs documented searches, against named sources, with a human reviewer signing off on the result. The first is a black box. The second is a record.
The state map is widening fast
New York's social media rules are part of a much bigger picture in 2026. Three updates since the original version of this article:
Illinois (effective 1 January 2026)
Illinois House Bill 3773 amends the Illinois Human Rights Act to expressly prohibit employers from using artificial intelligence that "has the effect of subjecting employees to discrimination on the basis of protected classes." The prohibition covers AI used in recruitment, hiring, promotion, training, discharge, discipline, tenure, or any term or condition of employment. Employers also have to notify candidates and employees when AI is being used in those decisions. Enforcement runs through the Illinois Department of Human Rights and the state Human Rights Commission, with private lawsuits available after the administrative process. Damages are uncapped.
Colorado (scheduled 30 June 2026, currently subject to litigation)
Colorado's AI Act (SB 24-205) is scheduled to take effect 30 June 2026, after a one-off legislative delay from the original February 2026 date. For employers using high-risk AI in employment decisions, it requires a Risk Management Policy, an initial impact assessment within 90 days of the effective date, and annual reassessment thereafter.
The law also requires consumer notification: before a high-risk AI system is used in a consequential employment decision, the candidate or employee has to be told the system is in use, the purpose of the system, contact information for the deployer, and a plain-language description of how it works. Smaller deployers may qualify for a partial exemption (deployers with fewer than 50 full-time employees can avoid some requirements if they don't use their own data to train or substantially customize the AI, limit use to the developer's disclosed purposes, and pass the developer's impact assessment through to consumers), but above that threshold the full obligations apply.
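If the effective date holds, that 90-day window translates into a hard calendar deadline. A quick sketch of the math (Python; calendar-day counting and anniversary-based reassessments are assumptions here, so confirm the exact counting rules with counsel):

```python
from datetime import date, timedelta

# Assumption: the scheduled effective date survives the pending litigation.
effective_date = date(2026, 6, 30)

# Initial impact assessment due within 90 days of the effective date
# (calendar days assumed -- confirm the counting rule with counsel).
initial_assessment_due = effective_date + timedelta(days=90)
print(initial_assessment_due)  # 2026-09-28

# Annual reassessments thereafter (illustrative: anniversary of the first deadline).
for n in range(1, 4):
    print(initial_assessment_due.replace(year=initial_assessment_due.year + n))
```

In other words, an employer that waits for the litigation to resolve before starting the Risk Management Policy work could have very little runway left.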
The honest caveat: as of late April 2026, xAI sued Colorado in federal court to block the law, and the U.S. Department of Justice intervened in support of xAI on 24 April 2026. The court has indicated it will not enforce penalties until it rules on a preliminary injunction. Whether the law actually goes live on 30 June, in some modified form, or at all, is currently in the hands of a federal judge. Don't assume it's dead, and don't assume it's locked in either.
NYC Local Law 144 enforcement is sharpening
NYC has had its automated-employment-decision-tool law (Local Law 144) since 2023. What's new is that in December 2025 the New York State Comptroller audited enforcement by the Department of Consumer and Worker Protection (DCWP) and concluded it was "ineffective." Specifics from the audit:
- Of 12 test calls auditors made to NYC's 311 system to file an AEDT complaint, only 3 (25%) were correctly transferred to DCWP
- DCWP's own review of 32 companies identified 1 case of non-compliance; the Comptroller's auditors looking at the same companies identified 17 potential violations
- DCWP agreed to most of the 13 audit recommendations and committed to a more proactive enforcement posture in 2026
Civil penalties under LL144 can run up to $1,500 per violation per day. If you've been treating LL144 as a paper requirement because enforcement felt sleepy, that calculus is changing.
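To make that figure concrete, here's a rough back-of-the-envelope exposure estimate (illustrative only; the day count and violation count are assumptions, and actual penalties are set by DCWP):

```python
# Illustrative LL144 exposure estimate -- not legal advice.
# Assumptions: one AEDT used daily without a published bias audit or candidate
# notice, each counted as a separate violation, accruing at the $1,500/day ceiling.
DAILY_CEILING = 1_500

days_in_use = 90          # e.g., one quarter of screening with a non-compliant tool
violations_per_day = 2    # missing bias audit + missing candidate notice

max_exposure = DAILY_CEILING * violations_per_day * days_in_use
print(f"${max_exposure:,}")  # $270,000
```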

The other states still on the social-media list
New York is far from alone. As of 2026, 28 U.S. states have laws limiting employer access to employee or applicant private social media accounts, up from around 20 when this article was originally drafted. Delaware, Maine and Virginia are among the more recent additions, and Oregon has amended its existing law.
Examples of states with employer-access social media restrictions include Arkansas, California, Colorado, Connecticut, Delaware, Illinois, Louisiana, Maine, Maryland, Michigan, Montana, Nevada, New Jersey, New Mexico, New York, Oklahoma, Oregon, Pennsylvania, Utah, Virginia, West Virginia and Wisconsin. Each has slightly different definitions of access and coercion: one state might ban even suggesting connection requests; another only prohibits login requests.
Before you screen, check the local rules. KRESS maintains a state-by-state compliance guide covering background-check obligations across every U.S. state.
The Stop Hiding Hate Act, updated
The original version of this article flagged New York's "Stop Hiding Hate Act" as a 2025 law. As of mid-2026, that law is in effect. Social media platforms operating in New York with more than $100 million in gross annual revenue must publish their content moderation policies and submit biannual reports to the state Attorney General's office. First reports covering Q3 2025 activity were due 1 January 2026. X Corp sued the state in federal court in June 2025 challenging the law on First Amendment grounds; the case is still pending and the law is still operative. Civil penalties run up to $15,000 per violation per day.
For employers, this doesn't change your obligations directly, but it does mean platforms are now required to disclose how they moderate hate speech and abusive content. You may see more flagged content as a result. When that lands in front of a screener, the same filters apply: focus on what's relevant to the job, consider how recent or serious the behavior actually is, and avoid acting on content that's unsettling but unrelated.
How KRESS handles social media screening (and why the AI question matters here)
At KRESS, social media screening is a compliance tool, not a shortcut. The differences from a typical AI hiring platform are intentional:
- Public information only. No scraping, no data acquired without the candidate's knowledge.
- Profile ownership confirmed before any review. False matches don't end up in the report.
- Filter for job-linked behavior only. Not every public post matters; the ones that don't relate to the role don't enter the assessment.
- Real, trained reviewers. Not an automated score, not a model output, not a hidden ranking. More on why this matters in The Importance of Human Verification in Background Checks.
- Reports are documented and disclosable. Candidates can see what was reviewed and challenge errors.
A teacher's content gets filtered differently from an oil and gas contractor's. C-suite background checks call for different scrutiny again. KRESS adjusts based on industry, role level, and company culture, then leaves an audit trail you can defend. For a deeper read on what compliant social media screening looks like in practice, see Protect Your Company with Compliant Social Media Screenings.

Common employer mistakes to avoid
Well-meaning teams still get themselves into trouble. The most common patterns:
- Using tools that scrape private data without consent
- Sending friend requests to gain visibility into a private profile
- Acting on content tied to protected characteristics (race, disability, religion, and so on)
- Letting an AI tool make the screening cut without an FCRA-compliant disclosure to the candidate
- Treating a "multi-state" tool as one-size-fits-all without configuring for local restrictions
Intent doesn't help here. Compliance lives in the process.
Final compliance checklist for 2026
You can protect your people and your brand. The standards haven't changed dramatically; the consequences for ignoring them have.
- Never ask for login credentials or private access
- Stick with fully public, verified information
- Document your screening process internally
- Train hiring managers to follow the same policy across locations
- Focus only on content that relates to job responsibilities, ethics, or safety
- If you use any AI tool in your screening or candidate-ranking workflow, confirm in writing whether the vendor considers themselves a consumer reporting agency under FCRA, and whether their output is treated as a consumer report. If they say no, get a clear explanation of why
- Where state AI laws apply (Illinois now, Colorado pending), confirm you have notice to candidates and a documented impact assessment in place
When you're unsure, don't go it alone. Work with a partner equipped to handle the legal balance.
Ready to screen without slipping up?
Screening can be smart, compliant, and defensible at the same time. KRESS gives you transparency without legal gray areas, and helps you document everything for audit safety. If you're hiring in New York, across multiple states, or with any AI tool in your stack, the compliance picture is more complicated in 2026 than it was a year ago. We'll walk you through it.
Get in touch with the KRESS team to build a compliant social media screening plan.
Frequently asked questions
What does New York's 2024 social media law restrict employers from doing?
You can't ask candidates or employees for passwords or access to personal accounts. Requiring someone to open a personal account in your presence and coerced connection requests are also banned.
Can employers still screen social media in New York in 2026?
Yes, but only public data. No login access, no DMs, and no reviewing anything that isn't visible without logging in.
Is it legal to ask employees to connect on social media?
In New York, no. This is now considered a privacy violation.
What counts as a public social media post?
Anything visible without login or approval. If a stranger can see it online, it's likely public.
Is my AI hiring tool a "consumer reporting agency" under FCRA?
The Eightfold class action filed in January 2026 is the first major test of this question. Until courts resolve it, the safest assumption is that any tool that scrapes third-party data and produces a candidate score is at risk of being treated as a CRA. That means your obligations include disclosure to the candidate, the candidate's right to see the report, and an adverse action process if the score influences your decision.
For a longer breakdown, see When Does an AI Hiring Score Become a Consumer Report? and How Employers Should Handle Consumer Reports.
What about state AI laws?
Illinois prohibits AI hiring tools with discriminatory effects as of 1 January 2026, and requires notice to candidates when AI is used. Colorado's broader AI Act is scheduled for 30 June 2026 but is currently subject to a federal lawsuit and may be delayed or modified. NYC Local Law 144 (bias audit requirements for AEDTs) has been in force since 2023, with enforcement now sharpening following a December 2025 State Comptroller audit.
How can companies stay compliant across states?
Build a centralized screening method based on public information, configure for state-by-state additions, document everything, and train hiring managers to follow the same policy regardless of where the candidate sits.
What makes KRESS's social media screening legally reliable?
Public information only. Profile ownership confirmed. Filtered for job relevance. Human-reviewed, not algorithm-scored. No scraping. No hidden rankings. The report is something a candidate could see and a regulator could audit.