The AI Compliance Checklist Every People Ops Team Needs Right Now

Colorado, Illinois, and California are rolling out AI regulations that directly hit your HR tech stack. This is the no-nonsense checklist to get ready before enforcement begins.

Colorado's AI Act enforcement begins February 1, 2026 — if you haven't started your impact assessment, you're already behind.

The AI Regulation Wave Hitting People Ops

If you're a CPO, VP of People, or HR leader at a tech company with employees in Colorado, Illinois, or California, your compliance obligations just changed. Three state-level AI laws are converging on a single reality: every AI tool in your People Ops stack needs to be audited, documented, and defensible.

This isn't theoretical. Colorado's SB 205 is already law, with enforcement beginning February 1, 2026. Illinois has been enforcing its AI Video Interview Act since 2020. California's AI transparency requirements are advancing through the legislature with bipartisan support. And the EEOC has made it clear that Title VII applies to AI-driven employment decisions — nationwide.

The companies that get ahead of this don't just avoid fines. They build trust with candidates and employees, reduce litigation exposure, and gain a competitive advantage in recruiting. The ones that don't? They're one AI bias audit away from a headline they can't take back.

  • $20K: the maximum penalty per violation under Colorado's AI Act
  • 73%: the share of HR teams with no AI inventory
  • 272 days: the average sales cycle for enterprise HR tools

What Each Law Requires — Plain English

Skip the legalese. Here's what each regulation actually means for your team and what you need to do about it.

Colorado

SB 205 — Colorado AI Act

Any AI system used in "consequential decisions" — which includes hiring, promotions, compensation, and termination — requires a formal impact assessment and ongoing risk management.

  • Impact assessment documenting what AI does, what data it uses, and what risks it creates
  • Risk management policy with governance, testing, and monitoring procedures
  • Consumer notification when AI influences employment decisions
  • Bias testing to evaluate disparate impact across protected classes
  • Annual review of all high-risk AI systems
Enforcement: February 1, 2026
Illinois

Artificial Intelligence Video Interview Act (AIVIA)

If you use AI to analyze video interviews — including facial expression analysis, voice pattern analysis, or automated scoring — Illinois requires explicit consent and transparency.

  • Written notice to candidates that AI will analyze their video interview
  • Informed consent before the AI analysis occurs
  • Explanation of how the AI works and what characteristics it evaluates
  • Deletion on request within 30 days of a candidate asking
  • Limits on sharing video recordings with third parties
Enforcement: Active since January 1, 2020
California

AI Transparency & Accountability

California's evolving AI framework focuses on transparency, automated decision-making disclosure, and expanding CCPA/CPRA protections to cover AI-generated profiles and inferences about employees.

  • Disclosure when AI is used in employment decisions
  • Right to know what AI-generated inferences exist about an employee
  • Opt-out rights for automated profiling in certain contexts
  • Data minimization requirements for AI training data
  • Impact assessments for high-risk automated decision-making
Status: Multiple bills advancing through the 2026 session
Federal

EEOC AI Guidance & Title VII

The EEOC has issued clear guidance: employers are liable for discriminatory outcomes from AI systems, even if the AI was built by a vendor. If your AI tool has disparate impact, you own the risk.

  • Disparate impact liability applies to vendor AI tools, not just in-house systems
  • Reasonable accommodation obligations extend to AI-mediated processes
  • Four-fifths rule applies to AI screening and selection rates (worked example below)
  • ADA compliance for AI-administered assessments and accommodations
Enforcement: Active — applies nationwide
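
The four-fifths rule is arithmetic you can run yourself before commissioning a formal audit: compute each group's selection rate, then divide by the highest group's rate; anything below 0.8 is a red flag. A minimal sketch in Python, with hypothetical applicant counts:

```python
# Four-fifths (80%) rule check. All numbers are hypothetical.
applicants = {"group_a": 200, "group_b": 150}  # candidates the AI screened
selected = {"group_a": 60, "group_b": 30}      # candidates the AI passed through

rates = {g: selected[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "OK" if ratio >= 0.8 else "POTENTIAL DISPARATE IMPACT"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

Here group_b's 20% selection rate is only two-thirds of group_a's 30%, so the tool gets flagged. Treat this as a screening heuristic, not a legal determination: a failing ratio means you need deeper statistical analysis, not that liability is automatic.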

Your 90-Day Compliance Roadmap

If you're starting from scratch, here's the sequence that gets you compliant without setting your People Ops roadmap on fire.

Week 1–2

AI Tool Inventory

Catalog every tool in your People Ops stack that uses AI or ML. Include ATS platforms, video interview tools, engagement surveys, team development software, performance management systems, and any tool that generates predictions, scores, or recommendations about employees.
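
If a spreadsheet feels too loose, the same inventory can live in a small structured record. A sketch of the fields worth capturing per tool (field names are our own framing, not drawn from any statute):

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One row in the People Ops AI inventory. Field names are illustrative."""
    name: str                  # e.g. "Acme ATS" (hypothetical tool)
    vendor: str
    category: str              # ATS, video interview, engagement, performance, ...
    data_collected: list[str] = field(default_factory=list)  # resumes, video, surveys
    outputs: list[str] = field(default_factory=list)         # scores, rankings, recommendations
    influences_employment_decisions: bool = False            # hiring/promo/pay/termination?
    owner: str = ""            # accountable person in People Ops
```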

Week 3–4

Risk Classification

For each AI tool, determine: Does it influence "consequential decisions"? What data does it collect? What outputs does it generate? Map each tool to a risk tier: high-risk (influences employment outcomes), medium-risk (generates employee insights), or low-risk (operational automation only).
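
The tiering logic above is mechanical enough to encode. A hedged sketch reusing the AIToolRecord fields from the inventory step (the thresholds are our reading of the three tiers, not statutory language):

```python
def risk_tier(tool: AIToolRecord) -> str:
    """Map an inventoried tool to the three tiers described above (illustrative)."""
    if tool.influences_employment_decisions:
        return "high"    # output touches hiring, promotion, compensation, or termination
    if tool.outputs:
        return "medium"  # generates insights about employees, but no employment outcomes
    return "low"         # operational automation only
```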

Week 5–8

Impact Assessments

Conduct formal impact assessments for all high-risk AI systems. Document the AI's purpose, data inputs, decision logic, potential for bias, and mitigation strategies. This is the core deliverable for Colorado SB 205 compliance.
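
Colorado doesn't mandate a format, so a shared template keeps assessments consistent across tools. One possible skeleton (section names paraphrase the themes above; confirm the required contents with counsel):

```python
# Illustrative impact assessment skeleton, one per high-risk AI system.
IMPACT_ASSESSMENT_TEMPLATE = {
    "system": "",         # tool name, vendor, version
    "purpose": "",        # what decision the AI supports and for whom
    "data_inputs": [],    # categories of data consumed, including training data if known
    "decision_logic": "", # plain-language description of how outputs are produced
    "bias_risks": [],     # known or plausible disparate-impact risks by protected class
    "mitigations": [],    # bias testing, human review, monitoring, appeal paths
    "notification": "",   # how affected candidates and employees are informed
    "next_review": "",    # date of the next annual review
}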

Week 9–10

Vendor Compliance Requests

Send formal AI transparency questionnaires to every vendor operating a high or medium-risk AI system. Request documentation of their bias testing, data handling, model transparency, and compliance certifications. Vendors who can't answer these questions are a liability.

Week 11–12

Policy & Governance

Publish your AI governance policy. Implement consumer notification workflows for AI-influenced decisions. Train your People Ops team on the new requirements. Set up annual review cadence for all high-risk systems.

The Behavioral Data Question

Here's the category that trips up most People Ops teams: behavioral data tools. Team-building platforms, engagement tools, coaching AI, collaboration analytics — they all collect behavioral signals. The compliance question isn't whether they use AI. It's whether the AI output feeds into employment decisions.

The regulatory framework creates a clear spectrum. On one end: tools that use AI to score individuals and feed those scores into performance reviews, promotion decisions, or termination risk models. These are firmly in the high-risk category and require full impact assessments, bias testing, and consumer notification.

On the other end: tools where AI facilitates team experiences but doesn't generate individual employment-relevant outputs. Think AI-generated team exercises, AI coaching that stays private, or team-level insights used for development rather than evaluation.

The Compliance Spectrum for Behavioral Data Tools

Higher Compliance Burden

  • AI generates individual performance scores
  • Behavioral data feeds into reviews or compensation
  • Manager dashboards show individual employee AI-derived insights
  • Engagement scores used in termination risk models
  • AI ranks employees against each other

Lower Compliance Burden

  • AI facilitates team experiences without scoring individuals
  • Insights stay at the team level for development, not evaluation
  • AI coaching conversations are private — no upstream reporting
  • Participation is voluntary and opt-in
  • No data feeds into employment decisions

Example: QuestWorks, a team dynamics platform, is architecturally designed to sit on the compliant end of this spectrum. Its AI coaching (HeroGPT) never shares conversations upstream. Leaders see aggregate team trends and strengths-based XP highlights — not individual performance scores. Participation is voluntary. And because it's a development tool rather than an evaluation tool, it falls outside the scope of "consequential decision" AI under current frameworks. It's a useful reference point for how behavioral data tools can be structured for compliance by design.

Questions to Ask Your Behavioral Data Vendors

When auditing tools in this category, these are the questions that separate compliant architectures from liability risks:

  • Does your AI generate outputs about individuals that could influence employment decisions?
  • Are behavioral insights aggregated at the team level or reported at the individual level?
  • Is AI coaching content shared with managers, HR, or any third party?
  • Is employee participation in the platform voluntary and opt-in?
  • Can employees access, export, or delete their behavioral data?
  • Has the AI been tested for disparate impact across protected classes?
  • Is the tool positioned as "development" or "evaluation" in its documentation?
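
To make vendor answers comparable across your stack, it can help to turn these questions into a pass/fail scorecard. A sketch (the keys and the "lower-risk answer" mapping are our own framing, not a regulatory standard):

```python
# Answer profile of a lower-risk architecture, keyed to the questions above.
LOWER_RISK_PROFILE = {
    "individual_outputs_influence_employment": False,
    "insights_reported_at_individual_level": False,
    "coaching_content_shared_upstream": False,
    "participation_voluntary_opt_in": True,
    "employees_can_access_export_delete": True,
    "tested_for_disparate_impact": True,
    "positioned_as_development_tool": True,
}

def vendor_red_flags(answers: dict[str, bool]) -> list[str]:
    """Return the questions where a vendor's answer diverges from the lower-risk profile."""
    return [q for q, wanted in LOWER_RISK_PROFILE.items() if answers.get(q) != wanted]
```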

Who Owns AI Compliance in People Ops?

This is where most organizations stall. Legal thinks it's an HR problem. HR thinks it's a legal problem. IT thinks it's everyone else's problem. Engineering built the AI and moved on.

The answer: People Ops owns the inventory and risk classification. Legal owns the impact assessments and policy language. IT owns the technical documentation and vendor questionnaires. The CPO or VP of People is the executive sponsor.

If you don't have a cross-functional AI governance working group by the end of Q1, you're going to be doing this reactively after an audit, a lawsuit, or a news cycle. All three are more expensive than doing it proactively.

The Compliance Stack for a 200-Person Tech Company

Here's what a well-structured AI compliance posture looks like at a mid-market tech company. It's less infrastructure than you think.

  • AI inventory spreadsheet — living document, updated quarterly, owned by People Ops
  • Impact assessment template — standardized format, completed per high-risk tool, owned by Legal
  • Vendor AI transparency questionnaire — sent during procurement and annually, owned by IT/Procurement
  • Employee notification templates — for AI-influenced decisions, owned by Legal
  • AI governance policy — published internally, reviewed annually, approved by CPO
  • Bias audit schedule — annual for high-risk systems, owned by People Analytics
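
The bias audit cadence is the item that most often slips. One way to keep it honest is a scheduled check against the inventory; a sketch, assuming each record carries a risk tier and a last-audit date (both field names hypothetical):

```python
from datetime import date, timedelta

def overdue_bias_audits(inventory: list[dict], today: date | None = None) -> list[str]:
    """Names of high-risk tools whose last bias audit is more than a year old."""
    today = today or date.today()
    return [
        tool["name"]
        for tool in inventory
        if tool["risk_tier"] == "high"
        and today - tool["last_bias_audit"] > timedelta(days=365)
    ]
```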

Frequently Asked Questions

What AI laws affect People Ops and HR teams in 2026?

Three major state-level AI regulations directly impact People Ops: Colorado's SB 205 (AI Act), which requires impact assessments for any AI system used in "consequential decisions" including hiring and employment; Illinois' AI Video Interview Act, which mandates consent and transparency for AI-analyzed video interviews; and California's proposed AI transparency requirements, which would require disclosure when AI is used in employment decisions. Federal guidance from the EEOC on AI and Title VII compliance also applies nationwide.

Does our team development software need to be AI-compliant?

It depends on how the software uses AI and what decisions it informs. If your team development tool uses AI to generate insights that could influence performance reviews, promotions, or employment decisions, it likely falls under state AI regulations. Tools that use AI purely for facilitation — like generating team exercises or coaching prompts — without producing data tied to individual employment outcomes typically face lower compliance requirements. The key question is: does the AI output feed into a "consequential decision" about an employee?

What is a "consequential decision" under Colorado's AI Act?

Under Colorado SB 205, a "consequential decision" includes any decision that has a material legal or similarly significant effect on a consumer's access to employment, education, housing, insurance, credit, or essential services. For People Ops, this covers hiring decisions, promotion decisions, performance evaluations that affect compensation, and termination decisions. If an AI system's output materially influences any of these decisions, it falls under the Act's requirements for impact assessments, risk management, and consumer notification.

How do I audit our existing HR tech stack for AI compliance?

Start by inventorying every tool in your People Ops stack that uses AI or machine learning. For each tool, document: what data it collects, what AI/ML models it uses, what outputs or recommendations it generates, and whether those outputs could influence employment decisions. Then map each tool against the specific requirements of applicable state laws. Request AI transparency documentation from each vendor. Finally, conduct or commission a bias audit for any AI system that influences consequential employment decisions.

What's the penalty for non-compliance with state AI laws?

Penalties vary by state. Colorado's AI Act empowers the Attorney General to enforce violations with penalties up to $20,000 per violation. Illinois' AI Video Interview Act carries penalties of $1,000 for the first offense and $5,000 for subsequent violations. Beyond statutory penalties, non-compliance creates significant litigation risk under existing anti-discrimination frameworks (Title VII, ADA, ADEA) if AI systems produce biased outcomes. The reputational cost of an AI bias scandal often dwarfs the regulatory fines.

Do behavioral data tools like team-building platforms need AI compliance?

It depends on the tool's architecture. Platforms that collect behavioral data and use AI to generate individual performance signals tied to employment decisions face the highest compliance burden. Tools that aggregate behavioral data at the team level for development purposes — without producing individual performance scores — generally fall in a lower-risk category. The critical distinction is whether the AI generates outputs about individuals that could influence consequential employment decisions. Team-level insights used purely for development, not evaluation, typically fall outside the scope of current regulations.

Compliance shouldn't be an afterthought.
Build trust by design.

QuestWorks is built for compliance from the ground up — aggregate team insights, private AI coaching, voluntary participation, and zero data feeding into employment decisions. See how it works.
