Colorado SB 24-205 · Signed May 17, 2024

The Colorado AI Act — Complete Plain-Language Guide

Colorado SB 24-205, the Consumer Protections for Artificial Intelligence Act, is the first comprehensive AI governance law in the United States. If your company uses AI to make decisions affecting Colorado residents in employment, lending, housing, healthcare, or education — this law applies to you. Enforcement begins June 30, 2026.

Enforcement date: June 30, 2026
Penalty: up to $20,000 per violation (CCPA § 6-1-112)
Scope: all companies doing business in Colorado, not just those headquartered there
Enforced exclusively by the Colorado Attorney General. Source: leg.colorado.gov/bills/sb24-205
The Law

What is Colorado SB 24-205?

Colorado Senate Bill 24-205 — formally the Consumer Protections for Artificial Intelligence Act — was signed by Governor Jared Polis on May 17, 2024. It is the first comprehensive state-level AI governance law in the United States.

Source: Colorado General Assembly — SB24-205 Bill Page

The law was originally set to take effect February 1, 2026. A follow-on bill, SB25B-004, extended the enforcement date to June 30, 2026.

Source: Colorado General Assembly — SB25B-004

The law applies to any company doing business in Colorado that develops or deploys high-risk AI systems making consequential decisions affecting Colorado residents — regardless of where the company is headquartered.

§ 6-1-1703(1) — Deployer Duty · Signed Act PDF (leg.colorado.gov)

"On and after February 1, 2026, a deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination."

The law places obligations on two distinct actors: developers (those who build or substantially modify AI systems) and deployers (those who use AI systems to make consequential decisions). A company can be both simultaneously. Using a third-party AI tool does not transfer your obligations as a deployer to the vendor — you remain responsible.

Legal Definitions

Key Terms from the Law

These definitions come directly from § 6-1-1701 of the signed act. They determine whether and how the law applies to your organization.

Source: SB 24-205 Signed Act — § 6-1-1701 (Pages 1–5)
Developer
A person doing business in Colorado that develops or intentionally and substantially modifies an artificial intelligence system. Includes companies that fine-tune, retrain, or significantly adapt a foundation model for a specific deployment. (§ 6-1-1701(7))
Deployer
A person doing business in Colorado that deploys a high-risk AI system. Compliance obligations fall on the deployer regardless of who built the underlying system. You cannot outsource this obligation to a vendor. (§ 6-1-1701(6))
High-Risk AI System
Any AI system that, when deployed, makes or is a substantial factor in making a consequential decision. The system must have been specifically developed and marketed, or intentionally and substantially modified, for that purpose. (§ 6-1-1701(9)(a))
Consequential Decision
A decision with a material legal or similarly significant effect on the provision or denial of: education enrollment, employment opportunities, financial or lending services, essential government services, healthcare services, housing, insurance, or legal services. (§ 6-1-1701(3))
Algorithmic Discrimination
Any condition in which the use of a high-risk AI system results in unlawful differential treatment or impact that disfavors an individual or group on the basis of age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other protected classification. (§ 6-1-1701(1)(a))
Substantial Factor
A factor that assists in making a consequential decision, is capable of altering the outcome, and is generated by an AI system. Includes any AI-generated content, decision, prediction, or recommendation used as a basis for a consequential decision. (§ 6-1-1701(11))
Explicitly excluded from "high-risk AI system": Anti-fraud technology (without facial recognition), antivirus, calculators, cybersecurity tools, databases, data storage, firewalls, internet domain registration, website loading, networking, spam filtering, spell-checking, spreadsheets, web caching, web hosting — provided these technologies do not make or substantially facilitate a consequential decision. (§ 6-1-1701(9)(b))
Real-World Scenarios

How Companies Are Already Non-Compliant

The following scenarios describe how companies operating AI tools today would be in direct violation of SB 24-205 as of June 30, 2026. Each scenario references the specific section of the signed act it violates. All section references link to the official signed law.

Violation
AI resume screening tool deployed with no impact assessment
Violates § 6-1-1703(3)(a)(I) — Impact Assessment Requirement (Page 11)

A company uses an AI-powered tool to screen resumes and rank job candidates. No impact assessment has ever been completed. There is no documentation of training data, known bias risks, or discrimination mitigation strategies.

Under SB 24-205: Deployers must complete an impact assessment before deployment and annually thereafter. The assessment must document the system's purpose, nature of consequential decisions it influences, known discrimination risks, training data categories, and mitigation strategies. Complete absence of documentation is a violation on day one of enforcement.
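In software terms, the documentation burden amounts to a structured record. The sketch below shows the kind of information an impact assessment has to capture; the field names are our own illustration, not statutory language or a prescribed format:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative record of the documentation § 6-1-1703(3)(b) calls for.
# The statute prescribes content, not format; these names are ours.
@dataclass
class ImpactAssessment:
    system_name: str
    purpose: str                          # purpose and intended use cases
    consequential_decisions: list[str]    # decisions the system influences
    known_discrimination_risks: list[str]
    training_data_categories: list[str]
    mitigation_strategies: list[str]
    completed_on: date

    def is_complete(self) -> bool:
        """An empty section means that item is simply undocumented."""
        return all([
            self.purpose,
            self.consequential_decisions,
            self.known_discrimination_risks,
            self.training_data_categories,
            self.mitigation_strategies,
        ])
```

A deployer with no such record for a deployed screening tool fails this check trivially, which is the point of the scenario above.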

Violation
Automated loan denials with no AI disclosure and no path to appeal
Violates § 6-1-1703(4)(a)(I) — Consumer Disclosure; § 6-1-1703(4)(b)(III) — Right of Appeal (Pages 13–14)

A lending platform uses an AI model to make instant approval and denial decisions. Applicants who are denied receive a generic rejection notice with no mention that an AI made the decision and no mechanism to request human review.

Under SB 24-205: Two separate violations. Consumers must be notified before or at the time a high-risk AI system makes a consequential decision about them. Additionally, consumers must be given the opportunity to appeal adverse decisions and request human review where technically feasible.

Violation
AI tenant screening tool with no written risk management policy
Violates § 6-1-1703(2)(a) — Risk Management Policy and Program (Pages 9–10)

A property management company deploys an AI tool that scores prospective tenant applications and outputs approval or rejection recommendations. The company has no written AI governance policy, no risk management program, and has never assessed whether the system produces discriminatory outcomes.

Under SB 24-205: Deployers must implement a documented risk management policy and program specifying the principles, processes, and personnel used to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. It must be an iterative process that is regularly reviewed and updated. A system screening tenants with zero governance documentation is a direct violation.

Violation
Third-party AI used for HR decisions — company assumes vendor handles compliance
Violates § 6-1-1703 — All Deployer Obligations (Pages 9–17)

A company subscribes to a workforce management SaaS product that uses AI to recommend performance ratings and surface termination candidates. The company assumes the software vendor is responsible for compliance. No internal assessments, disclosures, or governance documentation exist.

Under SB 24-205: The law places compliance obligations on the deployer — the company using the AI — regardless of who built it. Buying a third-party AI tool does not transfer your legal obligations to the vendor. You are responsible for the impact assessments, consumer disclosures, governance policies, and appeal mechanisms.

Violation
AI developer with no public governance statement on their website
Violates § 6-1-1702(4)(a) — Developer Public Statement Requirement (Page 8)

A company develops and licenses an AI clinical decision support tool used by hospitals. The company's website has no publicly available statement describing the types of high-risk AI systems they offer or how they manage known risks of algorithmic discrimination.

Under SB 24-205: Developers must publish a clear and readily available public statement on their website — or in a public use case inventory — summarizing the types of high-risk systems they make available and how they manage discrimination risks. The statement must be updated within 90 days of any intentional and substantial modification.

Violation
Deployed AI system with no annual review since initial launch
Violates § 6-1-1703(3)(g) — Annual Review Requirement (Page 13)

An insurance company deployed an AI underwriting model three years ago. An initial risk assessment was completed at launch. The model has since been updated multiple times. No annual review has ever been conducted.

Under SB 24-205: At least annually, deployers must review each high-risk AI system to ensure it is not causing algorithmic discrimination. Additionally, a new impact assessment is required within 90 days of any intentional and substantial modification to the system. A stale initial assessment on an actively updated model does not satisfy the law.

Violation
Consumer-facing AI system that doesn't disclose it is AI
Violates § 6-1-1704(1) — AI Disclosure to Consumer (Page 17)

A company deploys a customer service chatbot that handles insurance claim inquiries and eligibility determinations. The bot responds in natural language and is not identified anywhere as an AI system. Consumers interact with it believing they are communicating with a human.

Under SB 24-205: Any person doing business in Colorado that deploys an AI system intended to interact with consumers must ensure each consumer is told they are interacting with an AI system. The only exception is where it would be obvious to a reasonable person that they are interacting with AI. (§ 6-1-1704(2))
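One straightforward way to satisfy this for a chatbot is to prepend a disclosure to the first message of every session. A minimal sketch; the disclosure wording and the function are illustrative, not statutory:

```python
# Illustrative wording; the act requires disclosure, not this sentence.
AI_DISCLOSURE = "You are chatting with an automated AI assistant, not a human. "

def first_reply(bot_text: str, already_disclosed: bool) -> tuple[str, bool]:
    """Prepend an AI disclosure to the session's first bot message.
    A sketch of one mechanism for § 6-1-1704(1); the statute mandates
    the disclosure itself, not any particular delivery mechanism."""
    if already_disclosed:
        return bot_text, True
    return AI_DISCLOSURE + bot_text, True
```

The design choice that matters is making the disclosure a precondition of the conversation rather than burying it in terms of service the consumer never sees.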

What Compliance Requires

The Full Deployer Checklist

To satisfy SB 24-205 as a deployer of high-risk AI systems, every item below must be in place before June 30, 2026. Each requirement links directly to the relevant section of the signed act.

01

Written Risk Management Policy and Program

A documented, implemented policy covering identification, assessment, and mitigation of algorithmic discrimination risks. Must be an iterative process, regularly reviewed and updated over the life cycle of each high-risk AI system.

§ 6-1-1703(2)(a) — Signed Act Pages 9–10
02

Algorithmic Impact Assessment Before Deployment

Must document the system's purpose, intended use cases, deployment context, known discrimination risks, categories of data processed, performance metrics, transparency measures taken, and post-deployment monitoring plans.

§ 6-1-1703(3)(a)(I) and (3)(b) — Signed Act Pages 11–12
03

Annual Review of Each Deployed System

At least annually, and within 90 days of any intentional and substantial modification. Must confirm the system is not causing algorithmic discrimination. Records must be retained for at least three years after final deployment.

§ 6-1-1703(3)(g) and (3)(f) — Signed Act Page 13
04

Consumer Disclosure Before or At Point of Decision

Notify the consumer that a high-risk AI system is being used, state its purpose and nature, provide deployer contact information, and inform the consumer of their right to opt out of profiling where applicable.

§ 6-1-1703(4)(a) — Signed Act Pages 13–14
05

Adverse Decision Explanation

If the consequential decision is adverse to the consumer, provide a statement disclosing the principal reasons, the degree and manner in which the AI contributed, and the type and source of data processed.

§ 6-1-1703(4)(b)(I) — Signed Act Page 14
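Operationally, the adverse decision notice must be assembled from three pieces of information the deployer already has: the principal reasons, the AI's role, and the data used. A sketch of one possible statement builder; the structure and wording are ours, not a statutory template:

```python
def adverse_decision_statement(
    principal_reasons: list[str],
    ai_contribution: str,
    data_sources: list[str],
) -> str:
    """Assemble the disclosure § 6-1-1703(4)(b)(I) describes: principal
    reasons, degree and manner of AI involvement, and the type and source
    of data processed. Layout and phrasing here are illustrative only."""
    lines = ["Principal reasons for this decision:"]
    lines += [f"  - {reason}" for reason in principal_reasons]
    lines.append(f"How AI contributed: {ai_contribution}")
    lines.append("Data considered: " + ", ".join(data_sources))
    return "\n".join(lines)
```

A generic rejection letter, like the one in the lending scenario earlier, contains none of these three elements.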
06

Right to Correct Personal Data

Consumers must be given the opportunity to correct any incorrect personal data the AI system processed in making an adverse consequential decision about them.

§ 6-1-1703(4)(b)(II) — Signed Act Page 14
07

Human Review Appeal Mechanism

Consumers receiving adverse consequential AI decisions must have access to an appeal process. Where technically feasible, this must include human review — unless providing an appeal would not be in the consumer's best interest.

§ 6-1-1703(4)(b)(III) — Signed Act Page 14
08

Public Governance Statement on Website

A publicly available statement on your website summarizing the types of high-risk AI systems you deploy, how you manage discrimination risks for each, and the nature, source, and extent of data collected and used.

§ 6-1-1703(5)(a) — Signed Act Page 15
09

Disclose Discovered Discrimination to the AG

If you discover that a high-risk AI system has caused algorithmic discrimination, you must notify the Colorado Attorney General within 90 days of discovery, in the form and manner prescribed by the AG.

§ 6-1-1703(7) — Signed Act Page 17
Small business exemption: Subsections (2), (3), and (5) do not apply to a deployer if: the deployer employs fewer than 50 full-time equivalent employees AND does not use its own data to train the high-risk AI system AND the system is used only for the developer's disclosed intended uses AND continues learning from non-deployer data. Every one of these conditions must be met simultaneously.
§ 6-1-1703(6) — Signed Act Pages 15–16
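Because every condition must hold at once, the exemption test is a single conjunction. An illustrative check; the parameter names are our paraphrase of the statutory conditions, not statutory language:

```python
def small_deployer_exempt(
    fte_count: int,
    uses_own_data_to_train: bool,
    used_only_for_disclosed_intended_uses: bool,
    learns_only_from_non_deployer_data: bool,
) -> bool:
    """All conditions in § 6-1-1703(6) must hold for the exemption from
    subsections (2), (3), and (5). Failing any one forfeits the exemption."""
    return (
        fte_count < 50
        and not uses_own_data_to_train
        and used_only_for_disclosed_intended_uses
        and learns_only_from_non_deployer_data
    )
```

Note the asymmetry: a 20-person company that fine-tunes a vendor model on its own hiring data loses the exemption on the training-data condition alone.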
Affirmative defense: A developer, deployer, or other person has an affirmative defense if they discover and cure a violation through feedback mechanisms, adversarial testing, or internal review — and are otherwise in compliance with NIST AI RMF, ISO/IEC 42001, or another framework designated by the AG.
§ 6-1-1706(3) — Signed Act Pages 23–24
Enforcement

How Enforcement Works

The Colorado Attorney General has exclusive authority to enforce SB 24-205. There is no private right of action — individual consumers cannot sue companies directly under this law.

§ 6-1-1706(1) and (6) — Signed Act Page 23

"The attorney general has exclusive authority to enforce this part 17... This part 17 does not provide the basis for, and is not subject to, a private right of action for violations of this part 17 or any other law."

A violation of SB 24-205 constitutes an unfair trade practice under § 6-1-105(1)(hhhh) of the Colorado Consumer Protection Act. Civil penalties under CRS § 6-1-112(1)(a) reach up to $20,000 per violation, with each affected consumer or transaction constituting a separate violation.

Source: CRS § 6-1-112 — Civil Penalties (Colorado.Public.Law) · CRS 2024 Title 6 PDF (leg.colorado.gov)
No grace period after June 30: The AG is not required to provide advance warning before initiating enforcement. The standard is whether your organization was compliant on the enforcement date — not whether you began working toward compliance afterward.
Related Work — CFVA.ai

The AI Accountability Protocol

CFVA.ai has developed the AI Accountability and Compliance Protocol (AIACP v0.3) — a 16-section governance framework with 60+ requirements for responsible AI deployment. The protocol addresses anti-discrimination requirements, human oversight mechanisms, transparency obligations, and the use of AI in high-stakes contexts.

The AIACP draws on frameworks including Colorado SB 24-205, the EU AI Act, and the NIST AI Risk Management Framework. It is published openly and available for review and comment.

Read the full protocol: cfva.ai/aiacp →

Ongoing AI accountability and governance analysis: cfva.ai/signal (The Signal) →

Questions or Thoughts?

CFVA.ai is a civic intelligence platform. This page is published to make this law accessible and findable. If you have questions, want to discuss AI governance, or have feedback on the AI Accountability Protocol — officials, researchers, journalists, and anyone thinking seriously about this are welcome to reach out.

support@cfaisolutions.com

About CFVA.ai

CFVA.ai is a civic intelligence platform indexing entities across 32 live federal data sources — FTC, CFPB, FDA, OSHA, HHS OIG, SEC, and more. Professional research access available at cfva.ai.

The AI Accountability and Compliance Protocol (AIACP) and The Signal are published as contributions to the broader conversation on AI governance and accountability.