Colorado SB 24-205, the Consumer Protections for Artificial Intelligence Act, is the first comprehensive AI governance law in the United States. If your company uses AI to make decisions affecting Colorado residents in employment, lending, housing, healthcare, or education — this law applies to you. Enforcement begins June 30, 2026.
Colorado Senate Bill 24-205 — formally the Consumer Protections for Artificial Intelligence Act — was signed by Governor Jared Polis on May 17, 2024. It is the first comprehensive state-level AI governance law in the United States.
Source: Colorado General Assembly — SB24-205 Bill Page

The law was originally set to take effect February 1, 2026. A follow-on bill, SB25B-004, extended the enforcement date to June 30, 2026.
Source: Colorado General Assembly — SB25B-004

The law applies to any company doing business in Colorado that develops or deploys high-risk AI systems making consequential decisions affecting Colorado residents — regardless of where the company is headquartered.
"On and after February 1, 2026, a deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination."
The law places obligations on two distinct actors: developers (those who build or substantially modify AI systems) and deployers (those who use AI systems to make consequential decisions). A company can be both simultaneously. Using a third-party AI tool does not transfer your obligations as a deployer to the vendor — you remain responsible.
These definitions come directly from § 6-1-1701 of the signed act. They determine whether and how the law applies to your organization.
Source: SB 24-205 Signed Act — § 6-1-1701 (Pages 1–5)

The following scenarios describe how companies operating AI tools today would be in direct violation of SB 24-205 as of June 30, 2026. Each scenario references the specific section of the signed act it violates. All section references link to the official signed law.
A company uses an AI-powered tool to screen resumes and rank job candidates. No impact assessment has ever been completed. There is no documentation of training data, known bias risks, or discrimination mitigation strategies.
Under SB 24-205: Deployers must complete an impact assessment before deployment and annually thereafter. The assessment must document the system's purpose, nature of consequential decisions it influences, known discrimination risks, training data categories, and mitigation strategies. Complete absence of documentation is a violation on day one of enforcement.
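Teams operationalizing this requirement often begin by turning the required assessment fields into a structured record that can be reviewed and versioned. A minimal sketch in Python — the field names are illustrative shorthand for the statutory categories, not language from the act:

```python
from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    """Illustrative record of the items a deployer must document
    for a high-risk AI system under SB 24-205 (field names are
    this sketch's own, not statutory terms)."""
    system_name: str
    purpose: str
    consequential_decisions: list[str]    # decisions the system influences
    known_discrimination_risks: list[str]
    training_data_categories: list[str]
    mitigation_strategies: list[str]
    completed_on: str                     # ISO date of this assessment

    def is_complete(self) -> bool:
        # Every required item must be documented; a system with no
        # documentation at all is a violation on day one of enforcement.
        return all([
            self.purpose,
            self.consequential_decisions,
            self.known_discrimination_risks,
            self.training_data_categories,
            self.mitigation_strategies,
        ])
```

A record like this makes the gap obvious: if any required field is empty, the assessment is not done.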
A lending platform uses an AI model to make instant approval and denial decisions. Applicants who are denied receive a generic rejection notice with no mention that an AI made the decision and no mechanism to request human review.
Under SB 24-205: Two separate violations. Consumers must be notified before or at the time a high-risk AI system makes a consequential decision about them. Additionally, consumers must be given the opportunity to appeal adverse decisions and request human review where technically feasible.
A property management company deploys an AI tool that scores prospective tenant applications and outputs approval or rejection recommendations. The company has no written AI governance policy, no risk management program, and has never assessed whether the system produces discriminatory outcomes.
Under SB 24-205: Deployers must implement a documented risk management policy and program specifying the principles, processes, and personnel used to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. It must be an iterative process that is regularly reviewed and updated. A system screening tenants with zero governance documentation is a direct violation.
A company subscribes to a workforce management SaaS product that uses AI to recommend performance ratings and surface termination candidates. The company assumes the software vendor is responsible for compliance. No internal assessments, disclosures, or governance documentation exist.
Under SB 24-205: The law places compliance obligations on the deployer — the company using the AI — regardless of who built it. Buying a third-party AI tool does not transfer your legal obligations to the vendor. You are responsible for the impact assessments, consumer disclosures, governance policies, and appeal mechanisms.
A company develops and licenses an AI clinical decision support tool used by hospitals. The company's website has no publicly available statement describing the types of high-risk AI systems they offer or how they manage known risks of algorithmic discrimination.
Under SB 24-205: Developers must publish a clear and readily available public statement on their website — or in a public use case inventory — summarizing the types of high-risk systems they make available and how they manage discrimination risks. The statement must be updated within 90 days of any intentional and substantial modification.
An insurance company deployed an AI underwriting model three years ago. An initial risk assessment was completed at launch. The model has since been updated multiple times. No annual review has ever been conducted.
Under SB 24-205: At least annually, deployers must review each high-risk AI system to ensure it is not causing algorithmic discrimination. Additionally, a new impact assessment is required within 90 days of any intentional and substantial modification to the system. A stale initial assessment on an actively updated model does not satisfy the law.
A company deploys a customer service chatbot that handles insurance claim inquiries and eligibility determinations. The bot responds in natural language and is not identified anywhere as an AI system. Consumers interact with it believing they are communicating with a human.
Under SB 24-205: Any person doing business in Colorado that deploys an AI system intended to interact with consumers must ensure each consumer is told they are interacting with an AI system. Disclosure is not required only where it would be obvious to a reasonable person that they are interacting with AI. (§ 6-1-1704(2))
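One straightforward way to satisfy the § 6-1-1704(2) disclosure duty is to prepend a plain-language notice to the first message of every chatbot session. A minimal sketch — the wording and the session flag are this example's own assumptions, not statutory text:

```python
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, "
    "not a human representative."
)

def first_response(bot_reply: str, already_disclosed: bool) -> tuple[str, bool]:
    """Prepend the AI disclosure to the first reply of a session.
    'already_disclosed' is an illustrative per-session flag tracking
    whether this consumer has been told they are talking to an AI."""
    if not already_disclosed:
        return f"{AI_DISCLOSURE}\n\n{bot_reply}", True
    return bot_reply, True
```

The disclosure must reach the consumer before they could reasonably mistake the bot for a human, so front-loading it in the first turn is the conservative design.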
To satisfy SB 24-205 as a deployer of high-risk AI systems, every item below must be in place before June 30, 2026. Each requirement links directly to the relevant section of the signed act.
A documented, implemented policy covering identification, assessment, and mitigation of algorithmic discrimination risks. Must be an iterative process, regularly reviewed and updated over the life cycle of each high-risk AI system.
§ 6-1-1703(2)(a) — Signed Act Pages 9–10

Must document the system's purpose, intended use cases, deployment context, known discrimination risks, categories of data processed, performance metrics, transparency measures taken, and post-deployment monitoring plans.
§ 6-1-1703(3)(a)(I) and (3)(b) — Signed Act Pages 11–12

At least annually, and within 90 days of any intentional and substantial modification. Must confirm the system is not causing algorithmic discrimination. Records must be retained for at least three years after final deployment.
§ 6-1-1703(3)(g) and (3)(f) — Signed Act Page 13

Notify the consumer that a high-risk AI system is being used, state its purpose and nature, provide deployer contact information, and inform the consumer of their right to opt out of profiling where applicable.
§ 6-1-1703(4)(a) — Signed Act Pages 13–14

If the consequential decision is adverse to the consumer, provide a statement disclosing the principal reasons, the degree and manner in which the AI contributed, and the type and source of data processed.
§ 6-1-1703(4)(b)(I) — Signed Act Page 14

Consumers must be given the opportunity to correct any incorrect personal data the AI system processed in making an adverse consequential decision about them.
§ 6-1-1703(4)(b)(II) — Signed Act Page 14

Consumers receiving adverse consequential AI decisions must have access to an appeal process. Where technically feasible, this must include human review — unless providing appeal would not be in the consumer's best interest.
§ 6-1-1703(4)(b)(III) — Signed Act Page 14

A publicly available statement on your website summarizing the types of high-risk AI systems you deploy, how you manage discrimination risks for each, and the nature, source, and extent of data collected and used.
§ 6-1-1703(5)(a) — Signed Act Page 15

If you discover that a high-risk AI system has caused algorithmic discrimination, you must notify the Colorado Attorney General within 90 days of discovery, in the form and manner prescribed by the AG.
§ 6-1-1703(7) — Signed Act Page 17

The Colorado Attorney General has exclusive authority to enforce SB 24-205. There is no private right of action — individual consumers cannot sue companies directly under this law.
"The attorney general has exclusive authority to enforce this part 17... This part 17 does not provide the basis for, and is not subject to, a private right of action for violations of this part 17 or any other law."
A violation of SB 24-205 constitutes an unfair trade practice under § 6-1-105(1)(hhhh) of the Colorado Consumer Protection Act. Civil penalties under CRS § 6-1-112(1)(a) reach up to $20,000 per violation, with each affected consumer or transaction constituting a separate violation.
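Because each affected consumer or transaction can count as a separate violation, potential exposure scales linearly with the number of people a non-compliant system touches. A rough illustration — the per-violation cap is statutory, while the consumer count below is hypothetical:

```python
MAX_PENALTY_PER_VIOLATION = 20_000  # CRS § 6-1-112(1)(a) cap, in dollars

def max_exposure(affected_consumers: int) -> int:
    # Each affected consumer or transaction may be treated as a
    # separate violation, so the theoretical maximum multiplies.
    return affected_consumers * MAX_PENALTY_PER_VIOLATION

# A hiring tool that wrongly screened 500 applicants could, in
# theory, face up to $10,000,000 in civil penalties.
print(max_exposure(500))  # → 10000000
```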
Source: CRS § 6-1-112 — Civil Penalties (Colorado.Public.Law) · CRS 2024 Title 6 PDF (leg.colorado.gov)

CFVA.ai has developed the AI Accountability and Compliance Protocol (AIACP v0.3) — a 16-section governance framework with 60+ requirements for responsible AI deployment. The protocol addresses anti-discrimination requirements, human oversight mechanisms, transparency obligations, and the use of AI in high-stakes contexts.
The AIACP draws on frameworks including Colorado SB 24-205, the EU AI Act, and the NIST AI Risk Management Framework. It is published openly and available for review and comment.
Read the full protocol: cfva.ai/aiacp →
Ongoing AI accountability and governance analysis: cfva.ai/signal (The Signal) →
CFVA.ai is a civic intelligence platform. This page is published to make this law accessible and findable. Officials, researchers, journalists, and anyone thinking seriously about AI governance are welcome to reach out with questions or with feedback on the AI Accountability and Compliance Protocol.
CFVA.ai is a civic intelligence platform indexing entities across 32 live federal data sources — FTC, CFPB, FDA, OSHA, HHS OIG, SEC, and more. Professional research access available at cfva.ai.
The AI Accountability and Compliance Protocol (AIACP) and The Signal are published as contributions to the broader conversation on AI governance and accountability.