Colorado Hits Reset on AI Regulation With a New AI Act: What Developers and Deployers Need to Know

15 May 2026
Client Alert

In a last-minute overhaul of Colorado’s AI regulatory framework, Governor Jared Polis signed SB 189, “Automated Decision-Making Technology,” into law on May 14, 2026, repealing and replacing the Colorado AI Act mere weeks before the original statute’s June 30, 2026 effective date. The new law adopts a lighter-touch regulatory regime that eliminates certain obligations for developers and deployers of AI systems. As a result, the 2024 Colorado AI Act will not take effect. The new law takes effect on January 1, 2027.

While more streamlined in its compliance obligations, SB 189 retains broad applicability across the use of AI in consequential decision‑making and will continue to shape how AI is regulated at the state level. SB 189 eliminates several of the Colorado AI Act’s key requirements for “high‑risk AI systems,” including risk assessment programs, impact assessments, and the duty to use reasonable care to prevent algorithmic discrimination. SB 189 replaces that framework with a disclosure‑based approach focused on “automated decision‑making technology” (ADMT) used to materially influence consequential decisions, emphasizing consumer notices, post‑adverse outcome disclosures, and limited consumer rights.

This alert highlights key differences between the repealed Colorado AI Act and SB 189, examines how SB 189 fits within the broader state AI regulatory landscape, and provides recommendations for developers and deployers ahead of enforcement.

Key Definitions Under the New Law

Adverse Outcome: A decision that denies, revokes, or materially reduces a consumer’s access to, eligibility for, or the provision of an opportunity or service; or results in materially less favorable terms that are reasonably likely to materially limit, delay, or effectively deny, or otherwise fundamentally alter, a consumer’s access to, eligibility for, or the provision of an opportunity or service compared to terms offered to similarly situated consumers.

Automated Decision‑Making Technology (ADMT): A technology that processes personal data and uses computation to generate output that is used to make or assist a decision or determination concerning an individual. ADMT excludes: (1) common infrastructure and security tools, (2) tools used solely to summarize or present information for human review, and (3) consumer‑facing systems that provide information or recommendations but are not intended or permitted to be used in consequential decisions.

Consequential Decision: A decision about a consumer that relates to the provision of or a consumer’s access to or eligibility for a “covered domain”; or a decision about a consumer that relates to a differentiated price, cost sharing, compensation, or other material terms that is reasonably likely to materially limit, effectively deny, or otherwise fundamentally alter the consumer’s eligibility for a “covered domain.” Covered domains include education, employment, financial/lending services, essential government services/public benefits, healthcare services, residential real estate, and insurance.

Consumer: An employee or job applicant who is a Colorado resident, or any individual whose access to, eligibility for, or opportunity in Colorado is evaluated in a consequential decision by a person doing business in Colorado.

Covered ADMT: ADMT that is used to “materially influence” a consequential decision. “Materially influence” means the ADMT output is a non‑de minimis factor that is used in making or affecting the outcome of a consequential decision, including by ranking, scoring, recommending, or otherwise meaningfully altering how a consequential decision is made.

Colorado AI Act vs. SB 189

SB 189 meaningfully restructures Colorado’s approach to AI regulation, shifting from a comprehensive governance framework to a disclosure‑driven regime focused on consequential decision‑making. The discussion below summarizes some of these differences:

Liability and Fault Allocation

SB 189 clarifies how liability may be assigned when ADMT is involved in unlawful discrimination claims under existing law. Both developers and deployers may be held liable where a covered ADMT materially influences a consequential decision, with fault allocated based on each party’s relative responsibility.

The law limits developer liability to circumstances where the system is used as intended and materially contributes to the alleged violation, while making clear that developers are not liable for downstream uses that fall outside intended or documented uses. Deployers remain responsible for their independent actions, including misuse of a covered ADMT.

The law does not create joint and several liability beyond what existing law provides and does not alter underlying legal standards. However, contractual provisions that seek to indemnify a party for its own discriminatory conduct in connection with ADMT use are deemed void as against public policy.

The law reinforces that the use of ADMT does not diminish existing obligations under anti‑discrimination or consumer protection laws and that compliance with SB 189 does not provide a defense to liability under other applicable law.

Colorado’s Role in the Evolving State AI Landscape

Despite the shift to a more streamlined framework, Colorado’s approach to AI regulation remains distinct among U.S. states. Most state AI laws continue to take a sector‑specific or use‑case‑specific approach, targeting areas such as employment, healthcare, or consumer interactions with chatbots. States such as California, Illinois, Texas, New York, and Utah have moved quickly to adopt AI legislation, but their laws generally focus on narrower categories of AI use or transparency obligations rather than imposing a comprehensive AI governance regime. Colorado’s regulatory approach, in both its original and its revised forms, applies across multiple domains of consequential decision‑making and governs both developers and deployers.

Whether the Colorado approach will become the prevailing model for broader state AI regulation remains to be seen.

Comparing Colorado SB 189 and CCPA ADMT Regulations

Colorado SB 189 resembles the notice‑and‑choice framework set out in the recently finalized California Consumer Privacy Act (CCPA) ADMT regulations. Both frameworks require upfront, conspicuous disclosure to consumers before or at the point of interaction with ADMT. Both frameworks pair initial notice with consumer rights, but they diverge in structure: SB 189 provides post‑adverse outcome rights, allowing consumers to seek explanations, correct data, and potentially obtain human review after a consequential decision, whereas the CCPA regulations establish proactive rights, including the ability to opt out of ADMT use and to request information about its use regardless of outcome.

Next Steps

SB 189 contemplates additional rulemaking that will be critical to interpreting and implementing several of its core provisions. In particular, the attorney general is directed to adopt rules prior to January 1, 2027, to clarify the content and format of post‑adverse outcome disclosures, and to implement the law’s consumer rights provisions, including access, correction, and meaningful human review.

The law also grants the attorney general authority to issue guidance on key definitions, including the meaning of “materially influence,” which will play a central role in determining the law’s scope.

These forthcoming rules are expected to provide key details on how the law will operate in practice, including sector‑specific expectations and the interaction with existing state and federal requirements.

Takeaways

Developers and deployers of ADMT systems should consider implementing the following steps to prepare for compliance:

  • Identify covered use cases: Map where ADMT is used to materially influence consequential decisions across the business.
  • Update documentation practices: Ensure clear, consistent documentation of system purpose, data categories, limitations, and appropriate use.
  • Implement notice and disclosure workflows: Build processes to provide point‑of‑interaction notice and timely post‑adverse outcome disclosures.
  • Operationalize consumer rights: Establish procedures to handle data access and correction requests, and provide meaningful human review and reconsideration.
  • Update contracts: Review indemnification provisions in contracts between developers and deployers; provisions that purport to indemnify a party for its own discriminatory conduct in connection with ADMT use in consequential decisions will be void as against public policy.
  • Strengthen recordkeeping: Maintain records sufficient to demonstrate compliance, including the law’s three‑year retention requirement.

Maya Vishwanath, an AI Analyst at Morrison Foerster, contributed to this alert.

We are Morrison Foerster — a global firm of exceptional credentials. Our clients include some of the largest financial institutions, investment banks, and Fortune 100, technology, and life sciences companies. Our lawyers are committed to achieving innovative and business-minded results for our clients, while preserving the differences that make us stronger.

Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Prior results do not guarantee a similar outcome.