State Sen. Robert Rodriguez did not exactly celebrate when he introduced Senate Bill 189. “I’m not happy with everything in the bill,” he told observers of Colorado’s latest attempt to reshape its landmark AI Act. It was, as political admissions go, a remarkably honest one — and it may be the most accurate summary available of where American AI governance currently stands. Nobody is happy. Nobody is getting what they want. And the question worth asking is whether that unhappiness is a sign that the process is working, or a sign that the process is broken.
Colorado’s AI Act was itself a landmark. When Governor Jared Polis signed it into law in May 2024, Colorado became the first U.S. state to pass comprehensive AI legislation targeting high-risk algorithmic systems. The law borrowed conceptually from the EU AI Act — a risk-tiered framework focused on consequential decisions affecting individuals in domains like employment, housing, credit, and healthcare. It created obligations for developers and deployers of high-risk AI systems, required adverse action disclosures, and established a right for affected individuals to appeal automated decisions. The AI policy community’s reaction was swift and divided: advocates saw a meaningful floor of protection for people increasingly subject to algorithmic decision-making; the technology industry saw compliance nightmares and a warning shot that could chill AI development in the United States.
The governor himself was ambivalent. Polis signed the bill while simultaneously writing a letter urging the legislature to revise it before it ever took effect. That tension — enacting a law while calling for its amendment — set the stage for the prolonged negotiating exercise that has followed, of which Senate Bill 189 is only the latest chapter.
What the New Proposal Actually Does
Senate Bill 189 does two things that matter. First, it pushes the law's effective date from June 2025 to January 2027, buying the legislature time to keep working through a framework that its own author acknowledges remains unsettled. Second, and more substantively, it removes the requirement that organizations disclose how a given AI system arrives at a consequential decision in employment, lending, and other covered domains.
That second change deserves scrutiny, because the explainability requirements it removes were not incidental to the law's original purpose; they were central to it. The animating concern behind Colorado's AI Act, and behind AI regulation generally, is the problem of consequential opacity: people whose job applications are rejected, whose loan requests are denied, or whose insurance is priced higher are increasingly subject to decisions made by algorithmic systems they cannot see, cannot interrogate, and cannot meaningfully contest. Explainability requirements are one of the few tools regulators have to address that opacity. They force organizations deploying AI to be able to articulate, at least in general terms, what factors drove a given outcome.
The industry argument against these requirements is not trivial. Faithful technical explainability for modern machine learning systems, particularly large neural networks, is genuinely hard. The field of explainable AI is active and contested. Requiring organizations to disclose how a system works, when the system itself operates in ways that resist clean human narration, risks producing disclosures that are technically compliant but practically meaningless. Worse, critics argue, it could create legal liability for organizations whose explanations are later characterized as incomplete or misleading, even where they made good-faith efforts.
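To make the dispute concrete, consider a deliberately simple sketch of what "articulating the factors that drove an outcome" can look like. Everything in it (the feature names, the weights, the approval threshold, the applicant) is hypothetical and invented for illustration; nothing is drawn from the statute or from any real lender's system. The point of the sketch is that for a simple linear model the explanation is exact and cheap to produce, while nothing comparably clean exists for the large neural networks increasingly used in these domains.

```python
# A minimal sketch, assuming a hypothetical linear credit-scoring model.
# All names, weights, and thresholds are invented for illustration.

WEIGHTS = {
    "debt_to_income_ratio": -3.0,    # higher ratio pushes the score down
    "years_of_credit_history": 0.8,  # longer history pushes the score up
    "recent_delinquencies": -2.5,    # recent missed payments push it down
    "income_thousands": 0.05,        # income contributes modestly
}
APPROVAL_THRESHOLD = 2.0

def explain_decision(applicant: dict) -> tuple[bool, float, list[str]]:
    """Score an applicant and name the factors that most hurt the outcome.

    In a linear model, each feature's contribution (weight * value) is
    exact and additive, so "what drove the decision" has a clean, complete
    answer. A large neural network admits no such decomposition, which is
    the technical difficulty the industry objection points at.
    """
    contributions = {name: w * applicant[name] for name, w in WEIGHTS.items()}
    score = sum(contributions.values())
    approved = score >= APPROVAL_THRESHOLD
    # Report the two contributions that pulled the score down the most.
    adverse = sorted(contributions, key=contributions.get)[:2]
    return approved, score, adverse

applicant = {
    "debt_to_income_ratio": 0.65,
    "years_of_credit_history": 4,
    "recent_delinquencies": 1,
    "income_thousands": 48,
}
approved, score, reasons = explain_decision(applicant)
print(f"approved={approved}, score={score:.2f}")
print(f"principal adverse factors: {reasons}")
```

The gap between those two cases, a model class where the explanation is trivial and one where it is an open research problem, is exactly the space a calibrated explainability standard would have to navigate.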
These are real tensions. But removing the requirements entirely, rather than calibrating them more carefully, is a significant retreat from the law's original ambition. The affected individuals whose recourse depended on understanding why a decision went against them gain nothing from the change. The only thing the repeal spares them is the risk of being misled by a bad explanation, and that is cold comfort when what they receive instead is no explanation at all.
The Delay as a Negotiating Strategy
The effective date extension is worth examining separately from the substantive changes, because it plays a distinct role in the political dynamics around the law.
Deadlines matter in legislative negotiations. The original June 2025 effective date created urgency — a hard stop by which organizations operating in Colorado needed to have compliance programs in place. That urgency was a source of leverage for the law’s supporters. The closer the deadline, the more pressure on industry to engage constructively with implementation rather than continue lobbying for wholesale revision. Moving the date to January 2027 releases that pressure. It gives opponents of the law nearly two additional years to push for further amendments, to argue at the federal level for preemption, or simply to run out the political clock in hopes that a changed legislature or governor’s office will ultimately water the law down further.
There is a counterargument. The complexity of AI compliance is real, and organizations genuinely need time to understand what is required of them, build the processes to comply, and test whether those processes work. A law that takes effect before anyone knows how to comply with it produces compliance theater rather than genuine accountability. From that perspective, a delay that comes with continued legislative engagement — rather than simply a longer runway for the same fight — could be constructive.
The difficulty is distinguishing between those two scenarios from the outside. A delay looks the same whether it is a good-faith acknowledgment that the law needs more work or a tactical maneuver to dilute it. What matters is what happens in the interim: whether the legislature actually uses the additional time to strengthen the framework, or whether it uses it to accommodate industry preferences that would not survive the scrutiny of a hard deadline.
Colorado’s Unenviable Position
It is important to have some sympathy for the position Colorado finds itself in. The state did something genuinely difficult: it enacted comprehensive AI legislation in the absence of federal action, in the absence of clear technical standards, and in the face of intense opposition from an industry whose lobbying resources vastly outstrip those of the advocates on the other side. The political coalition that passed the original law was fragile, and the law it produced was, inevitably, imperfect.
The criticism that Colorado moved too fast is easy to make. But the alternative — waiting for federal consensus that has not arrived and shows few signs of arriving — means waiting indefinitely while algorithmic decision-making continues to expand into every consequential corner of American life. The people most exposed to high-risk AI systems are not primarily tech workers in San Francisco or policy professionals in Washington. They are job applicants screened by automated hiring tools, tenants evaluated by landlord scoring systems, borrowers assessed by credit algorithms, and people seeking public benefits whose applications are processed by automated fraud detection. For those people, “wait for the perfect law” is not a neutral position.
The EU AI Act, which provided Colorado's conceptual framework, took years to negotiate and involved 27 member states, thousands of stakeholders, and an enormous institutional apparatus. It is still being implemented. Expecting a single U.S. state legislature to produce a technically perfect, commercially workable, individually protective AI governance framework in one session, and then to stick with it, was probably never realistic. What Colorado has produced is a living negotiation, and that negotiation, for all its messiness, may be more valuable than the alternative.
The National Implications
Colorado’s struggle is not merely a Colorado story. More than a dozen states have introduced AI legislation in recent sessions, and nearly all of them are watching what happens in Denver. The shape of Colorado’s eventual framework — what it requires, what it exempts, how it handles explainability, what enforcement looks like — will influence what is possible in Austin and Albany and Sacramento and Annapolis.
The federal dimension looms over all of this. The current administration’s posture on AI regulation has been broadly permissive — focused on removing what it characterizes as barriers to American AI leadership rather than establishing accountability frameworks. Federal AI preemption — the possibility that Congress passes legislation that explicitly displaces state AI laws — has emerged as a genuine threat to state-level efforts. If that happens, the years Colorado spent negotiating its AI Act could be legislatively erased by a federal baseline that offers considerably weaker protections.
That possibility creates its own political dynamics. For industry advocates, federal preemption is an attractive endgame: replace a patchwork of state laws with a single national standard that is easier to comply with and, not coincidentally, likely to be more accommodating than the more aggressive state frameworks. For consumer and civil rights advocates, it is a nightmare scenario in which the most protective state laws are swept away in favor of a lowest-common-denominator federal floor. For state legislators in the middle, like Rodriguez, it creates a dilemma: negotiate hard and risk enacting something stringent that is later preempted, or moderate the law in hopes of producing something that survives a federal override — while potentially giving up the protections that justified the effort in the first place.
The Harder Question
Beneath the politics of Senate Bill 189 is a harder question that AI governance advocates have not yet answered satisfactorily: what does a good AI accountability law actually look like, given the technical realities of modern AI systems?
The honest answer is that we do not fully know. The EU’s experience is instructive but not directly transferable. The technical literature on algorithmic accountability is advancing but not settled. The empirical evidence on how different regulatory approaches affect both AI deployment and individual outcomes is thin. Regulators, legislators, and advocates are all working with incomplete information about a technology that is itself changing faster than the policy process can track.
This uncertainty does not mean that regulation is premature or that the Colorado effort is misguided. It means that regulatory humility (the willingness to acknowledge what is not known, to build in review mechanisms, and to treat implementation as a learning process rather than a one-time enactment) is not a weakness. It is the appropriate posture.
What it also means is that the negotiations Rodriguez described, in which no one is fully satisfied and everyone is compromising, may actually be the right process even when it produces imperfect outputs. The alternative, waiting until everyone agrees or until the perfect framework emerges from academic consensus, is a luxury purchased at the expense of the people subject to high-risk AI systems right now.
Rodriguez said the unhappiness of all parties was an important place to start. He may be right about the dynamic, even if the specific compromises in Senate Bill 189 are debatable. The argument worth having is not whether Colorado should be doing this (it should) but whether the specific trades being made serve the people the law was designed to protect. On explainability, those who would remove the requirement have not yet won that argument. On the delay, the test will be whether the legislature uses the time to strengthen accountability or to quietly abandon it.
Colorado made history by going first. What it does with that history is still being written.