In the high desert of Utah, where innovation often meets pragmatism, a new law has quietly taken root, marking a significant step in America’s patchwork approach to regulating artificial intelligence. Signed into law on March 13, 2024, by Governor Spencer Cox, the Utah AI Policy Act (SB 149) positions the state as a pioneer in addressing the private-sector use of generative AI: systems that churn out human-like text, images, or audio with little oversight, and whose rise has created immense opportunities for firms like Captain Compliance to provide AI governance. Unlike the sweeping ambitions of European regulators with the EU AI Act and GDPR, or the privacy laws in California, Utah’s approach is narrower and more grounded, aiming to balance consumer protection with a light-touch encouragement of technological progress. So what does this mean for businesses, residents, and the broader national conversation about AI?
What is the Utah AI Policy Act?
The Utah AI Policy Act is not a sprawling manifesto on AI governance. Instead, it’s a targeted piece of legislation that amends existing consumer protection laws to account for the rise of generative AI—systems trained on vast datasets to produce outputs that mimic human creativity. Think of the chatbot that books your doctor’s appointment or the voice assistant that schedules your day. Utah’s lawmakers, led by State Senator Kirk Cullimore, sought to ensure transparency and accountability without stifling the state’s burgeoning tech scene. The act, which took effect on May 1, 2024, is less about controlling AI’s development and more about managing how it interacts with people in everyday commerce.
At its core, the law demands disclosure. If a business uses generative AI to engage with consumers—say, a retailer deploying a chatbot to handle returns—it must clearly reveal that fact when asked. For certain regulated professions, like doctors or accountants, the disclosure must be proactive, upfront, and unmistakable. The act also establishes an Office of Artificial Intelligence Policy to study AI’s impacts and foster innovation through a “learning laboratory” program, a kind of regulatory sandbox where companies can test AI systems with reduced penalties.
Has the AI Act Been Passed In Utah?
Yes. The Utah AI Policy Act is no longer a proposal; it is law. After unanimous passage in the Utah Legislature, Governor Cox signed SB 149 on March 13, 2024, just as the European Parliament was adopting its own AI Act across the Atlantic. Utah’s law went into effect less than two months later, on May 1, 2024, making it the first comprehensive state-level AI regulation in the U.S. to focus on private-sector use. That fits a familiar pattern: Utah was among the first states to pass a consumer privacy law for its residents, right after California and around the same time that Virginia, Colorado, and Connecticut decided to take privacy risks seriously for theirs. While other states like California have tackled bots or deepfakes in narrower contexts, Utah’s move stands out for its breadth and speed, a rare instance of a state government outpacing the federal void on AI policy.
What Are the Main Points of the Utah AI Act?
The Utah AI Policy Act is built on a few key pillars, each reflecting a cautious optimism about AI’s role in society. Here’s a breakdown, because sometimes clarity demands simplicity:
- Disclosure Requirements: Businesses must disclose generative AI use if prompted by a consumer, while regulated professionals (e.g., healthcare providers) must do so proactively, whether verbally or electronically, before interactions begin. Remember that call you got the other day that sounded a little robotic, like an AI voice bot? Under this law, that now has to be disclosed when you ask (a short sketch after this list shows one way a chatbot might implement the rule).
- Consumer Protection Integration: The act ties AI usage to Utah’s existing consumer protection framework, meaning companies can’t dodge liability by blaming their AI tools for deceptive practices or dark patterns, both of which are giant no-nos in privacy frameworks.
- Enforcement: The Utah Division of Consumer Protection can fine violators up to $2,500 per incident, with courts able to add injunctions, disgorgement, or further penalties up to $5,000 for repeat offenders.
- Innovation Support: The Office of Artificial Intelligence Policy oversees a “learning lab” to study AI’s risks and benefits, offering companies regulatory relief (like reduced fines) while they experiment.
- Synthetic Data Exemption: Data generated by AI that doesn’t contain personal information, such as synthetic datasets, falls outside Utah’s privacy law definitions, easing compliance for developers.
These points weave a narrative of transparency and accountability, but they also hint at Utah’s reluctance to over-regulate, fitting for a state absorbing plenty of residents from California who say they left precisely because of over-regulation. It’s a law that says, “Tell us what you’re doing, and we’ll let you figure out the rest within reason.”
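To make the disclosure pillar concrete, here is a minimal, hypothetical sketch of how a consumer-facing chatbot might satisfy both rules: proactive disclosure for regulated occupations and on-request disclosure for everyone else. The pattern list, function names, and wording are illustrative assumptions rather than language from SB 149, and none of this is legal advice.

```python
import re

# Hypothetical phrases a consumer might use to ask whether they're talking to AI.
# A real compliance filter would need far broader coverage (and legal review).
AI_QUESTION_PATTERNS = [
    r"\bare you (an? )?(ai|bot|robot|computer)\b",
    r"\bam i (talking|speaking) to a (human|person|real person)\b",
    r"\bis this (an? )?(ai|bot|automated)\b",
]

DISCLOSURE = "You are interacting with generative artificial intelligence, not a human."


def asked_if_ai(message: str) -> bool:
    """Return True if the consumer appears to be asking whether they're talking to AI."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in AI_QUESTION_PATTERNS)


def respond(message: str, generate_reply, regulated_occupation: bool, first_turn: bool) -> str:
    """Wrap the chatbot's reply with the disclosures the act appears to call for.

    Regulated occupations disclose prominently up front (here, on the first turn);
    everyone else discloses clearly whenever the consumer asks or prompts.
    """
    reply = generate_reply(message)
    if (regulated_occupation and first_turn) or asked_if_ai(message):
        return f"{DISCLOSURE}\n\n{reply}"
    return reply


if __name__ == "__main__":
    fake_model = lambda msg: "Sure, I can help you reschedule that appointment."
    print(respond("Wait, are you a bot?", fake_model,
                  regulated_occupation=False, first_turn=False))
```

A production system would need much richer question detection (voice transcripts, multiple languages, paraphrases) and a compliance review, but the shape of the logic stays the same: detect the trigger, then disclose clearly and conspicuously.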
What Does the AI Act Apply To?
The act casts a wide net over generative AI, defined as systems that are trained on data, interact via text, audio, or visuals, and produce unscripted outputs with minimal human oversight. This covers chatbots, voice assistants, and even AI-generated marketing content. It applies to two main groups: businesses under the Utah Division of Consumer Protection’s purview (think telemarketers or online retailers) and “regulated occupations” requiring state licenses—doctors, architects, social workers, and more. If you’re a Utah therapist using an AI scheduler or a store deploying a virtual artificial intelligence assistant, this law has your number.
Utah AI Policy Act Summary
In essence, the Utah AI Policy Act is a consumer-first law with a pro-innovation twist. It mandates transparency (disclose your AI or face the consequences) while holding companies accountable for what their tools say or do. It doesn’t ban AI or dictate its inner workings; instead, it extends Utah’s consumer protection ethos into the digital age. The creation of the Office of Artificial Intelligence Policy signals a longer game: Utah wants to learn from AI as it grows, not just react to it. For residents, it’s a promise of clarity in an AI-saturated world. For businesses, it’s a gentle nudge to play fair.
Utah AI Policy Act Text
The full text of SB 149, available on Utah’s legislative website, runs dozens of pages, but its heart lies in a few key sections. Under Section 13-2-12, it states: “A person who uses, prompts, or otherwise causes generative artificial intelligence to interact with a person… shall clearly and conspicuously disclose… that the person is interacting with generative artificial intelligence and not a human, if asked or prompted.” For regulated professions, Section 13-63-201 ups the ante: disclosures must be “prominent” and preemptive. Penalties are outlined in Section 13-2-6, empowering the Division of Consumer Protection to act decisively. It’s dry legalese, but it paints a vivid picture of Utah’s priorities.
Artificial Intelligence Policy Act
Officially titled the Artificial Intelligence Policy Act within SB 149, this segment establishes the Office of Artificial Intelligence Policy, a small but ambitious entity tasked with shaping Utah’s AI future. The office doesn’t just enforce; it experiments, inviting companies to join its learning lab for up to two years of regulatory leniency. Imagine a startup testing an AI customer service tool: it might face lighter fines if something goes awry, provided it shares insights with the state. It’s a pragmatic compromise, reflecting Utah’s desire to stay competitive in tech without drowning in red tape.
Colorado Artificial Intelligence Act vs. Utah AI Law
Colorado’s SB 24-205, signed into law in May 2024, takes a bolder swing at AI regulation, focusing on “high-risk” systems that make consequential decisions—like hiring or lending. Unlike Utah’s disclosure-centric approach, Colorado mandates risk assessments and mitigation for AI that could discriminate, tying it closely to privacy laws like the Colorado Privacy Act. Utah’s law, by contrast, is narrower, emphasizing consumer-facing transparency over systemic oversight. Where Colorado seeks to prevent harm, Utah aims to inform and adapt.
Utah SB 149
SB 149 is the legislative vehicle for Utah’s AI ambitions, a bill that sailed through the statehouse with bipartisan support. Sponsored by Senator Cullimore, it reflects a conservative state’s cautious embrace of tech regulation: less about control, more about clarity. Its passage in early 2024 marked Utah as a trailblazer, even if its scope feels modest next to Colorado’s or California’s efforts.
California AI Transparency Act vs. Utah AI Law
California’s AB 2013, a training data transparency law signed in September 2024, zeroes in on how AI models are built, requiring companies to disclose the data used to train them. It’s a privacy-driven approach, rooted in California’s robust data protection laws like the CCPA. Utah’s law sidesteps this, focusing instead on end-user interactions. If California wants to peek under the hood, Utah just wants to know who’s driving, and only if you ask.
Utah 13-2-1
Utah Code Section 13-2-1 is the backbone of the state’s consumer protection regime, administered by the Division of Consumer Protection. SB 149 amends it to explicitly include generative AI, ensuring that AI-driven deception (say, a chatbot misrepresenting a product) falls under the same scrutiny as human fraud. It’s a seamless integration, leveraging existing law to meet new challenges.
Illinois AI Law vs. Utah AI Law
Illinois lacks a comprehensive AI law as of now, but its Biometric Information Privacy Act (BIPA) offers a privacy-focused lens on tech regulation. BIPA, with its strict consent rules for biometric data, contrasts sharply with Utah’s lighter touch. Where Illinois punishes misuse of personal data with hefty fines, Utah opts for disclosure and dialogue, a softer, less punitive path. BIPA has already seen some drastic changes, including an amendment that limits recovery to a single claim per person rather than one for every scan (such as each time an employee clocked in with a fingerprint), a per-scan regime that threatened to bankrupt Illinois-based businesses.
Utah’s AI Policy Act may not rewrite the rulebook, but it’s a quiet milestone: a state saying, “We see AI, and we’re ready to live with it.” As other states watch and learn, Utah’s experiment could shape the messy, evolving landscape of AI governance in America. Just as state privacy laws exploded in popularity and necessitated cookie consent banners and updated privacy notices on websites, expect the same for AI regulation in due time.
Comparing AI Laws Between Utah, California, Colorado, and Illinois
To understand how Utah’s AI Policy Act stacks up against other states, consider this comparison of regulatory scope across Utah, Colorado, California, and Illinois. Each state’s approach reflects different priorities—from transparency to risk management—while intersecting with privacy concerns in unique ways.
| State | Regulatory Focus | Privacy Linkage |
| --- | --- | --- |
| Utah | Disclosure & innovation: modest scope emphasizing transparency in consumer interactions and fostering AI experimentation. | Links to consumer protection, not privacy laws directly; exempts synthetic data from personal data definitions. |
| Colorado | Risk management: broader, more aggressive focus on preventing harm from high-risk AI systems. | Aligns with the Colorado Privacy Act, emphasizing fairness in AI decision-making. |
| California (AB 2013) | Training data transparency: wide-reaching but focused on data origins and privacy protections. | Builds on the CCPA, prioritizing transparency in data use for AI models. |
| Illinois | Biometric privacy: narrow but potent focus on specific AI uses like biometric data. | Relies on BIPA, a privacy law with indirect AI implications. |