New York’s Responsible AI Safety and Education Act (commonly referred to as the “RAISE Act”) represents a consequential shift in U.S. artificial intelligence governance: rather than attempting to ban categories of AI or require pre-approval for model releases, New York has chosen a transparency-and-accountability framework aimed squarely at the most powerful “frontier” AI developers. In practice, the RAISE Act treats AI safety as an operational governance issue—documentation, incident reporting, oversight, and auditable protocols—while preserving developer flexibility on the technical means used to achieve risk reduction.
The law is notable not only for what it mandates, but for what it signals: two of the country’s most influential technology states (New York and California) are now converging around a baseline “trust but verify” model for frontier AI. That convergence reduces the likelihood that large developers can treat state AI compliance as an afterthought and increases pressure for federal policymakers to either harmonize emerging standards or accept a de facto multi-state regime.
What the RAISE Act Is Trying to Solve
Frontier models are increasingly capable general-purpose systems that can be adapted for a wide range of high-impact tasks. Supporters of the RAISE Act argue that the combination of broad capability, rapid iteration cycles, and the economic incentives to ship quickly can produce systemic risk: safety incidents can scale faster than traditional product oversight mechanisms can respond.
Instead of defining a long list of prohibited uses, the RAISE Act focuses on the “developer layer.” The premise is that meaningful risk reduction begins upstream—by requiring the organizations building the most capable models to maintain structured safety programs, disclose their approach publicly, and report serious incidents rapidly to the state.
Who Is Covered: A Frontier-Developer Focus With a High Revenue Threshold
The RAISE Act is designed to apply to large, well-resourced AI developers rather than startups experimenting with narrowly scoped models. The law’s coverage approach is intentionally selective to avoid creating a compliance tax that suppresses smaller innovators. The underlying policy bet is that the bulk of systemic risk sits with the most capable general-purpose systems and the entities with the resources to train and deploy them at scale.
By concentrating obligations on major developers, New York is aligning itself with a regulatory pattern emerging globally: set governance expectations where capabilities and impact are highest, then expand coverage only if evidence supports it.
Core Compliance Obligations Under the RAISE Act
Although public discussion often reduces the statute to “transparency,” the RAISE Act is better understood as a governance law. It establishes requirements that push AI developers toward a repeatable internal safety lifecycle—one that can be examined, audited, and compared across firms.
In practical terms, covered developers should anticipate obligations in four operational lanes:
- Public safety governance disclosures: maintaining and publishing an explanation of safety protocols and risk management approaches for frontier models.
- Incident response and reporting: reporting qualifying AI safety incidents to the state within a short timeframe (the RAISE Act references a 72-hour reporting window tied to the determination that an incident occurred).
- Oversight engagement: interaction with a designated state oversight function tasked with assessing covered developers and issuing periodic reports.
- Integrity of submissions: exposure to enforcement for failures to submit required information or for making false statements in required reporting.
The statute’s structure is intentionally not prescriptive about the “how.” New York is not mandating a particular technical testing framework, evaluation suite, or model-card template. Instead, it is compelling developers to adopt, document, and stand behind a safety program that can be scrutinized.
Timing and Enforcement Posture: Why 2027 Matters
The RAISE Act’s effective date (January 1, 2027) is not a footnote. It is a deliberate policy lever that accomplishes two goals at once: (1) it gives developers time to build robust compliance programs rather than relying on “paper compliance,” and (2) it gives the state time to stand up a credible oversight capability that can evaluate frontier model governance without improvising in real time.
Enforcement is structured around civil actions and a tiered penalty model, with lower maximum fines than earlier drafts. That reduction should not be misread as weak oversight. Smaller maximum penalties can reduce litigation risk and improve the statute’s chances of surviving legal challenge, while still creating strong incentives for large developers that are sensitive to reputational exposure and regulatory escalation.
What the RAISE Act Does Not Do (By Design)
Understanding the RAISE Act requires attention to what it avoids. These omissions are a policy choice: New York appears to be building an informational foundation before pursuing heavier intervention.
- It does not create a licensing or pre-approval regime for frontier model deployment.
- It does not require source-code disclosure to the public.
- It does not impose a state-mandated “kill switch” or shutdown mechanism as a universal technical requirement.
- It does not comprehensively regulate downstream users (deployers) across every sector; it concentrates on the developer layer.
- It does not attempt to resolve every AI policy domain (employment decisions, education, housing, insurance, deepfakes) in a single statute.
This is a pragmatic choice. For frontier models, governance obligations and incident reporting are often viewed as the most defensible “first layer” of regulation: they generate data for future policymaking while remaining adaptable to fast-changing technical realities.
How New York Compares to Other State AI Laws
State AI regulation in the U.S. is rapidly sorting into two broad categories:
(A) Frontier model safety and transparency laws focused on the builders of the most capable general-purpose systems (New York and California are leading examples).
(B) Consumer protection and anti-discrimination laws focused on “high-risk” use cases and deployers making consequential decisions (Colorado is the leading example).
Utah’s approach sits somewhat apart: it is narrower and disclosure-oriented, emphasizing transparency when consumers interact with generative AI in regulated contexts.
RAISE Act vs. Other Major State AI Acts
| State | Law / Approach | Primary Scope | Who Is Covered | Core Obligations | Enforcement Model | Effective Timing (General) |
|---|---|---|---|---|---|---|
| New York | RAISE Act (frontier AI safety & transparency governance) | Frontier model developer governance, transparency, incident reporting, oversight | Large frontier model developers (high revenue threshold; designed for major players) | Publish safety protocols / frameworks; report qualifying incidents within 72 hours of determining an incident occurred; cooperate with oversight; avoid false statements in required submissions | Civil enforcement (including Attorney General actions); tiered civil penalties | January 1, 2027 |
| California | SB 53 (Transparency in Frontier Artificial Intelligence Act) (frontier transparency + reporting channel) | Frontier model transparency reports; frameworks for catastrophic-risk governance; incident reporting mechanisms | Frontier developers (with additional duties for “large” frontier developers; revenue threshold concept similar to NY) | Transparency reporting before deploying or substantially modifying a frontier model; maintain documented protocols for catastrophic-risk management (for large developers); reporting pathways for critical incidents; whistleblower protections | Civil enforcement by state; penalties for noncompliance; agency update / recommendation loop | Phased / statutory effective dates (developers should plan for multi-year implementation) |
| Colorado | SB 24-205 (Colorado AI Act) (high-risk AI consumer protection / anti-discrimination) | High-risk AI systems used in consequential decisions; algorithmic discrimination risk management | Developers and deployers of “high-risk” AI (especially in employment, housing, lending, insurance, education, etc.) | Duty of reasonable care; risk management policies; documentation; notices/disclosures in certain contexts; consumer rights processes; governance controls to mitigate discrimination | State enforcement via consumer protection frameworks; compliance safe harbors / rebuttable presumptions under specified conditions | Mid-2026 (commonly discussed as late June 2026 after implementation delays) |
| Utah | Utah AI Policy Act / related amendments (targeted disclosure in consumer interactions) | Disclosure when consumers interact with generative AI in certain regulated services or when asked | Entities using generative AI in covered consumer-facing contexts (including regulated occupations/services) | Clear disclosure obligations (often triggered by consumer inquiry or specific context); limits on “blame the AI” defenses in consumer protection matters | Enforcement through consumer protection authority and professional regulation frameworks | Effective 2024 with subsequent amendments/clarifications |
Compliance Team Requirements for the RAISE Act
- If you develop frontier models: your compliance program must be operational, not cosmetic—documented governance, repeatable evaluation procedures, incident triage, and evidence retention.
- If you deploy “high-risk” AI for decisions about people: Colorado-style obligations may be the bigger near-term driver—anti-discrimination risk management, consumer notices, and process controls.
- If you provide consumer-facing AI interactions: Utah-style disclosure triggers can create front-end UI and scripting requirements that are easy to overlook but simple to implement (see the disclosure-gate sketch after this list).
- Expect convergence: New York and California alignment increases the odds that additional states copy this frontier-model template rather than inventing entirely new approaches.
- Reputational exposure is a hidden enforcement tool: published safety plans and incident disclosures create market pressure even without maximal statutory fines.
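To make the Utah-style point concrete, below is a minimal, hypothetical sketch of a front-end disclosure gate. The function names, trigger conditions, and disclosure text are illustrative assumptions, not statutory language; the actual triggers depend on the specific Utah provisions and the context in which the AI is used.

```python
from dataclasses import dataclass


@dataclass
class InteractionContext:
    """Illustrative context for a consumer-facing generative AI interaction."""
    user_asked_if_ai: bool          # consumer directly asked whether they are talking to AI
    regulated_occupation: bool      # interaction occurs within a regulated service context
    disclosure_already_shown: bool  # a clear disclosure was already presented this session


def needs_ai_disclosure(ctx: InteractionContext) -> bool:
    """Return True when a hypothetical Utah-style disclosure should be surfaced.

    Simplified logic for illustration only: disclose proactively in regulated
    contexts, and always disclose when the consumer asks.
    """
    if ctx.user_asked_if_ai:
        return True
    if ctx.regulated_occupation and not ctx.disclosure_already_shown:
        return True
    return False


# Example: a consumer in a regulated service context who has not yet seen a disclosure.
ctx = InteractionContext(user_asked_if_ai=False, regulated_occupation=True,
                         disclosure_already_shown=False)
if needs_ai_disclosure(ctx):
    print("You are interacting with generative artificial intelligence, not a human.")
```

The point of the sketch is that the compliance logic is trivial once the triggers are identified; the real work is mapping your consumer-facing surfaces to the contexts where disclosure is required.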
AI Compliance Roadmap for Frontier Developers
- Define the governance perimeter: identify which models qualify as frontier under applicable definitions and whether your corporate structure triggers “large developer” status.
- Stand up a safety lifecycle: formalize pre-release evaluation gates, post-release monitoring, and escalation criteria for “critical” incidents.
- Write the public-facing framework: publish a coherent, defensible explanation of your safety protocols and risk-management approach (and ensure internal practices match the description).
- Build a 72-hour incident reporting motion: implement internal detection, legal review, and reporting workflows that can reliably meet short reporting timelines without guesswork (a minimal deadline-tracking sketch follows this list).
- Harden record retention: preserve the evidence required to demonstrate compliance (testing outputs, approvals, red-team findings, incident logs, and remediation actions).
- Prepare for oversight engagement: treat the state AI office as an ongoing stakeholder; assume periodic inquiries and expectations of consistency over time.
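As one way to operationalize the 72-hour motion and the record-retention item above, the sketch below models a qualifying incident with a reporting deadline computed from the time the developer determined an incident occurred. The class names, fields, and the single “determination timestamp” are assumptions for illustration only; they are not drawn from the statute’s text, and real workflows would layer legal review and escalation on top of this kind of tracking.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Reporting window referenced in the RAISE Act discussion above.
REPORTING_WINDOW = timedelta(hours=72)


@dataclass
class SafetyIncident:
    """Illustrative internal record for a qualifying frontier-model safety incident."""
    incident_id: str
    determined_at: datetime              # when the developer determined an incident occurred
    description: str
    evidence: list[str] = field(default_factory=list)  # e.g., red-team findings, logs, approvals
    reported_at: datetime | None = None

    @property
    def reporting_deadline(self) -> datetime:
        return self.determined_at + REPORTING_WINDOW

    def hours_remaining(self, now: datetime) -> float:
        return (self.reporting_deadline - now).total_seconds() / 3600

    def is_overdue(self, now: datetime) -> bool:
        return self.reported_at is None and now > self.reporting_deadline


# Example: check how much of the 72-hour window remains for an open incident.
incident = SafetyIncident(
    incident_id="INC-001",
    determined_at=datetime(2027, 3, 1, 9, 0, tzinfo=timezone.utc),
    description="Model produced unsafe output beyond documented thresholds.",
    evidence=["eval-run-1234.json", "escalation-approval.pdf"],
)
now = datetime(2027, 3, 2, 9, 0, tzinfo=timezone.utc)
print(f"Hours remaining: {incident.hours_remaining(now):.1f}; overdue: {incident.is_overdue(now)}")
```

Tracking the deadline from the determination timestamp, and keeping the evidence list attached to the same record, also serves the record-retention goal: the artifacts needed to demonstrate compliance stay tied to the incident they relate to.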
Why New York’s Approach Is Politically Durable
The RAISE Act is built around a regulatory strategy that tends to endure: transparency, reporting, and governance requirements are generally more defensible than bans or pre-approval regimes, particularly where technical definitions evolve quickly. The law also avoids forcing the state to adjudicate deep technical questions (for example, specifying a required evaluation benchmark) before regulators have accumulated institutional expertise.
Additionally, by focusing on the largest developers, the law aligns accountability with capability. That helps lawmakers justify why the compliance burden is imposed where it is most feasible and most impactful, and it reduces the likelihood that the statute is perceived as punishing smaller innovators.
The Strategic Significance of New York–California Alignment
Perhaps the most important aspect of the RAISE Act is its role in shaping a “two-state baseline.” When New York and California converge on a similar compliance architecture, companies tend to standardize internal policy rather than create state-by-state variants. Over time, those standardized practices can begin to function like a national standard—especially if additional states adopt comparable rules.
That dynamic has a second-order effect: it raises the stakes for federal inaction. As the state baseline solidifies, Congress faces a choice between harmonizing (and potentially preempting) the patchwork or allowing a multi-state regime to become the practical national rule by default.
A Governance Law That Sets the Stage for What Comes Next
The RAISE Act is not an all-purpose AI statute, nor does it claim to be. It is a focused governance framework aimed at the developers best positioned to prevent or mitigate frontier-model harm. Its defining move is to institutionalize safety as a documented program with public accountability and rapid incident reporting, while leaving room for technical evolution. When compared to Colorado’s high-risk anti-discrimination approach and Utah’s targeted disclosure model, the RAISE Act sits clearly in the “frontier safety and transparency” lane—a lane increasingly likely to become the template for other large states.
For AI developers, the practical message is straightforward: the era of informal safety practices is ending. Frontier AI compliance is becoming a board-level governance problem, and New York has now joined California in making that expectation explicit.
If you need help with AI governance and staying compliant, book a demo below with one of our privacy experts.