California’s SB 53, the Transparency in Frontier Artificial Intelligence Act, and New York’s Responsible AI Safety and Education Act, better known as the RAISE Act, represent something different: the first serious state-law framework aimed directly at the developers of frontier AI models.
These laws are not broad consumer AI statutes like the Colorado AI Act. They are not ordinary privacy laws. They are targeted governance regimes for the most powerful AI systems being built: large-scale foundation models, frontier models, general-purpose models, large language models, and multimodal systems capable of creating systemic public safety risks.
California Gov. Gavin Newsom signed SB 53 on Sept. 29, 2025, creating what the state described as a landmark AI transparency and safety framework for frontier AI development. New York Gov. Kathy Hochul signed the RAISE Act in December 2025 and later approved amendments in March 2026 that brought New York’s law more closely into alignment with California’s approach.
The practical takeaway is simple: AI governance is becoming operational. Companies developing or deploying advanced AI systems will increasingly need documented safety frameworks, incident reporting workflows, risk assessments, governance controls, and proof that internal AI oversight is more than policy language.
The Shift From AI Ethics to AI Accountability
For years, AI ethics programs were built around principles: fairness, transparency, accountability, explainability, privacy, and human oversight. Those principles still matter, but SB 53 and the RAISE Act move the conversation into a harder compliance environment.
The new question is not whether a company supports responsible AI. The question is whether the company can document, publish, report, and defend how it manages frontier AI risk.
That is a major shift. It means AI governance is beginning to look more like privacy compliance, cybersecurity governance, financial risk management, and product safety regulation.
The most important obligations in these laws focus on:
- Public safety frameworks for frontier AI models
- Catastrophic risk assessment and mitigation
- Incident reporting to state authorities
- Transparency around safety protocols
- Protection for AI whistleblowers
- Documentation of internal governance practices
This is the beginning of a new compliance category: frontier AI safety governance.
What California’s SB 53 Requires
California’s SB 53 is designed to regulate large developers of frontier AI models. The law requires covered developers to create, implement, and publish a frontier AI framework describing how they incorporate national standards, international standards, and industry best practices into their AI safety programs.
The law focuses heavily on catastrophic risks. These are not ordinary product defects or biased model outputs. They are severe risks involving potential mass casualties, critical infrastructure compromise, major cyber misuse, biological or chemical misuse, or extreme economic damage.
California’s model also includes whistleblower protections. Covered employees involved in assessing, managing, or addressing frontier AI safety risks are protected when they disclose information about specific and substantial dangers to public health or safety or violations of the law.
This is one of the most important pieces of SB 53. AI developers are uniquely dependent on internal technical teams to identify risk. If employees cannot safely report concerns, regulators may never see the most important warning signs until after harm occurs.
What New York’s RAISE Act Adds
New York’s RAISE Act follows the same general policy logic: large AI developers must create and publish safety protocols and report serious AI safety incidents. According to Hochul’s office, those incident reports must reach the state within 72 hours of the developer determining that an incident occurred.
The most notable development is how quickly New York moved toward California’s model. The amended version of the RAISE Act brought the New York law into closer alignment with California SB 53, reducing some of the multistate compliance uncertainty for large frontier AI developers.
That alignment matters. It suggests California may again become the de facto national standard-setter, a dynamic often called the “California effect.” Once California creates a regulatory model, other states frequently borrow from it rather than reinventing the framework from scratch.
SB 53 vs. RAISE Act: The Emerging State AI Safety Template
| Issue | California SB 53 | New York RAISE Act |
|---|---|---|
| Primary focus | Transparency and safety requirements for frontier AI developers | Transparency, safety frameworks, and incident reporting for frontier AI developers |
| Core policy concern | Catastrophic AI risk and public safety | Catastrophic AI risk and public safety |
| Safety framework | Requires covered developers to publish a frontier AI framework | Requires large developers to publish information about safety protocols |
| Incident reporting | Requires reporting of critical safety incidents | Requires reporting within 72 hours after determining an incident occurred |
| Whistleblower protections | Includes specific protections for covered AI safety employees | Less extensive than California’s dedicated whistleblower protections |
| Regulatory significance | First major U.S. state law directly regulating frontier AI developers | Second major state model, later amended to align more closely with California |
Why These Laws Matter Even If You Are Not OpenAI, Anthropic, Google, Meta, or xAI
At first glance, SB 53 and the RAISE Act appear to apply only to the largest frontier AI developers. That is technically true. But businesses should not make the mistake of treating these laws as irrelevant.
Regulatory ideas often begin with the largest companies and then move downmarket. That happened with privacy. It happened with cybersecurity. It is likely to happen with AI governance.
The standards being created now will influence:
- Enterprise AI procurement requirements
- Vendor due diligence questionnaires
- Board oversight expectations
- Insurance underwriting
- Investor diligence
- AI incident response planning
- Privacy and data governance controls
- State and federal AI enforcement theories
A mid-market company using AI tools may not be a frontier model developer. But it may still be expected to show that AI systems are governed, monitored, documented, and aligned with applicable privacy and safety obligations.
The Privacy Connection: AI Governance Runs on Personal Data
AI regulation cannot be separated from privacy compliance. Foundation models are trained on enormous volumes of data. AI tools often process customer data, employee data, behavioral data, health data, financial data, biometric data, location data, and inferred personal characteristics.
That means AI governance and privacy governance are converging.
For businesses, the real compliance questions include the following (a data-mapping sketch appears after the list):
- What personal data is used to train or fine-tune AI systems?
- Was consent required?
- Was the data collected under a privacy notice that accurately disclosed AI use?
- Can users opt out of profiling or automated decision-making?
- Are sensitive data categories being processed?
- Are vendors using company data to train their own models?
- Can the business respond to access, deletion, correction, and opt-out requests?
- Can the company prove what data entered the AI pipeline?
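One way to make these questions answerable on demand is to keep them as structured records rather than prose. The sketch below is a minimal, hypothetical data-map entry in Python; the field names and categories are illustrative assumptions, not terms defined by SB 53, the RAISE Act, or any privacy statute.

```python
from dataclasses import dataclass, field

@dataclass
class AIDataMapEntry:
    """Hypothetical data-map record for one AI system or integration."""
    system_name: str                     # e.g., "support-chat-assistant"
    vendor: str                          # model or tool provider
    personal_data_categories: list[str]  # e.g., ["customer email", "chat transcripts"]
    sensitive_data: bool                 # health, biometric, financial, etc.
    used_for_training: bool              # is company data used to train or fine-tune?
    privacy_notice_discloses_ai_use: bool
    consent_basis: str                   # e.g., "contract", "consent", "legitimate interest"
    supports_opt_out: bool               # profiling / automated decision-making opt-out
    rights_requests_supported: list[str] = field(
        default_factory=lambda: ["access", "deletion", "correction", "opt-out"]
    )

# Example entry: the answers to the questions above become queryable facts.
entry = AIDataMapEntry(
    system_name="support-chat-assistant",
    vendor="ExampleAI (hypothetical)",
    personal_data_categories=["customer email", "chat transcripts"],
    sensitive_data=False,
    used_for_training=False,
    privacy_notice_discloses_ai_use=True,
    consent_basis="contract",
    supports_opt_out=True,
)
print(entry.used_for_training)  # can the business prove what entered the AI pipeline?
```

Captured this way, the answers stop living in someone’s head or in a stale spreadsheet and become records the business can produce on request.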
This is where companies are most exposed. Many organizations adopted AI tools faster than they updated their privacy notices, vendor agreements, consent flows, data maps, or internal governance procedures.
The New Compliance Standard: AI Governance Must Be Provable
SB 53 and the RAISE Act are part of a larger movement away from voluntary AI principles and toward evidence-based governance.
That means companies will need to maintain records showing the following (a minimal evidence-log sketch appears after the list):
- How AI systems are evaluated before deployment
- How risks are escalated internally
- How incidents are detected and reported
- How data inputs are governed
- How privacy rights are honored
- How human oversight works in practice
- How vendors are reviewed
- How model outputs are monitored
- How consumer-facing disclosures match actual system behavior
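As a rough illustration of what that proof can look like in practice, the sketch below logs each governance event, such as a pre-deployment evaluation or an internal escalation, as a timestamped JSON Lines record. The event types, field names, and file layout are illustrative assumptions, not requirements drawn from either law.

```python
import json
from datetime import datetime, timezone

# Illustrative event types; these categories are assumptions, not statutory terms.
EVENT_TYPES = {
    "pre_deployment_evaluation",
    "risk_escalation",
    "incident_detected",
    "incident_reported",
    "vendor_review",
    "human_oversight_review",
}

def record_governance_event(
    event_type: str,
    system: str,
    details: dict,
    log_path: str = "ai_governance_log.jsonl",
) -> dict:
    """Append one timestamped governance event to a JSON Lines evidence log."""
    if event_type not in EVENT_TYPES:
        raise ValueError(f"Unknown event type: {event_type}")
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,
        "system": system,
        "details": details,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
    return event

# Example: document a pre-deployment evaluation and an internal escalation.
record_governance_event(
    "pre_deployment_evaluation",
    system="resume-screening-model",
    details={"reviewer": "ai-risk-committee", "outcome": "approved with monitoring"},
)
record_governance_event(
    "risk_escalation",
    system="resume-screening-model",
    details={"raised_by": "ml-engineer", "issue": "disparate output rates in testing"},
)
```

Whatever tooling a company actually uses, the point is the same: evaluations, escalations, and incidents should leave a record that exists before anyone asks for it.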
This mirrors the shift already happening in privacy compliance. Regulators are no longer satisfied with a privacy policy. They want operational proof.
How to Prepare for SB 53 and the RAISE Act
Even companies outside the direct scope of these laws should use them as a roadmap. The direction of travel is clear.
- Create an AI governance inventory. Identify every AI system used across marketing, HR, product, analytics, customer support, security, finance, and engineering. A minimal inventory sketch appears after this list.
- Map AI systems to personal data. Determine whether each tool processes personal data, sensitive data, customer data, employee data, or regulated data.
- Review vendor AI terms. Confirm whether vendors use customer data for model training, improvement, retention, or secondary purposes.
- Update privacy notices. Disclosures should accurately explain AI-related data uses where required.
- Build incident response procedures. AI incidents should be treated like security and privacy incidents: logged, escalated, investigated, and documented.
- Create internal escalation channels. Employees should know how to report AI safety, privacy, or compliance concerns.
- Document human oversight. If automated systems influence meaningful decisions, companies need clear governance and review controls.
- Maintain audit-ready evidence. AI governance must be documented in a way that can survive diligence, enforcement, litigation, and board review.
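To make the first few steps concrete, here is a minimal sketch of an AI governance inventory paired with a simple gap check. The systems, fields, and required controls are hypothetical examples of what such an inventory might track, not thresholds or definitions taken from SB 53 or the RAISE Act.

```python
# Hypothetical inventory: one record per AI system in use across the business.
AI_INVENTORY = [
    {
        "system": "marketing-copy-generator",
        "department": "marketing",
        "processes_personal_data": False,
        "vendor_trains_on_our_data": False,
        "privacy_notice_updated": True,
        "incident_procedure": True,
        "human_oversight_documented": True,
    },
    {
        "system": "candidate-screening-assistant",
        "department": "HR",
        "processes_personal_data": True,
        "vendor_trains_on_our_data": True,   # flagged during vendor review
        "privacy_notice_updated": False,
        "incident_procedure": True,
        "human_oversight_documented": False,
    },
]

# Controls every system is expected to have; illustrative, not statutory.
REQUIRED_CONTROLS = [
    "privacy_notice_updated",
    "incident_procedure",
    "human_oversight_documented",
]

def governance_gaps(inventory: list[dict]) -> list[str]:
    """Return human-readable gaps so the inventory doubles as audit evidence."""
    gaps = []
    for system in inventory:
        for control in REQUIRED_CONTROLS:
            if not system.get(control, False):
                gaps.append(f"{system['system']}: missing {control}")
        if system.get("processes_personal_data") and system.get("vendor_trains_on_our_data"):
            gaps.append(f"{system['system']}: vendor trains on personal data; review contract terms")
    return gaps

for gap in governance_gaps(AI_INVENTORY):
    print(gap)
```

Even a spreadsheet that captures the same fields is a large improvement over discovering, mid-audit, that nobody knows which teams are using which AI tools.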
Our Perspective: AI Governance Is Becoming Privacy Governance
Our platform was built around the idea that modern compliance cannot depend on static paperwork. Companies need systems that continuously track obligations, enforce choices, update disclosures, and create proof.
That philosophy is directly relevant to AI governance.
As AI laws mature, companies will need to connect privacy operations with AI risk management. That includes consent, data mapping, automated decision-making disclosures, consumer rights requests, vendor oversight, privacy notices, and incident reporting.
The businesses that win this next phase will not be the ones with the prettiest AI ethics statement. They will be the ones that can show:
- What data they collect
- Why they collect it
- Where it flows
- Whether it is used in AI systems
- How users can exercise rights
- How consent and opt-outs are enforced
- How incidents are detected and documented
- How compliance is proven
That is the future Captain Compliance is built for: proof-based privacy and AI governance that can stand up to regulators, customers, investors, and plaintiffs.
The U.S. Is Building AI Law State by State
The United States still does not have a single comprehensive federal AI law. In the meantime, states are moving.
That creates the same dynamic companies already know from privacy law: a patchwork of state requirements that gradually becomes a national operating standard. California sets a model. New York responds. Other states borrow pieces. Enterprises then turn those requirements into vendor expectations.
This is exactly what happened in privacy. It is now happening in AI.
For legal, privacy, security, and compliance teams, the message is clear: waiting for Congress is not a strategy.
Final Take: SB 53 and the RAISE Act Are Early Warning Signals
California’s SB 53 and New York’s RAISE Act are not the final word on AI governance. They are the opening chapter.
They show where regulators are heading: transparency, safety frameworks, incident reporting, whistleblower protection, catastrophic risk governance, and operational accountability.
They also show that AI compliance will not live in a silo. It will merge with privacy compliance, cybersecurity, vendor management, consumer protection, and board-level risk oversight.
The companies that prepare now will have a significant advantage. They will be better positioned for state AI laws, federal enforcement, customer audits, investor diligence, and the inevitable wave of AI-related litigation.
The companies that wait will face the same problem many organizations now face in privacy: outdated policies, undocumented practices, unclear data flows, and no reliable proof when regulators ask hard questions.