In 1903, the Wright brothers flew 120 feet at Kitty Hawk. By 1929, commercial aviation was booming and terrifying in equal measure: roughly one fatal accident occurred for every million miles flown. At today’s flight volumes, that rate would produce around 7,000 fatal crashes per year.
What changed aviation was not a thicker rulebook. It was operational infrastructure: airworthiness inspections, standardized pilot licensing, radio navigation beacons, and systematic accident investigation. The rules defined what should happen. The systems made it happen.
AI governance today is standing at a similar crossroads. Organizations have written policies. Some have even started compliance programs. But the gap between having documentation and having working safeguards is where real risk lives, and that gap is quietly widening.
The Regulatory Environment Is Moving Fast
For most of the past decade, AI operated in a near-regulatory vacuum. That has changed sharply. The EU AI Act, passed in 2024, began enforcing provisions on prohibited AI practices in February 2025. Penalties for violations can reach 35 million euros or 7% of global annual turnover, whichever is higher. High-risk system provisions continue rolling out through August 2027.
In the United States, federal inaction has pushed states to act independently. In 2025 alone, 47 states introduced 260 AI-related measures, with 22 becoming law. The resulting patchwork includes:
- Colorado’s AI Act (effective June 30, 2026): Requires documented impact assessments for high-risk AI systems in employment, housing, credit, education, and healthcare.
- Texas Responsible AI Governance Act (effective January 1, 2026): Addresses discrimination risks embedded in AI decision systems.
- California’s employment discrimination regulations (effective October 1, 2025): Requires bias testing of automated hiring tools and four-year record retention.
- California SB 243 (effective January 1, 2026): Creates a private right of action for chatbot-related harm up to $1,000 per violation, with disclosure obligations.
- New York RAISE Act (pending signature): Targets frontier AI models with transparency and risk management requirements.
The regulatory momentum is real. But compliance with a law is a floor, not a ceiling. Organizations that treat regulatory deadlines as the primary driver of their governance efforts are measuring the wrong thing.
Key insight: Regulations tell you what outcome is required. Governance is the infrastructure that reliably produces that outcome. You can satisfy one without building the other, but not for long.
Three Lessons Organizations Learned the Hard Way Last Year
Lesson 1: A Policy Document Is Not a Control
IBM's 2025 Cost of a Data Breach Report drew a sharp line between organizations with AI governance intentions and those with operational safeguards. Among companies that reported AI-related breaches, 97% lacked functional AI access controls. Nearly two-thirds had no governance policies at all. Most striking: even among companies that had policies, only 34% conducted regular audits to catch unauthorized AI use.
That last figure deserves attention. Having a policy and not auditing it is not governance. It is aspiration with paperwork attached.
Effective governance requires the same shift aviation made: from describing what should happen to building systems that make violations detectable and correctable before damage occurs. NIST’s AI Risk Management Framework provides a practical structure for this through four operational functions:
- Govern: Assign clear ownership over AI risk decisions. Define which risk levels require executive sign-off versus team-level authority. Document this in writing, not just in slide decks.
- Map: Build and maintain a live inventory of every AI system in use, what data each accesses, and what decisions each influences. You cannot manage what you have not catalogued.
- Measure: Conduct specific risk assessments for each system. Does your hiring tool have documented bias testing? Does your customer-facing chatbot have hallucination thresholds? What triggers a human review before a decision executes?
- Manage: Deploy technical controls that enforce your policies automatically. Access logging, network monitoring for unauthorized tools, output validation, and automated approval gates for high-stakes decisions are not optional extras. They are the mechanism by which a policy becomes a practice; a minimal sketch of one such gate follows this list.
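To make the Measure and Manage functions concrete, here is a minimal sketch of an automated approval gate in Python. It is illustrative only: the risk tiers, confidence threshold, and field names are assumptions for this example, not requirements drawn from the NIST framework, and a real gate would sit inside your own decision pipeline.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-approval-gate")

# Hypothetical policy values: which risk tiers always require a human
# reviewer, and the minimum model confidence allowed to auto-execute.
HUMAN_REVIEW_TIERS = {"high", "critical"}
MIN_AUTO_CONFIDENCE = 0.90

@dataclass
class AIDecision:
    system_id: str     # entry from the AI system inventory (Map)
    risk_tier: str     # tier assigned during risk assessment (Measure)
    confidence: float  # model-reported confidence for this output
    summary: str       # what the system is about to do

def gate(decision: AIDecision) -> str:
    """Return 'execute' or 'human_review', logging every evaluation so
    each gated decision leaves an audit trail (Manage)."""
    if decision.risk_tier in HUMAN_REVIEW_TIERS:
        log.info("Routed to review (tier=%s): %s", decision.risk_tier, decision.summary)
        return "human_review"
    if decision.confidence < MIN_AUTO_CONFIDENCE:
        log.info("Routed to review (confidence=%.2f): %s", decision.confidence, decision.summary)
        return "human_review"
    log.info("Auto-executed: %s", decision.summary)
    return "execute"

# Example: a hiring-screen recommendation never executes without review.
print(gate(AIDecision("resume-screener-v2", "high", 0.97, "reject candidate 4821")))
```

The specific thresholds matter less than the fact that the policy lives in code that runs on every decision, which is what makes violations detectable rather than merely prohibited.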
Every organization should ask itself one direct question: if someone violated our AI policy today, would we know? If the answer is anything short of yes, backed by specifics, the governance program is incomplete.
Lesson 2: Fragmented Ownership Produces Accountability Gaps
The IAPP and Credo AI surveyed 670 organizations across 45 countries and found that 77% were actively working on AI governance. Among companies already deploying AI, that figure was nearly 90%. Almost half rated it a top-five strategic priority. These are encouraging numbers.
The structural picture, however, was more complicated. Half of AI governance professionals were distributed across ethics, compliance, privacy, and legal teams without unified coordination. Reporting lines fractured across general counsel (23%), the CEO (17%), and the CIO (14%). Only 39% had established formal AI governance committees. And 98.5% said they needed more dedicated AI governance staff.
The practical consequence is predictable. When a product team flags a concern about a new AI feature, who makes the call? When a vendor discloses a model update that changes data handling, who reviews the implications? When a risk assessment flags a hiring tool for bias exposure, who has the authority to pause the deployment?
Without clear answers to these questions, decisions either stall or default to whoever is most willing to take responsibility, which is rarely the person with the clearest picture of the full risk.
Solving this does not require a massive reorganization. It requires four structural commitments:
- Designate a single decision owner for AI risk. Whether that person carries the title of Chief AI Officer, CPO, or something else is secondary to the fact that one person has final authority. IBM's 2025 study found that organizations with dedicated AI leadership report approximately 10% higher return on AI investment, suggesting accountability and performance are connected.
- Define risk thresholds explicitly. Document which categories of AI risk require executive approval versus team-level resolution. Ambiguity about this is itself a risk; a short sketch of one way to write these thresholds down follows this list.
- Form a cross-functional AI governance committee with a clear chair. Privacy, legal, security, and technology all contribute essential context, but the committee needs someone who can call a decision and hold it.
- Publish escalation paths for product teams. People working on AI products need to know exactly where to go when a risk review surfaces a problem, how quickly a decision will come, and who has the authority to stop a deployment.
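One way to make the second and fourth commitments tangible is to record thresholds and escalation paths as data rather than prose. The mapping below is a hypothetical sketch: the categories, roles, and response times are placeholders for whatever your committee actually agrees on and publishes.

```python
# Hypothetical escalation map: risk category -> who decides, and how fast.
# Categories, roles, and SLAs are illustrative placeholders; the value is
# in having a single agreed, published version that product teams can read.
ESCALATION_PATHS = {
    "prohibited_use":      {"decider": "Chief AI Officer",              "response_sla_hours": 4,  "can_pause_deployment": True},
    "bias_finding":        {"decider": "AI governance committee chair", "response_sla_hours": 24, "can_pause_deployment": True},
    "vendor_model_change": {"decider": "Third-party risk lead",         "response_sla_hours": 48, "can_pause_deployment": False},
    "low_risk_feature":    {"decider": "Product team lead",             "response_sla_hours": 72, "can_pause_deployment": False},
}

def route(category: str) -> dict:
    """Look up the owner for a flagged risk; unknown categories escalate upward by default."""
    return ESCALATION_PATHS.get(category, ESCALATION_PATHS["prohibited_use"])

print(route("bias_finding")["decider"])  # -> AI governance committee chair
```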
Lesson 3: Annual Vendor Reviews Cannot Track Real-Time AI Changes
Third-party risk management has traditionally operated on annual assessment cycles. A vendor answers a questionnaire, an audit occurs, a risk rating is assigned, and the file is closed for twelve months. This model was built for a world where vendor capabilities changed slowly.
AI is dismantling that assumption. A vendor that passes a thorough audit in January may have deployed new AI capabilities by March that materially alter how your data is processed, who can access it, and what automated decisions it informs. Organizations that discovered this in 2025 typically found out through a breach notification or a news story, not their own monitoring.
Continuous and adaptive vendor oversight now requires three capabilities that traditional programs often lack:
A complete inventory of third-party AI use
Organizations need to know which vendors embed AI in their products, which use AI to deliver services, and what data each AI system accesses. This requires proactive disclosure requirements in vendor contracts, not just retrospective questionnaires. Require vendors to notify you before deploying new AI capabilities that touch your data, not after.
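As a sketch of what proactive disclosure can look like once it reaches your side of the contract, the snippet below flags vendors whose most recent AI-capability disclosure has gone stale. The field names and the 180-day window are assumptions for illustration, not a recommended cadence.

```python
from datetime import date, timedelta

# Illustrative third-party AI inventory; in practice this is populated from
# contractually required vendor disclosures, not retrospective questionnaires.
VENDOR_AI_INVENTORY = [
    {"vendor": "ExampleCRM",  "ai_features": ["lead scoring"],       "data_accessed": ["customer contacts"],   "last_disclosure": date(2025, 3, 1)},
    {"vendor": "ExampleHelp", "ai_features": ["chat summarization"], "data_accessed": ["support transcripts"], "last_disclosure": date(2024, 7, 15)},
]

DISCLOSURE_WINDOW = timedelta(days=180)  # assumed maximum age before re-review

def stale_disclosures(today: date) -> list[str]:
    """Vendors whose last AI disclosure is older than the agreed window."""
    return [v["vendor"] for v in VENDOR_AI_INVENTORY
            if today - v["last_disclosure"] > DISCLOSURE_WINDOW]

print(stale_disclosures(date(2025, 8, 1)))  # -> ['ExampleHelp']
```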
AI-specific risk assessment criteria
Standard audits were not designed to catch the risks that AI introduces. Effective vendor assessments need to address: What data was used to train the model? Has the vendor conducted independent bias testing? How are authentication and access controls secured for AI integrations? What controls prevent unauthorized AI deployments within the vendor’s own organization?
Require evidence of these controls during contract negotiations. Bias testing reports, security audit documentation for AI components, and documented incident response procedures for AI failures should be standard deliverables, not optional disclosures.
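A lightweight way to operationalize these criteria is to keep them as a structured checklist and compute the gaps per vendor, so missing evidence becomes a contract condition rather than a forgotten questionnaire answer. The checklist below simply restates the questions above; the vendor response is hypothetical.

```python
# AI-specific assessment criteria kept as data, so gaps can be tracked
# over time instead of being buried in a questionnaire PDF.
AI_ASSESSMENT_CRITERIA = [
    "training_data_provenance_documented",
    "independent_bias_testing_report",
    "access_controls_for_ai_integrations",
    "controls_against_unauthorized_internal_ai_use",
    "ai_incident_response_procedure",
]

def assessment_gaps(evidence: dict[str, bool]) -> list[str]:
    """Return the criteria for which the vendor has not supplied evidence."""
    return [c for c in AI_ASSESSMENT_CRITERIA if not evidence.get(c, False)]

# Hypothetical evidence submitted during contract negotiation:
submitted = {
    "independent_bias_testing_report": True,
    "ai_incident_response_procedure": True,
}
print(assessment_gaps(submitted))  # -> the three remaining items become contract conditions
```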
Monitoring that operates between audits
Verizon’s 2025 Data Breach Investigations Report found that only one-third of organizations continuously monitor vendor relationships, even as 57% cited operational disruption as their top third-party concern. The disconnect between stated concern and actual monitoring practice is where exposure compounds.
Continuous monitoring means flagging unusual data access patterns, excessive API query volumes, and integration behaviors that deviate from established baselines. It does not require a large team. It does require that the capability exists and that someone owns the responsibility for acting on its alerts.
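As an illustration of what monitoring between audits can mean in practice, the sketch below flags a vendor integration whose API query volume deviates sharply from its recent baseline. The three-sigma threshold and the simple statistics are assumptions chosen for clarity; a real deployment would tune both and, more importantly, assign someone to act on the alert.

```python
from statistics import mean, stdev

def flag_anomalous_volume(daily_api_calls: list[int], threshold_sigma: float = 3.0) -> bool:
    """Flag the most recent day's vendor API call volume if it deviates more
    than `threshold_sigma` standard deviations from the trailing baseline."""
    *baseline, today = daily_api_calls
    if len(baseline) < 7:  # not enough history to judge deviation
        return False
    mu, sigma = mean(baseline), stdev(baseline)
    return sigma > 0 and abs(today - mu) > threshold_sigma * sigma

# Example: an integration that suddenly pulls far more data than usual.
history = [1020, 980, 1010, 1005, 990, 1000, 1015, 4800]
print(flag_anomalous_volume(history))  # -> True; someone must own this alert
```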
What Effective AI Governance Looks Like in Practice
The organizations that navigated 2025 without significant AI-related incidents shared a common characteristic: they had closed the gap between their governance documentation and their operational controls.
Their policies were backed by technical systems that caught violations before they caused damage. Their accountability structures were clear enough that decisions could be made quickly when risks surfaced. Their vendor oversight operated continuously, not in annual snapshots.
None of this is simple. Building operational AI governance takes time, expertise, and organizational will. But the alternative, waiting for an incident to reveal the gap between your policy and your practice, is a strategy with increasingly expensive consequences.
The airline industry became safe not because regulations got stricter, though they did, but because the operational infrastructure to fulfill those regulations was actually built. That is the work AI governance now requires.
AI Governance Starting Points
- Can you name every AI system currently in use across your organization, including tools employees have adopted without formal approval?
- If an AI-related incident occurred today, who would be notified first, and who would have the authority to act?
- When did you last review what AI capabilities your top ten vendors have deployed in the past six months?
If any of these questions does not have a confident, specific answer, that gap is worth addressing before a regulator, a breach notification, or a news story closes it for you.
Applicable frameworks: NIST AI Risk Management Framework (AI RMF 1.0) | EU AI Act (Regulation 2024/1689) | ISO/IEC 42001:2023 AI Management Systems | IAPP/Credo AI Governance Profession Report 2025