Brussels Is Rewriting the Clock on High-Risk AI — and Privacy Teams Are Stuck in the Middle

If you’re responsible for AI governance in the EU, you probably circled 2 August 2026 on your calendar a long time ago.

That was the date the high-risk provisions of the EU Artificial Intelligence Act were scheduled to apply. Legal teams built roadmaps around it. Product teams worked backward from it. Procurement clauses referenced it. Board decks cited it.

Now that certainty is wobbling.

The European Commission’s proposed EU Digital Omnibus would effectively move the goalposts for high-risk AI systems — not by removing obligations, but by tying their applicability to the availability of harmonized standards and guidance. In practical terms: high-risk requirements would apply only after the Commission confirms the compliance infrastructure is ready.

For privacy and AI governance professionals, this is not a minor procedural tweak. It changes budgeting cycles, vendor due diligence, contract language, and internal risk posture all at once.

The Original Plan: August 2026

When the AI Act was adopted, the compliance runway seemed clear. High-risk systems — those falling under Annex III use cases such as employment, creditworthiness, law enforcement applications, and certain biometric contexts — were set to become enforceable in August 2026.

Organizations began mapping:

  • AI system inventories
  • Risk management frameworks aligned with Article 9
  • Data governance controls
  • Conformity assessment procedures
  • Post-market monitoring processes
  • Technical documentation artifacts

Many teams were already concerned that the standardization process was moving slower than ideal. But at least the deadline was fixed.

The Omnibus Proposal: Conditional Applicability

The Commission’s late-2025 proposal introduces a conditional model. High-risk obligations would apply only once adequate compliance support — harmonized standards, technical specifications, and interpretive guidance — is formally in place.

Under the Commission’s approach:

  • Annex III high-risk systems would become subject to obligations six months after the Commission’s confirmation.
  • Annex I systems would face obligations twelve months after confirmation.
  • Working scenarios in Brussels suggest applicability could shift toward late 2027.

Meanwhile, both the Council and key parliamentary committees have floated fixed alternative dates:

  • 2 December 2027 for Annex III systems
  • 2 August 2028 for Annex I systems

Trilogue negotiations will determine which path prevails. For now, the market lacks clarity.

The Compliance Paradox: Too Early or Too Late?

This uncertainty creates a familiar governance dilemma.

On one hand, delay provides breathing room. High-risk AI documentation and conformity assessment readiness are resource-intensive. Risk classification exercises alone require coordination across legal, security, product, data science, and executive leadership.

On the other hand, postponement injects ambiguity into operational planning:

  • Do you slow internal AI risk framework development?
  • Do you continue heavy investment in conformity assessment readiness?
  • Do you pause procurement safeguards tied to high-risk classifications?
  • How do you justify AI governance staffing when enforcement timing is unclear?

Boards dislike ambiguity. Regulators dislike ambiguity. Customers dislike ambiguity. Yet ambiguity is exactly what the market has.

Registration Rollbacks: Relief or Enforcement Gap?

The Omnibus draft also proposes removing registration obligations for AI providers that do not fall into the high-risk category.

Industry reaction has largely been positive. Fewer registry requirements reduce administrative burden for low-risk deployments.

However, critics argue that easing registration could reduce regulatory visibility and create enforcement blind spots. Without clear registration trails, classification disputes may become more difficult to resolve.

For governance leaders, this tension reflects a broader balancing act: proportional compliance versus traceable accountability.

AI Literacy: A Softening That May Not Matter

The proposal also contemplates softening AI literacy obligations for organizations in scope. Certain policymakers and supervisory authorities have signaled resistance to weakening these expectations.

From a practical standpoint, most mature organizations are not dialing back AI literacy investments regardless of legislative shifts.

AI literacy supports:

  • Risk mitigation
  • Model oversight
  • Vendor evaluation
  • Litigation defense readiness
  • Regulatory credibility

Even if formal obligations soften, operational risk does not.

Why High-Risk Timing Matters Most

High-risk designation triggers substantial obligations, including:

  • Quality management systems
  • Structured data governance controls
  • Human oversight mechanisms
  • Post-market monitoring frameworks
  • Incident reporting processes
  • Detailed technical documentation

For privacy professionals, these obligations echo GDPR-era governance mechanics — but with deeper integration into product design and model lifecycle management.

If applicability shifts to 2027 or 2028, organizations must decide whether to:

  1. Continue building high-risk readiness as originally planned.
  2. Slow investment and wait for clarity.
  3. Adopt a minimum viable compliance posture.

Each path carries trade-offs.

Customers Aren’t Waiting

Regardless of political timing, enterprise customers are already demanding AI governance transparency.

RFPs increasingly require:

  • AI risk classification disclosures
  • Governance framework summaries
  • Bias testing documentation
  • Model monitoring procedures
  • Data provenance explanations

Delays in Brussels do not pause procurement scrutiny in Frankfurt, Paris, or Amsterdam.

What to Do While Brussels Delays

Until trilogue negotiations conclude, a pragmatic approach makes sense:

  • Maintain high-risk mapping exercises. Inventory systems and classify them under Annex III criteria now.
  • Build documentation iteratively. Do not wait for harmonized standards to structure risk records.
  • Align AI governance with privacy controls. Data minimization, purpose limitation, and retention governance already apply.
  • Continue AI literacy investments. Internal capability reduces exposure regardless of timing.
  • Track developments — but avoid strategic paralysis.
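The first of those steps, inventorying systems and flagging Annex III candidates, can be sketched as a simple data structure. This is a minimal illustration only: the system names and category labels below are hypothetical shorthand, not a complete or authoritative rendering of the Annex III use cases, and any flagged system still needs legal review.

```python
from dataclasses import dataclass, field

# Illustrative shorthand for Annex III use-case areas mentioned in the
# article (employment, creditworthiness, law enforcement, biometrics).
# These labels are placeholders, not the legal categories themselves.
ANNEX_III_CATEGORIES = {
    "employment",
    "creditworthiness",
    "law_enforcement",
    "biometrics",
}

@dataclass
class AISystem:
    """One entry in an AI system inventory."""
    name: str
    use_cases: set[str] = field(default_factory=set)

    def annex_iii_matches(self) -> set[str]:
        """Return the Annex III-style categories this system touches."""
        return self.use_cases & ANNEX_III_CATEGORIES

    def is_high_risk_candidate(self) -> bool:
        """Flag the system for legal review if any use case matches."""
        return bool(self.annex_iii_matches())

# Hypothetical inventory entries for illustration.
inventory = [
    AISystem("resume-screener", {"employment"}),
    AISystem("support-chatbot", {"customer_service"}),
]

for system in inventory:
    if system.is_high_risk_candidate():
        print(f"{system.name}: review under {sorted(system.annex_iii_matches())}")
```

Even a lightweight register like this gives legal, product, and procurement teams a shared starting point, whatever date trilogue negotiations ultimately settle on.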

The Strategic View

Whether high-risk obligations apply in 2026, 2027, or 2028, the trajectory is clear:

  • More transparency
  • Formalized risk management
  • Documented oversight
  • Enforceable accountability

AI governance is not a new discipline for privacy teams. It is an extension of digital responsibility frameworks already in place.

Clarity from Brussels will arrive eventually. Enforcement expectations will follow. The organizations that succeed will be those able to demonstrate that they understood the risk — and acted accordingly — before the clock forced their hand.

Online Privacy Compliance Made Easy

Captain Compliance makes it easy to develop, oversee, and expand your privacy program. Book a demo or start a trial now.