Brussels’ Digital Omnibus: How the EU Plans to Rewire GDPR and the AI Act

Has the EU regulated too much, so much that it is now pulling back on GDPR measures? The European Commission has unveiled a legislative package that could quietly reshape how European digital regulation works in practice. Branded as the Digital Omnibus, this set of proposals sits alongside a new Data Union Strategy and the concept of European Business Wallets as part of a broader “Digital Package.”

The goal is not to tear down the EU’s hard-won protections. Instead, the Commission is trying to do something more delicate: remove regulatory clutter, smooth inconsistencies, and make it easier for businesses to comply with rules on data protection, cybersecurity and artificial intelligence, while keeping the GDPR and AI Act’s core values intact.

The Digital Omnibus comes in two main pieces:

  • A regulation that amends existing instruments such as the GDPR, ePrivacy Directive, NIS2 Directive and the Data Act.
  • A separate omnibus regulation on AI focused on targeted revisions to the EU AI Act.

Both proposals now enter the ordinary legislative process, involving the European Parliament and Council, with the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) expected to weigh in.

Final text is likely months away, but the direction of travel is already clear.

Why the Digital Omnibus, and Why Now?

The Commission’s own diagnosis is blunt: years of new rules, layered on top of each other, have sometimes hurt competitiveness, especially for smaller companies.
The Draghi report on European competitiveness pushed this point hard, warning that Europe’s “growth model is fading” and that inaction risks both competitiveness and sovereignty.

The Digital Omnibus aims to:

  • Reduce compliance costs, especially for SMEs and small mid-cap companies (SMCs).
  • Maintain high fundamental rights protections, rather than quietly diluting them.
  • Boost innovation and competitiveness by removing overlapping or redundant obligations.
  • Codify and harmonize case law and regulatory practice that has emerged since the GDPR and other laws took effect.

The Commission projects that, if the Digital Omnibus is adopted broadly as proposed, businesses and public administrations could save at least €6 billion in administrative costs by 2029.
That figure comes primarily from:

  • Streamlined reporting obligations.
  • Fewer duplicative filings under different digital laws.
  • Lighter-touch regimes where risks are clearly lower.

At the same time, senior EU officials have been careful to emphasize what the Omnibus is not. It is not a wholesale reopening of the GDPR. Instead, it tightens screws that have loosened, fills gaps that case law exposed, and cuts back obligations that turned out to be more bureaucratic than protective.

Key GDPR-Related Reforms in the Digital Omnibus

Although the GDPR has been in force for years and is one of the most interpreted laws in the EU’s digital toolkit, practice on the ground has diverged. National regulators, courts, and companies have taken slightly different paths on core concepts such as personal data, transparency, and legitimate interest.
The Digital Omnibus tries to pull those paths back together.

1. A More Relative Definition of Personal Data

Under the current GDPR, “personal data” is defined broadly, and debates about pseudonymization versus anonymization have become a recurring headache.
The Omnibus would adjust the definition to emphasize the perspective of the specific controller or processor holding the data.

In practical terms:

  • Information would count as personal data only where the entity has “means reasonably likely to be used” to identify an individual.
  • This reflects key Court of Justice rulings (such as Breyer and the SRB case), where the Court held that data is not necessarily “personal” for every actor in a chain if that actor cannot realistically re-identify the individual.
  • Pseudonymized data might sit outside the GDPR for one organization, even if a different organization could link it back to a person.
  • The Commission would have the power to adopt implementing acts clarifying when pseudonymized data should still be treated as personal data, based on current technologies.

For compliance teams, this shift is both an opportunity and a risk. It may narrow the scope of some obligations, but it will also force organizations to rigorously document their re-identification capabilities and assumptions.
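A common technical pattern behind that documentation is keyed pseudonymization: a party that never receives the key has no reasonably likely means of re-identification, which is exactly the kind of fact the entity-focused test would turn on. A minimal Python sketch (the key value and its management are hypothetical):

```python
import hashlib
import hmac

def pseudonymize(user_id: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym from a direct identifier using a keyed
    hash (HMAC-SHA256). A recipient without `secret_key` cannot reverse
    the pseudonym, a fact a controller would record when arguing the data
    is not personal data in that recipient's hands."""
    return hmac.new(secret_key, user_id.encode(), hashlib.sha256).hexdigest()

# Hypothetical key held only by the original controller.
key = b"held-only-by-the-original-controller"

p1 = pseudonymize("alice@example.com", key)
p2 = pseudonymize("alice@example.com", key)
assert p1 == p2            # stable: still usable for joins and analytics
assert "alice" not in p1   # the shared value carries no direct identifier
```

The design choice here is stability: the same input always maps to the same pseudonym, so downstream analytics still work, while re-identification capability lives entirely with whoever holds the key.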

2. Relaxed Transparency Obligations

Article 13 of the GDPR has been interpreted in maximally cautious ways, leading to long, repetitive privacy notices that few people read and that many businesses struggle to keep updated.
The Omnibus expands the circumstances in which controllers can rely on an exemption from individual notice.

Under the proposal, controllers would be able to skip or slim down disclosures where:

  • There are reasonable grounds to expect the data subject already has the information.
  • The processing is not likely to result in a high risk to the data subject.
  • The data is processed for scientific research, and informing individuals directly would be impossible or disproportionately burdensome.
  • Providing detailed notice would jeopardize the objectives of the processing (for example, undermining research design).

In such cases, controllers would still have to provide information indirectly, such as through public notices or easily accessible privacy documentation.

3. Stronger Support for Scientific Research and AI Development

The Omnibus also addresses a long-standing tension between the GDPR’s strict rules and the realities of research and AI model development. The proposal:

  • Defines “scientific research” in a way that can include projects with commercial goals.
  • Confirms that further processing of data for scientific purposes is deemed compatible with the original purpose.
  • Recognizes scientific research as a legitimate interest under Article 6(1)(f).

The text goes one step further by tackling the use of personal data for developing and operating AI models:

  • It confirms that controllers may rely on legitimate interest as a legal basis for AI model training and operation under the GDPR.
  • It makes clear that this pathway is blocked where other EU or national laws explicitly require consent (for example, under parts of the Digital Markets Act).
  • Controllers invoking legitimate interest must still run a robust balancing test, minimize data, and respect the right to object.

In addition, the Digital Omnibus would introduce a specific exception allowing incidental processing of special-category data during AI training where:

  • The controller does not aim to process sensitive data.
  • Residual sensitive data is promptly minimized and removed once identified.
  • Technical measures are taken to prevent and reduce such processing as far as possible.

Member states or EU law could still close this door and insist on consent for certain kinds of AI processing involving sensitive data.
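The "promptly minimized and removed once identified" duty implies a concrete pipeline step: run a sensitive-data detector over incoming training records, drop the hits, and keep a count for audit evidence. A sketch of that step, using a couple of regexes as a stand-in for what would in practice be a trained classifier:

```python
import re

# Hypothetical stand-in for a real sensitive-data detector; production
# systems would use a trained classifier, not two regexes.
SENSITIVE_PATTERNS = [
    re.compile(r"\b(?:diagnosed with|blood type)\b", re.IGNORECASE),
]

def strip_sensitive(records: list[str]) -> tuple[list[str], int]:
    """Drop training records that trip the detector and count removals,
    mirroring the duty to promptly minimize and remove residual
    special-category data once identified."""
    kept, removed = [], 0
    for text in records:
        if any(p.search(text) for p in SENSITIVE_PATTERNS):
            removed += 1  # in practice, also logged as audit evidence
        else:
            kept.append(text)
    return kept, removed

corpus = ["The weather in Ghent", "Patient was diagnosed with asthma"]
clean, n = strip_sensitive(corpus)
assert clean == ["The weather in Ghent"] and n == 1
```

The removal count matters as much as the filtering itself: it is the record that the "technical measures to prevent and reduce such processing" were actually applied.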

4. Addressing Abusive Data Subject Access Requests (DSARs)

Article 15 DSARs have become a double-edged sword. They are vital for transparency and control, but they are also sometimes used as tactical tools in litigation or employment disputes.
The Omnibus tries to bring some balance back.

The proposal would:

  • Allow controllers to reject or charge a reasonable fee for DSARs that are manifestly unfounded, excessive, or clearly weaponized.
  • Lower the evidentiary burden on controllers to demonstrate that a request crosses that line.

The core right of access remains, but organizations gain more leverage to push back against requests that serve primarily as pressure tactics rather than genuine transparency tools.
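Operationally, "more leverage" means having documented internal criteria for when a request is flagged for review rather than auto-fulfilled. A sketch of one such heuristic, with purely illustrative thresholds (the legal test remains "manifestly unfounded or excessive", judged case by case):

```python
from datetime import datetime, timedelta

def looks_excessive(request_times: list[datetime],
                    window: timedelta = timedelta(days=90),
                    threshold: int = 3) -> bool:
    """Flag a requester for human review when more than `threshold`
    DSARs arrive inside any `window`-long period. The numbers are
    illustrative assumptions, not values from the proposal."""
    request_times = sorted(request_times)
    for start in request_times:
        in_window = [t for t in request_times if start <= t < start + window]
        if len(in_window) > threshold:
            return True
    return False

# Four requests in ten days trips the (hypothetical) threshold.
ts = [datetime(2026, 1, d) for d in (2, 5, 9, 12)]
assert looks_excessive(ts) is True
```

A flag from a heuristic like this should trigger review, never automatic rejection; the controller still has to justify refusing or charging for the request.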

5. Clarifying Automated Decision-Making Under Article 22

Automated decision-making and profiling are among the most contested parts of the GDPR.
The Omnibus does not dismantle Article 22 but aims to add clarity for controllers and processors.

The proposals:

  • Confirm that fully automated decisions may be lawful when they are necessary for entering into or performing a contract.
  • Reiterate that automated decisions may also be lawful where authorized by EU or member state law, or based on valid consent.
  • Clarify that the existence of a human alternative does not prevent a controller from relying on fully automated decision-making if the legal criteria are met.

The familiar safeguards — such as a right to human review and protections related to special-category data — remain in place.

6. A Single EU Portal for Breach Reporting

Data and security breach obligations have multiplied across different EU instruments. The Omnibus goes for a “submit once, share widely” model by:

  • Creating a single EU-level portal for reportable data and security breaches.
  • Streamlining obligations under the GDPR, NIS2, DORA, CER, and the forthcoming Cyber Resilience Act.
  • Repealing overlapping reporting duties in the ePrivacy Directive.
  • Charging the EDPB with developing a standardized notification template to be adopted via implementing act.

Under the new model, there would be a unified high-risk threshold for notifying both supervisory authorities and affected individuals,
and organizations would have 96 hours from awareness of the breach to report via that single entry point.
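For incident response playbooks, the operative number is a deadline computed from the moment of awareness. A trivial sketch of that calculation, assuming the 96-hour window in the proposal:

```python
from datetime import datetime, timedelta, timezone

# Unified reporting window proposed for the single EU portal.
REPORTING_WINDOW = timedelta(hours=96)

def reporting_deadline(became_aware_at: datetime) -> datetime:
    """Latest moment to file through the single entry point, counted
    from when the organization became aware of the breach."""
    return became_aware_at + REPORTING_WINDOW

aware = datetime(2026, 3, 1, 14, 30, tzinfo=timezone.utc)
assert reporting_deadline(aware) == datetime(2026, 3, 5, 14, 30, tzinfo=timezone.utc)
```

Anchoring the clock to "awareness" rather than occurrence is the same convention the GDPR already uses for its 72-hour rule, so existing playbooks mostly need a constant changed, not a redesign.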

7. EU-Wide Standardization of DPIAs

Today, data protection impact assessment (DPIA) obligations are governed partly by EU rules and partly by national DPA lists. The Omnibus tries to simplify that landscape by:

  • Assigning the EDPB responsibility for drafting EU-wide lists of processing operations that do and do not require DPIAs.
  • Repealing provisions that allowed each national DPA to maintain its own lists.
  • Asking the EDPB to produce a standardized DPIA template and methodology, to be adopted via implementing act.
  • Requiring regular updates (at least every three years) to reflect technological change.

For multinational organizations, this shift promises a more consistent, predictable DPIA regime.

8. Reducing Cookie Consent Fatigue

If there is one part of the Digital Omnibus that ordinary users might feel most vividly, it is the effort to overhaul the way cookie consent works.
The Commission’s own figures suggest EU users collectively spend hundreds of millions of hours each year interacting with cookie banners.

The Omnibus proposes, among other things:

  • Moving the governance of personal data processing on and from terminal equipment fully under the GDPR, simplifying the interplay between GDPR and the ePrivacy rules.
  • Reducing situations where consent is required for lower-risk processing, including:
    • Transmitting electronic communications.
    • Providing requested services.
    • Measuring audiences and usage statistics.
    • Maintaining or restoring security.
  • Allowing processing for these purposes to be based on any appropriate GDPR legal basis (including legitimate interest), with attention to:
    • Whether data subjects are children.
    • Their reasonable expectations.
    • The scale and intrusiveness of processing.
    • Whether the processing amounts to continuous monitoring of private life.
  • Requiring that where consent is used, interfaces must offer a clear, single-click option to accept or refuse, reducing dark patterns.
  • Imposing a six-month moratorium on repeat consent requests for the same purpose after a refusal.
  • Introducing universal, settings-based preference mechanisms, allowing users to express consent or opt-out consistently at browser, operating system, or app store level.

The net effect would be fewer banners, less fatigue, and more meaningful user choice — while still keeping organizations on a clear legal hook for how they use data.
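Honoring settings-based preference signals is ultimately a server-side decision rule. A sketch using the Sec-GPC header (the signal sent by Global Privacy Control, one existing browser-level mechanism) as an example; the precedence between such signals and banner choices would be fixed by the final legislative text, so the ordering below is an assumption:

```python
from typing import Optional

def tracking_allowed(headers: dict[str, str],
                     stored_consent: Optional[bool]) -> bool:
    """Decide whether non-essential processing may run for this request.

    Assumed precedence: a browser-level opt-out signal wins over any
    stored banner choice, and absence of any choice defaults to no
    tracking.
    """
    if headers.get("Sec-GPC") == "1":  # user opted out at browser level
        return False
    if stored_consent is None:         # no choice recorded yet
        return False
    return stored_consent

assert tracking_allowed({"Sec-GPC": "1"}, stored_consent=True) is False
assert tracking_allowed({}, stored_consent=True) is True
assert tracking_allowed({}, stored_consent=None) is False
```

The engineering consequence of the proposal is visible here: consent state stops being a banner-only artifact and becomes one input among several, alongside browser, OS, or app-store level signals.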

Digital Omnibus Changes to the EU AI Act

The AI Act is barely out of the gate — it entered into force in August 2024 — yet the Commission has already found areas where the legal text can be better aligned with the realities of building and deploying AI systems.

1. Sensitive Data for Bias Detection and Correction

Existing provisions allow some high-risk AI systems to use sensitive data in a constrained way to detect and mitigate bias.
The Omnibus would:

  • Extend this allowance more broadly to relevant AI actors, systems and models.
  • Clarify that such processing must be strictly necessary for bias detection and correction, and accompanied by strong safeguards.

2. Shifting AI Literacy Obligations

The AI Act currently expects providers and deployers to ensure that their staff reaches a “sufficient level of AI literacy.”
In practice, this has proven difficult, particularly for smaller organizations.

The Omnibus would:

  • Shift the primary AI literacy obligation to member states and the Commission.
  • Ask public authorities to encourage — rather than impose — AI literacy measures within companies.

3. Lightening the Load for Low-Risk High-Risk Systems

Some systems technically fall into “high-risk” categories in Annex III but, in context, pose little real-world risk.
The Omnibus recognizes this by:

  • Removing the registration requirement in the EU database for such genuinely low-risk systems.

4. More Flexibility for SMEs and SMCs

To avoid turning the AI Act into a barrier to entry for smaller players, the Omnibus:

  • Extends simplified documentation requirements and mitigated penalties from SMEs to SMCs.
  • Aligns the definitions of SMEs and SMCs with prior Commission recommendations.
  • Expands simplified quality management system options beyond microenterprises and clarifies that these systems should be proportionate to company size.

5. Clarifying Conformity Assessments

Where a single AI system falls under both Annex I.A and Annex III, the Omnibus clarifies that:

  • The stricter Annex I.A conformity assessment regime governs, avoiding regulatory arbitrage or confusion.

6. EU-Level AI Regulatory Sandboxes

To support innovation, the Omnibus empowers the Commission’s AI Office to:

  • Launch EU-level AI regulatory sandboxes for certain systems.
  • Require member states to strengthen cross-border cooperation among their national sandboxes.
  • Adopt implementing acts specifying how these sandboxes are created, run, and supervised.

For participating providers, the sandbox would:

  • Offer a controlled environment for development, training, testing and validation of innovative AI systems.
  • Allow for carefully supervised real-world testing plans agreed with competent authorities.

7. Centralized Supervision by the AI Office

The Omnibus would significantly expand the AI Office’s role by granting it exclusive competence for:

  • Annex III AI systems based on general-purpose AI models where the model and system are developed by the same provider.
  • AI systems that constitute or are integrated into very large online platforms or very large search engines, as defined under the Digital Services Act.

The AI Office would also:

  • Conduct premarket conformity assessments for certain high-risk systems subject to third-party assessment.
  • Coordinate closely with national authorities and DSA enforcement to ensure coherent supervision.

8. Extended Timelines for High-Risk AI Obligations

Recognizing that standards and implementation support are still evolving, the Omnibus contemplates:

  • Delaying the practical start of high-risk obligations by up to 16 months beyond the original August 2026 date, tied to the availability of adequate compliance support.
  • A phased application of obligations: a shorter interval for Annex III systems and a longer one for Annex I systems after the Commission confirms that support exists.
  • Backstop dates in 2027 and 2028, ensuring obligations do eventually bite even if standards are delayed.
  • Grandfathering for lawfully placed high-risk systems without significant design changes, avoiding forced re-certification.

9. Six-Month Grace Period for Output Marking

AI systems generating synthetic content (audio, images, video, text) placed on the market before 2 August 2026 would have until 2 February 2027 to:

  • Implement machine-readable detectability or other marking mechanisms for AI outputs.
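At its simplest, a machine-readable marker is structured provenance metadata attached to the output. The envelope below is hypothetical (real deployments would use an established provenance standard such as C2PA for media), but it shows the shape of the obligation:

```python
import json

def wrap_ai_output(text: str, model_id: str) -> str:
    """Attach a machine-readable provenance marker to generated content.

    Hypothetical JSON envelope for illustration; the AI Act does not
    prescribe this format.
    """
    return json.dumps({
        "content": text,
        "provenance": {"ai_generated": True, "model": model_id},
    })

payload = json.loads(wrap_ai_output("Hello", "demo-model"))
assert payload["provenance"]["ai_generated"] is True
```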

10. Streamlined Conformity Assessment and Post-Market Monitoring

To reduce duplication, the Omnibus:

  • Allows conformity assessment bodies to file a single application and undergo a single assessment process for designation under both the AI Act and other relevant EU harmonization laws.
  • Removes the obligation to follow a harmonized Commission template for post-market monitoring plans, instead requiring providers to maintain such plans in their technical documentation, guided by Commission recommendations.

Comparison Table: Original vs Digital Omnibus Compliance Landscape

Definition of Personal Data (GDPR)
  • Current framework: Broad, largely controller-agnostic definition. Pseudonymized data often treated as personal data if re-identification is possible in principle.
  • Digital Omnibus proposal: Relative, entity-focused test: data is personal only where the holder has means reasonably likely to re-identify. Commission can clarify treatment of pseudonymized data.
  • Compliance impact: Potentially narrows GDPR scope for some actors; requires careful documentation of re-identification capabilities and technical measures.

Transparency (Article 13)
  • Current framework: Wide obligation to inform data subjects directly, with limited exceptions. Risk of repetitive notices and banner overload.
  • Digital Omnibus proposal: Expanded exemptions where data subjects likely already have the information or risk is low; more flexibility for research and disproportionate-effort situations.
  • Compliance impact: Reduced notice burden, especially for recurring or low-risk processing; increased reliance on layered and indirect transparency methods.

Scientific Research & AI Development
  • Current framework: Legal uncertainty around commercial research, compatibility of further processing, and legitimate interest basis for AI training.
  • Digital Omnibus proposal: Scientific research explicitly compatible and recognized as a legitimate interest; clearer path for using legitimate interest in AI model development with safeguards.
  • Compliance impact: More predictable legal basis for research and AI projects; still requires strong minimization, balancing tests, and handling of objections.

Special Categories of Data in AI Training
  • Current framework: Strict prohibition absent explicit consent or narrow exceptions; incidental collection is legally awkward.
  • Digital Omnibus proposal: New exemption for residual, unintended processing of special-category data in AI training, subject to prompt removal and technical safeguards.
  • Compliance impact: Reduces legal friction for large-scale training datasets; raises expectations around technical controls to detect and strip sensitive data.

Data Subject Access Requests (DSARs)
  • Current framework: Controllers may refuse or charge for manifestly unfounded or excessive requests, but burden of proof is high and guidance uneven.
  • Digital Omnibus proposal: Easier for controllers to reject or charge for abusive or tactical DSARs; lower burden to prove that a request is excessive or weaponized.
  • Compliance impact: Gives organizations more tools to manage DSAR volume; requires internal criteria and documentation to avoid overreach.

Automated Decision-Making (Article 22)
  • Current framework: General right not to be subject to solely automated decisions with legal or significant effects; legal bases and “necessity” often debated.
  • Digital Omnibus proposal: Clarifies when automated decisions can rely on contracts, law, or consent; confirms that availability of human decision-making does not automatically bar automation.
  • Compliance impact: Greater legal certainty for algorithmic decision-making; still requires meaningful safeguards and human review options.

Breach Reporting
  • Current framework: Fragmented reporting obligations across GDPR, NIS2, DORA, CER and ePrivacy; varied templates and channels.
  • Digital Omnibus proposal: Single EU portal, standardized templates from the EDPB, unified high-risk threshold and a 96-hour reporting deadline.
  • Compliance impact: Simplified reporting operations; encourages consolidated incident response workflows across legal and security teams.

DPIAs
  • Current framework: EU rules plus national DPA-specific lists, leading to uneven obligations across member states.
  • Digital Omnibus proposal: EU-wide lists and standardized methodology from the EDPB; national lists phased out.
  • Compliance impact: Harmonized expectations for multinational organizations; easier to design a single DPIA program for the EU market.

Cookie Consent & ePrivacy
  • Current framework: Frequent banners; overlapping GDPR/ePrivacy rules; consent often required even for low-risk processing.
  • Digital Omnibus proposal: More processing permitted without consent when risk is low; single-click consent/decline; six-month “do not ask again” after refusal; universal preference signals.
  • Compliance impact: Fewer banners, more predictable legal bases, stronger scrutiny of dark patterns; engineering work needed to respect browser/OS/app-level settings.

AI Act – AI Literacy
  • Current framework: Providers and deployers must ensure staff reach sufficient AI literacy, regardless of size.
  • Digital Omnibus proposal: Responsibility shifts to member states and the Commission; companies are encouraged, not mandated, to take measures.
  • Compliance impact: Less formal burden on smaller companies; literacy may still become a soft expectation in supervisory practice.

AI Act – SME/SMC Treatment
  • Current framework: Some flexibilities for SMEs; microenterprises have specific simplified options for quality management systems.
  • Digital Omnibus proposal: Extends benefits to SMCs; broader simplified quality management options; proportionality principle explicitly tied to organization size.
  • Compliance impact: Easier compliance runway for smaller and mid-sized AI players; may encourage innovation outside big tech.

AI Act – High-Risk Timelines
  • Current framework: High-risk rules set to apply on a fixed schedule, with limited flexibility if standards or guidance lag behind.
  • Digital Omnibus proposal: Possibility of up to 16-month delay for high-risk obligations, with phased application and backstop deadlines in 2027–2028.
  • Compliance impact: More realistic implementation timelines; reduces risk of “paper compliance” where standards are not yet operational.

AI Act – Supervision & AI Office
  • Current framework: Shared competence among national authorities; AI Office role still maturing.
  • Digital Omnibus proposal: AI Office gains exclusive competence for certain general-purpose-model-based systems and very large platform/search systems; expanded premarket assessment role.
  • Compliance impact: More centralized, consistent enforcement at EU level; likely greater scrutiny of large players and foundation model providers.

What Comes Next for the Digital Omnibus

The Digital Omnibus package is now open for public feedback while translations into all EU languages are finalized.
After that, it will move into the full legislative machinery of the EU, with Parliament and Council negotiating the final compromise text.

In parallel, the Commission’s broader Digital Fitness Check is underway, assessing how all digital laws — including the Digital Services Act and Digital Markets Act — fit together in practice.

That process could ultimately produce additional proposals that either build on or sit alongside the Omnibus reforms.

For now, organizations should:

  • Map where they rely most heavily on consent, legitimate interest, and research exemptions.
  • Review their breach reporting playbooks with a “single portal” model in mind.
  • Evaluate how much pseudonymized data they hold and what re-identification means in their specific technical context.
  • Prepare for a future in which browser-level preference signals and simplified cookie regimes are the norm rather than the exception.
  • For AI developers, track sandbox opportunities and evolving high-risk timelines closely.

The Digital Omnibus is not a revolution, but it is a substantial recalibration of how Europe understands and administers its digital rulebook.
For privacy, security, and AI teams, the next few years will be about learning to operate in this more nuanced — but potentially more workable — compliance environment.
