The Digital Omnibus doesn’t lower the bar on data protection. It moves the goalposts entirely — and every organization processing data in Europe now needs to be able to explain, in writing, why it can’t identify the people in its datasets.
For the better part of a decade, compliance teams working under the EU General Data Protection Regulation have operated with a binary instinct about personal data: either you have it, or you don’t. If there was any realistic path from a dataset back to a living individual — anywhere in the world, by anyone with sufficient resources — most organizations treated the data as personal data and applied GDPR obligations accordingly. It was the safe answer. It was also, as it turns out, not what the law actually requires.
The EU’s proposed Digital Omnibus package is in the process of correcting that misread — not by weakening data protection, but by finally forcing the GDPR’s actual logic into the operative text where it cannot be ignored. The consequences for compliance teams are significant and largely underappreciated.
What Recital 26 Always Said — and Why Nobody Fully Acted on It
Recital 26 of the GDPR has been there since day one. It establishes that data protection obligations apply to personal data — information relating to an identifiable natural person — and that identifiability should be assessed by reference to the means “reasonably likely to be used” to identify someone, taking into account all relevant factors: cost, time, available technology, and the purpose of the processing.
This is a contextual, relative standard. Identifiability is not a universal property of a dataset — it is a property of a dataset in relation to a specific entity with specific capabilities, specific data holdings, and specific intentions. If Entity A cannot realistically identify the individuals in a dataset, the GDPR does not apply to Entity A’s processing of that data — even if Entity B, with different capabilities and different auxiliary information, could make that identification.
That logic has been consistently affirmed by the Court of Justice of the European Union. The Breyer v. Bundesrepublik Deutschland decision applied it directly: dynamic IP addresses were personal data for the internet service provider that could link them to subscribers, but not necessarily for a website operator that had no realistic means of making that connection. EDPS v. Single Resolution Board reinforced the same principle. The relative concept of personal data is not a doctrinal novelty — it has been settled EU case law for years.
And yet, in practice, many organizations and supervisory authorities defaulted to absolutism. If any entity anywhere could identify a person from the data, the data was treated as personal data for everyone. The nuance of Recital 26 never fully migrated from legal theory into operational compliance.
The Case Law Problem the Omnibus Is Fixing
The practical impact of that absolutism was manageable as long as it only affected internal data governance. It became something more serious when recent CJEU decisions — particularly Gesamtverband Autoteile-Handel e.V. v. Scania and a subsequent reading of the SRB judgment — appeared to introduce a theory of backward attribution: the idea that data could become personal data for a sending entity if the recipient of that data had the means to identify individuals from it.
Read literally, this created a significant compliance problem. A company sharing technical or operational data with a vendor or partner would need to assess that recipient’s identification capabilities before concluding whether the transfer was a transfer of personal data. Given how interconnected modern data ecosystems are — with dozens of vendors, processors, and third-party integrations involved in any given data flow — that assessment burden would be essentially unmanageable at scale.
The EU Data Act made the problem acute. Chapter 2 of the Data Act creates obligations for data holders to make data available to users and third parties under defined circumstances. Much of that data — product usage data, technical logs, industrial telemetry — is not personal data in the hands of the data holder. But if backward attribution logic applied, data holders would need to determine, for every potential recipient, whether that recipient could link the transferred data back to an identifiable person. In practice, that determination would often be impossible to make with confidence. The result: a compliance deadlock where sharing data risked GDPR liability and withholding it risked Data Act sanctions simultaneously.
What the Digital Omnibus Actually Changes
The Digital Omnibus resolves this tension by doing what should have happened years earlier — codifying the relative concept of personal data directly into Article 4(1) of the GDPR, where it carries binding legal force rather than recital-level guidance.
The proposed new language is precise. Information is not personal data for an entity that cannot identify the individual to whom it relates, taking into account the means reasonably likely to be used by that entity. And critically: information does not become personal for a sending entity merely because a subsequent recipient has means reasonably likely to identify the individual.
The Commission has confirmed publicly — including through a representative at the IAPP Europe Data Protection Congress in 2025 — that the third and fourth sentences of the new Article 4(1) are explicitly designed to neutralize the backward attribution problem that the Scania and SRB decisions risked creating in a Data Act context. Other entities’ keys, in other words, do not unlock your door.
Notably, the European Data Protection Board and European Data Protection Supervisor’s joint opinion on the Digital Omnibus does not challenge this direction. The EDPB-EDPS response accepts that some data will fall outside the GDPR for some entities and focuses its concern on ensuring that the determinations are made rigorously and are subject to regulatory scrutiny — not on reversing the underlying principle.
This is not deregulation. It is a redistribution of accountability. The GDPR’s scope narrows in some contexts, but the compliance obligation to prove that the narrowing applies becomes correspondingly more demanding.
The New Document Every Organization Will Need
Here is where the practical implications for compliance teams become concrete.
Once the Digital Omnibus codifies the relative concept into Article 4(1), organizations will be expected to articulate — in documented form — why the data they process falls outside the GDPR for their specific context. Bare assertions that data is “anonymized” or “not personal data,” made without supporting analysis, will carry less and less weight as supervisory authorities turn their scrutiny toward the quality of the reasoning rather than the label applied.
Think of it as the deidentification equivalent of a Transfer Impact Assessment. After the Schrems II decision invalidated the Privacy Shield framework, organizations transferring personal data to third countries were required to produce documented assessments of whether the destination country’s legal framework provided adequate protection — specific to the data, the transfer mechanism, and the recipient. The assessment had to demonstrate actual analysis, not just a conclusion.
A deidentification statement works along similar lines. For each dataset or data processing activity where an organization claims the data falls outside the GDPR — because identification is not realistically possible given the entity’s means — regulators will increasingly expect documentation that addresses:
- What categories of information are present in the dataset, and what direct identifiers have been removed or transformed.
- What additional information the organization does not hold that would be necessary to reidentify individuals.
- What technical, contractual, and organizational controls prevent the organization from acquiring or developing such information.
- Why, given the organization’s specific context, purpose, and capabilities, identification is not reasonably likely.
- How that assessment was reviewed, by whom, and when it was last updated.
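To make the elements above concrete, here is a minimal sketch of how a privacy program might structure a deidentification statement as a standing record. This is purely illustrative: the field names, the class, and the one-year review interval are assumptions, not anything prescribed by the Omnibus text or any regulator.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record type for a deidentification statement.
# Field names mirror the documentation elements discussed above;
# none of them are mandated by the proposed Article 4(1).

@dataclass
class DeidentificationStatement:
    dataset: str                       # dataset or processing activity covered
    data_categories: list[str]         # categories of information present
    removed_identifiers: list[str]     # direct identifiers removed or transformed
    missing_auxiliary_info: list[str]  # information the org does NOT hold that
                                       # reidentification would require
    controls: list[str]                # technical/contractual/organizational barriers
    rationale: str                     # why identification is not reasonably likely
                                       # for this entity specifically
    reviewed_by: str                   # who performed the last review
    last_reviewed: date                # when it was last reviewed

    def is_current(self, today: date, max_age_days: int = 365) -> bool:
        """Flag statements overdue for periodic review (interval is an assumption)."""
        return (today - self.last_reviewed).days <= max_age_days
```

Even a lightweight structure like this makes the analysis auditable: each field forces the organization to state what it knows, what it lacks, and why, rather than recording only the conclusion.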
The scope of this requirement extends well beyond classic pseudonymization debates. IP addresses, device identifiers, behavioral logs, location data at varying levels of precision, technical product telemetry — all of these categories have historically generated enforcement risk even where Recital 26 and supporting case law supported a non-personal-data conclusion for specific entities. The Digital Omnibus proposal, combined with the Commission’s explicit confirmation of its intent, signals that organizations can now defend those conclusions — but that the defense requires substantive documentation, not just a policy assertion.
Anonymization vs. Deidentification: The Distinction That Now Matters More Than Ever
One implication of the Omnibus proposal that deserves explicit attention is what it does not change. The bar for anonymization — the complete and irreversible removal of all means by which an individual could be identified — remains exactly where it was. Truly anonymized data has never been personal data under the GDPR and continues not to be. That standard is stringent, binary, and rarely achievable in practice for datasets of meaningful complexity.
What the Omnibus changes is the space between true anonymization and obvious personal data. It establishes that this middle ground — data that is not anonymous in the absolute sense but is also not practically identifiable given a specific entity’s realistic capabilities — is a legally cognizable category with defined compliance implications. Organizations do not need to achieve the nearly impossible standard of perfect anonymization to argue that GDPR obligations don’t apply. They need to demonstrate, rigorously, that identification is not realistically available to them.
For compliance teams, this means the investment in anonymization technology and pseudonymization architecture remains valuable — but the documentation and reasoning layer around that architecture becomes equally important. A sophisticated technical implementation that cannot be explained in terms of why it defeats identification for your specific organization, with your specific auxiliary data holdings, is a half-finished compliance posture under the emerging framework.
What Compliance Teams Should Be Doing Now
The Digital Omnibus is still working through the EU legislative process, and the timeline to final adoption involves member state negotiations and European Parliament review. The leaked compromise proposals from February 2026 suggested some member states were pushing to pull back on the revised Article 4(1) definition, indicating the final text could evolve. But the direction of travel — toward codified relative identifiability and documented deidentification analysis — reflects years of CJEU jurisprudence and Commission intent. Even if the final text is modified, the underlying compliance expectation is already forming.
Organizations that want to get ahead of this should be taking several steps now.
Conduct a data inventory specifically focused on datasets currently treated as outside the GDPR. For each one, pressure-test the reasoning: what is the actual basis for concluding that the data is not personal data for your organization? Is that basis documented? Could it withstand a supervisory authority inquiry?
Map the auxiliary information gap. The relative concept turns on what additional information you do not have and cannot realistically access. That negative space needs to be identified and described with specificity — not just asserted.
Review vendor and partner agreements for identification capability clauses. If any of your data processors, recipients, or partners have rights to combine your data with their own datasets in ways that could enable identification, that contractual capability is relevant to your deidentification analysis even if they haven’t exercised it.
Build deidentification statements as a standing document type in your privacy program. These should be living documents, reviewed when the dataset’s use, the organization’s auxiliary data holdings, or the relevant technology landscape changes materially.
Coordinate with your security and engineering teams on technical controls. The strength of a deidentification analysis depends in part on organizational and technical barriers to reidentification — access controls, data minimization practices, separation of datasets. Those controls need to be documented as part of the compliance record, not just implemented operationally.
The Bigger Picture
The Digital Omnibus represents the EU’s recognition that a data protection framework built for the internet of 2018 needs calibration for the data ecosystem of 2026 — one characterized by large-scale technical data sharing, AI training pipelines, interconnected vendor relationships, and the EU Data Act’s affirmative obligations to make data available. Maintaining GDPR coherence in that environment requires a relative, contextual concept of personal data that can be applied consistently across the full range of modern data flows.
That is what Recital 26 always offered. The Omnibus is simply making sure organizations can no longer treat it as optional.
For compliance teams, the practical message is straightforward: the era of asserting that data falls outside the GDPR without supporting analysis is ending. The organizations that build rigorous, documented deidentification practices now will be positioned to operate with confidence in the emerging framework. The ones that don’t will find themselves scrambling to produce reasoning they should have developed years earlier — under circumstances that are unlikely to be favorable.