EU Digital Laws Mapping

The EU Data Governance Act (DGA) is often summarized as “a data sharing law.” That shorthand is directionally correct but operationally incomplete. The DGA is best understood as a governance layer for making certain forms of data reuse and data sharing more trustworthy, more structured, and more predictable—especially where public-sector data and intermediary-enabled sharing models are involved.

Because real-world data sharing frequently involves some amount of personal data—even when the primary objective is reuse of non-personal data—the DGA’s most important practical question is not “what does it allow?” but “how does it behave when the GDPR is in the room?” The mapping sheet answers this crisply: when personal data is processed in connection with the DGA, the GDPR prevails, including the powers and competencies of GDPR supervisory authorities.

This article translates that mapping into a structured explanation: what the DGA is doing, where the GDPR controls the outcome, and how to design compliant reuse, intermediation, and altruism mechanisms without confusing the two regimes.

How the Data Governance Act Interacts with the GDPR

The DGA creates a framework for certain kinds of data reuse and “data sharing services,” but it does not displace EU data protection law. If personal data is involved, GDPR rules and protections control—and in a conflict, the GDPR prevails.

In practice, the DGA-GDPR interplay clusters into five operational themes:

  • Precedence and definitional alignment: the DGA adopts core GDPR definitions (including “personal data,” “consent,” “data subject,” and “processing”) and defers to GDPR where personal data is processed.
  • Anonymization and secure reuse models: public-sector reuse workflows should anonymize personal data where possible; where anonymization would destroy utility, secure processing environments and GDPR DPIA/prior consultation may become relevant.
  • Data subject rights enablement: a subset of DGA data intermediation services is explicitly oriented toward helping individuals exercise GDPR rights (access, rectification, erasure, restriction, portability).
  • Consent mechanics, including data altruism: the DGA contemplates a standardized data altruism consent form, but any consent involving personal data must satisfy GDPR consent requirements, including the ability to withdraw consent at the level of a specific processing operation.
  • International transfers and government access: the DGA imposes protective measures for international transfer/governmental access to non-personal data that resemble GDPR transfer/access constraints for personal data.

The core principle: GDPR precedence when personal data is involved

The mapping’s central rule is straightforward: whenever personal data is processed “in connection with” the DGA, the GDPR’s rules and protections apply; in the event of a conflict, the GDPR prevails, and GDPR supervisory authorities retain their powers and competencies.

For compliance teams, this is more than a hierarchy statement. It is an instruction about how to resolve ambiguity. If a DGA-enabled reuse scenario touches personal data—even incidentally—then the analysis must “snap to” GDPR: lawful basis, transparency, purpose limitation, data minimization, rights handling, processor/controller roles, and transfer restrictions. The DGA may still impose additional governance requirements for the reuse channel, but it cannot relax GDPR constraints.

Precedence and shared definitions: why the DGA adopts GDPR concepts

The DGA adopts the GDPR’s core definitions of “personal data,” “consent,” “data subject,” and “processing.” This matters because it prevents an interpretive gap where the same dataset could be treated as “personal data” under the GDPR but something else under the DGA. It also means that when the DGA uses consent language (including in the altruism context), it is pulling on GDPR’s consent standards rather than inventing a separate consent concept.

Practically, definitional alignment reduces the risk of “compliance arbitrage,” where organizations might otherwise seek to re-label personal data activities as DGA activities to avoid GDPR. The mapping makes clear that this is not how the ecosystem is supposed to function.

Anonymization, reuse of protected public-sector data, and secure processing environments

The mapping highlights an operational path that many organizations will recognize as the hardest part of privacy engineering: data that is valuable for reuse often becomes much less valuable if fully anonymized.

In the DGA context, public sector bodies that decide to grant access for reuse of protected data should anonymize personal data. That’s the baseline. But the mapping also acknowledges the reality that anonymization or modification may make data unusable for the re-user.

Where anonymization destroys utility, the mapping points toward an alternative architecture: allow on-premises or remote reuse of non-anonymized data within a secure processing environment, provided GDPR risk controls are satisfied—specifically referencing the possibility that a data protection impact assessment (DPIA) and prior consultation with a supervisory authority may need to be carried out in order to allow such reuse.

Read operationally, this suggests a structured decision tree for public-sector reuse that aligns with GDPR’s risk-based governance model:

  1. Start with anonymization. If the reuse objective can be met with anonymized personal data, prefer anonymization and document the transformation approach.
  2. If anonymization breaks the use case, assess whether a secure processing environment can reduce risk to a minimal level. The idea is to permit controlled access without handing over raw, fully exposed datasets.
  3. Run GDPR governance steps appropriate to the risk profile. Where required, conduct a DPIA and consider prior consultation pathways, especially for high-risk processing or where residual risk remains.
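The decision tree above can be sketched as a small routing function. This is an illustrative model only: the names (ReuseRequest, reuse_pathway, and the step labels) are hypothetical shorthand, not DGA or GDPR terminology, and real assessments involve far more context than four booleans.

```python
from dataclasses import dataclass

@dataclass
class ReuseRequest:
    """Hypothetical summary of a public-sector reuse scenario."""
    contains_personal_data: bool
    anonymization_preserves_utility: bool
    secure_environment_available: bool
    residual_risk_high: bool

def reuse_pathway(req: ReuseRequest) -> list[str]:
    """Return the governance steps suggested by the decision tree above."""
    if not req.contains_personal_data:
        # DGA governance still applies, but the GDPR branch is not triggered.
        return ["grant_reuse"]
    if req.anonymization_preserves_utility:
        # Step 1: prefer anonymization and document the transformation.
        return ["anonymize", "document_transformation", "grant_reuse"]
    if not req.secure_environment_available:
        # No controlled-access alternative: do not proceed as designed.
        return ["refuse_or_redesign"]
    # Step 2: secure processing environment; step 3: GDPR risk governance.
    steps = ["secure_processing_environment", "dpia"]
    if req.residual_risk_high:
        steps.append("prior_consultation_with_supervisory_authority")
    steps.append("grant_controlled_reuse")
    return steps
```

The point of the sketch is the ordering: anonymization is evaluated before secure reuse, and DPIA/prior consultation attach to the non-anonymized branch, mirroring GDPR’s risk-based escalation.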

For privacy lawyers, the doctrinal importance is that the DGA does not create a “public sector reuse exception” to GDPR. Rather, it encourages reuse through governance mechanisms that still sit inside GDPR’s risk architecture when personal data is present.

Data intermediation services and data subject rights: the “agency” model

A distinctive feature in the mapping is the recognition of a category of DGA data intermediation services that is designed to enhance the agency of data subjects and individuals’ control over data relating to them.

In other words, some intermediaries are not just “data pipes.” They are rights-enablement infrastructure. The mapping states that DGA data intermediation services may offer assistance to data subjects in giving and withdrawing consent and in exercising GDPR rights to access, rectification, erasure (right to be forgotten), restriction of processing, and data portability.

This matters for enforcement and compliance in three ways:

  • Rights workflows become part of the product surface. If an intermediary offers rights enablement, regulators will likely expect the workflows to be accurate, timely, and not misleading.
  • Role clarity becomes essential. An intermediary that “assists” in rights exercise may still be a processor in some contexts and a controller in others; the mapping does not resolve that automatically, but it signals that GDPR rights are not optional in this channel.
  • Consent and withdrawal must be real, not performative. If an intermediary advertises consent toggles, a mismatch between the UI and backend processing will invite regulatory attention.

Consent and data altruism: standardization without downgrading GDPR

The mapping references an EU-level data altruism consent form to be developed by the European Commission in consultation with the European Data Protection Board and the European Data Innovation Board to facilitate the collection of data based on data altruism.

However, the mapping is explicit about the constraint: whenever personal data is collected via the data altruism consent form, the data altruism organization should ensure that data subjects can give and withdraw consent from a specific data processing operation in compliance with the GDPR.

Two compliance implications follow:

  • Granularity is non-negotiable. “Altruism” as a purpose label does not justify blanket, non-specific consent. Withdrawal must be operationally linked to the processing operation, not merely a membership or program-level toggle.
  • Standard forms do not immunize bad implementations. Even a Commission-developed form does not override GDPR consent quality requirements; it is a facilitation tool, not a legal shortcut.
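One way to make operation-level withdrawal concrete is to key consent records to the specific processing operation rather than to a program or membership. A minimal sketch, with hypothetical names (AltruismConsentLedger and its methods are illustrative, not a prescribed design):

```python
class AltruismConsentLedger:
    """Tracks consent per (data subject, processing operation) pair, so
    withdrawal applies at the operation level, not a program-level toggle."""

    def __init__(self) -> None:
        # (subject_id, operation) -> consent currently active?
        self._consents: dict[tuple[str, str], bool] = {}

    def give(self, subject_id: str, operation: str) -> None:
        self._consents[(subject_id, operation)] = True

    def withdraw(self, subject_id: str, operation: str) -> None:
        # Withdrawal targets one specific operation; other consents
        # held by the same subject are untouched.
        if (subject_id, operation) in self._consents:
            self._consents[(subject_id, operation)] = False

    def is_permitted(self, subject_id: str, operation: str) -> bool:
        # No record, or a withdrawn record, means processing must not proceed.
        return self._consents.get((subject_id, operation), False)
```

The design choice this illustrates: the withdrawal path is as granular as the consent path, so a subject can exit one processing operation without being forced out of the altruism program entirely.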

International transfers and government access: convergence across personal and non-personal data

The mapping notes that public sector bodies, data intermediation service providers, and data altruism organizations must take “all reasonable technical, legal and organizational measures, including contractual arrangements,” to prevent international transfer or governmental access to non-personal data where such transfer or access conflicts with EU or national law.

It further explains that the DGA’s requirements around international transfer of and access to non-personal data resemble the requirements imposed by the GDPR on international transfer and access to personal data.

That resemblance is strategically important. It reflects an EU policy trend: the EU’s sovereignty and rule-of-law concerns do not stop at personal data. Certain categories of non-personal data—especially where public-sector “protected data” is concerned—can trigger similar governance instincts, even if the doctrinal basis is not identical to GDPR.

For practitioners, this is a signal to align transfer/access controls across datasets rather than running two inconsistent programs: one “strict” program for personal data and one “loose” program for non-personal data. In DGA-covered contexts, non-personal data may carry transfer/access constraints that feel GDPR-like, and organizations should plan accordingly.

A practical compliance blueprint: translating the map into controls

If you are building a DGA-relevant reuse or sharing program, the following control areas tend to be the make-or-break points in audits and enforcement inquiries:

  • Data classification that is realistic, not aspirational: Identify whether personal data is present, likely present, or inferentially recoverable. If personal data is in scope, anchor to GDPR first.
  • Anonymization governance: Document anonymization decisions; when anonymization breaks utility, document why and pivot to secure processing environments with GDPR risk controls.
  • DPIA and prior consultation readiness: Build a repeatable DPIA workflow for reuse scenarios where residual risk is non-trivial, especially for remote/on-premises secure reuse of non-anonymized data.
  • Rights enablement design: If you offer a “rights dashboard,” ensure it maps to actual backend processing states, with verifiable withdrawal and access/erasure/portability execution paths.
  • International access and transfer safeguards: Apply GDPR-grade thinking to DGA-sensitive non-personal data where transfers or foreign government access could conflict with EU or Member State law.
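The rights-dashboard control above lends itself to a simple automated check: compare the consent states the UI advertises against the backend’s actual processing states and flag every disagreement. A minimal sketch, with hypothetical names (dashboard_mismatches and the operation labels are illustrative):

```python
def dashboard_mismatches(ui: dict[str, bool], backend: dict[str, bool]) -> list[str]:
    """Return processing operations where the dashboard and the backend
    disagree — exactly the UI/backend mismatch that invites regulatory
    attention. An operation missing from one side is treated as inactive."""
    operations = set(ui) | set(backend)
    return sorted(
        op for op in operations
        if ui.get(op, False) != backend.get(op, False)
    )
```

Run as a recurring reconciliation job, a check like this turns “the dashboard maps to actual backend processing states” from an aspiration into a testable invariant.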

Common enforcement narratives this mapping supports

For litigators and enforcement teams, the mapping supports a handful of recurring arguments and investigative angles:

  • “GDPR-first” framing: If personal data is involved, attempts to treat the DGA as a permissive basis for reuse should be tested against GDPR requirements and supervisory authority competence.
  • Consent integrity: Data altruism mechanisms and intermediary consent tooling must preserve GDPR-grade consent and withdrawal—particularly the ability to withdraw from a specific processing operation.
  • Secure reuse as a substitute for anonymization only when risks are minimal: Where anonymization is skipped, the security environment and risk governance (including DPIA/prior consultation where required) become the compliance fulcrum.
  • Transfer/access safeguards beyond personal data: The DGA’s “reasonable measures” language for non-personal data can be used to evaluate cross-border data governance even when GDPR does not formally apply.

Mapping Framework

The mapping sheet’s message is intentionally simple: the DGA is an enabling framework for data sharing and reuse, but it is not a detour around GDPR. Where personal data appears in DGA-enabled activities, GDPR rules, protections, and supervisory authority powers prevail.

What makes the mapping practically valuable is that it also points to the compliance engineering reality: anonymize where possible; where anonymization breaks the use case, consider secure reuse environments under GDPR governance; treat rights enablement as a first-class product obligation; and apply robust transfer/access safeguards even for DGA-sensitive non-personal data.

In other words, the DGA does not replace the GDPR; it operationalizes data sharing channels that must remain intelligible to the GDPR’s logic whenever personal data is in play.

Online Privacy Compliance Made Easy

Captain Compliance makes it easy to develop, oversee, and expand your privacy program. Book a demo or start a trial now.