The European Commission has published the first draft of a Code of Practice on Transparency of AI-Generated Content, marking an important milestone in the implementation of the EU Artificial Intelligence Act. The draft is intended to translate the AI Act’s transparency obligations into practical guidance for organisations that develop, deploy, or distribute generative AI systems.

Rather than introducing new legal duties, the Code of Practice is designed to clarify how existing obligations under Article 50 of the AI Act can be met in real-world settings. It focuses on one of the most visible and politically sensitive aspects of artificial intelligence: the ability of AI systems to generate content that appears indistinguishable from material created by humans.

The draft reflects the European Union’s broader policy objective of strengthening trust in digital technologies while preserving space for innovation. As generative AI becomes embedded in news, entertainment, marketing, education, and public discourse, transparency is increasingly viewed as a foundational safeguard rather than an optional best practice.

Why the Code of Practice Is Being Developed

The AI Act establishes binding transparency obligations for certain categories of AI systems, particularly those that generate or manipulate content that could mislead users. These obligations are aimed at addressing risks such as disinformation, impersonation, and erosion of trust in digital media.

While the legal requirements are set out in the regulation itself, policymakers have acknowledged that the technical and operational details of compliance are complex. Questions remain about how AI-generated content should be marked, how disclosures should be presented to users, and how responsibilities should be shared between AI developers and those who deploy AI outputs.

The Code of Practice responds to this uncertainty. It is intended to act as a practical bridge between the legal text of the AI Act and the day-to-day decisions made by engineers, product teams, publishers, and platform operators. By offering non-binding guidance developed through a multi-stakeholder process, the Commission aims to encourage early alignment with the AI Act’s transparency objectives ahead of full enforcement.

Scope and Structure of the Draft Code

The first draft of the Code of Practice is structured around the transparency obligations contained in Article 50(2) and Article 50(4) of the AI Act. These provisions require that certain AI-generated or AI-manipulated content be identifiable as such, either through technical measures or through clear disclosures to end users.

The draft distinguishes between two primary groups of actors:

  • Providers, meaning entities that develop or place generative AI systems on the market
  • Deployers, meaning organisations that use, publish, or distribute AI-generated content

This separation reflects the AI Act’s layered approach to accountability, recognising that transparency cannot be achieved solely at the point of creation or solely at the point of dissemination.

Transparency Expectations for Providers of Generative AI

For providers of generative AI systems, the draft Code focuses on ensuring that AI-generated outputs can be technically identified as such. This obligation applies across content types, including text, images, audio, and video.

The draft highlights several implementation approaches, including the use of machine-readable markers or metadata that signal the artificial origin of content. These markers are intended to support automated detection tools and downstream transparency efforts by deployers and platforms.
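To make the idea of a machine-readable marker concrete, the sketch below shows one way a provider might bind an "AI-generated" claim to a piece of content so that downstream tools can verify it. The field names, signing scheme, and key handling are purely illustrative and are not prescribed by the draft Code; production systems would more likely adopt an interoperable provenance standard such as C2PA metadata.

```python
import base64
import hashlib
import hmac
import json

# Placeholder key for illustration only; a real provider would manage
# signing keys through proper key-management infrastructure.
SECRET_KEY = b"provider-signing-key"

def make_marker(content: bytes, model_id: str) -> dict:
    """Build a marker that binds a content hash to an origin claim."""
    claim = {
        "ai_generated": True,
        "model_id": model_id,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    claim["signature"] = base64.b64encode(signature).decode()
    return claim

def verify_marker(content: bytes, marker: dict) -> bool:
    """Check that the marker matches the content and carries a valid signature."""
    claim = {k: v for k, v in marker.items() if k != "signature"}
    if claim.get("content_sha256") != hashlib.sha256(content).hexdigest():
        return False  # content was altered after marking
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, base64.b64decode(marker["signature"]))

text = b"An AI-written summary of today's news."
marker = make_marker(text, model_id="example-model-v1")
print(verify_marker(text, marker))          # True for unmodified content
print(verify_marker(text + b"!", marker))   # False once the content changes
```

The second check also illustrates a limitation the draft itself raises: a hash-bound marker breaks as soon as the content is edited or re-encoded, which is why robustness across transformations remains an open technical challenge.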

At the same time, the draft acknowledges the limitations of current technology. It emphasises that transparency measures should be proportionate, technically feasible, and aligned with the state of the art. Providers are encouraged to balance robustness against manipulation with practical considerations such as cost, performance, and compatibility across formats.

The Code does not prescribe a single technical standard. Instead, it frames transparency as an evolving obligation that should adapt as detection and marking technologies mature.

Disclosure Obligations for Deployers of AI-Generated Content

Deployers of AI-generated content play a distinct role under the draft Code. Even where technical marking exists at the provider level, deployers are expected to ensure that users are clearly informed when they encounter synthetic or manipulated content in contexts where it could be mistaken for authentic material.

Particular attention is given to so-called deepfakes and other forms of synthetic media that depict real individuals, events, or statements. In these cases, the draft Code stresses the importance of visible, understandable disclosures that alert audiences to the artificial nature of the content.

For AI-generated text addressing matters of public interest, the draft introduces a nuanced approach. Disclosure may not be required where the content has undergone meaningful human editorial review and accountability processes. This reflects an attempt to balance transparency with established practices in journalism, publishing, and content moderation.

Overall, the draft positions deployers as the final point of contact with the public, responsible for contextualising AI-generated material in a way that preserves user understanding and trust.
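As a rough illustration of the deployer-side duty described above, the sketch below wraps a piece of synthetic media in markup carrying both a visible label and a machine-readable attribute. The wording, class names, and `data-ai-generated` attribute are hypothetical choices for this example, not requirements drawn from the draft Code.

```python
from html import escape

def with_ai_disclosure(body_html: str,
                       label: str = "This content was generated by AI") -> str:
    """Return HTML showing a visible disclosure caption above the content,
    plus a machine-readable attribute for automated tooling."""
    return (
        '<figure data-ai-generated="true" class="ai-content">\n'
        f'  <figcaption class="ai-disclosure">{escape(label)}</figcaption>\n'
        f"  {body_html}\n"
        "</figure>"
    )

snippet = with_ai_disclosure('<img src="synthetic.jpg" alt="AI-generated scene">')
print(snippet)
```

Pairing the human-readable caption with a machine-readable attribute mirrors the draft's two-layer approach: one signal for audiences, one for platforms and detection tools.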

How the Code Fits Within the AI Act Framework

The Code of Practice is voluntary, but it is closely tied to binding legal obligations. Organisations that align their practices with the Code may find it easier to demonstrate good-faith compliance with Article 50 of the AI Act once enforcement begins.

The draft also anticipates future guidance from the European Commission and the European AI Office, including interpretive documents on transparency, definitions, and enforcement priorities. Together, these instruments are expected to form a coherent compliance ecosystem that supports consistent application of the AI Act across Member States.

Importantly, the Code does not operate in isolation. Transparency obligations under the AI Act intersect with existing EU legal frameworks, including data protection, consumer protection, and audiovisual media rules. Organisations will need to consider how AI transparency measures interact with these parallel regimes.

Challenges Highlighted by the Draft

The draft Code openly acknowledges several unresolved challenges. One is the technical difficulty of ensuring that markers remain intact as content is copied, edited, or transformed across platforms. Another is the risk that overly rigid labelling requirements could be circumvented or misunderstood by users.

Stakeholders involved in the drafting process have also raised concerns about fragmentation if transparency approaches diverge across jurisdictions or sectors. The voluntary nature of the Code is intended to mitigate this risk by encouraging convergence around shared practices rather than rigid mandates.

The draft reflects an understanding that transparency is not a binary concept. Effective implementation will depend on context, audience expectations, and the evolving capabilities of AI systems themselves.

Timeline for the Code of Practice

The first draft of the Code of Practice is not the final word. The European AI Office will collect feedback from stakeholders and continue refining the text through additional drafts in 2026. A final version is expected ahead of the AI Act’s transparency obligations becoming fully applicable in August 2026.

During this period, organisations are encouraged to assess their current practices, identify gaps, and begin aligning internal processes with the principles outlined in the draft. Early engagement may reduce friction later, particularly as supervisory authorities begin to develop enforcement expectations.

Generative AI Compliance in the EU

The first draft Code of Practice on Transparency of AI-Generated Content represents a foundational step in translating the AI Act’s transparency requirements into workable guidance. By addressing both technical marking and user-facing disclosure, it recognises that transparency is a shared responsibility across the AI value chain.

As generative AI continues to reshape how information is created and consumed, transparency will play a central role in maintaining public trust and democratic resilience. The draft Code offers an early blueprint for how that goal can be pursued in practice, while leaving room for adaptation as technology and societal expectations evolve.