At its core, the code tackles a growing problem: the explosion of realistic AI-created or manipulated videos, audio clips, images, and text that can mislead viewers. Think election interference, revenge porn, or celebrity impersonations gone viral. While outright illegal deepfakes are handled under other laws, this code targets the lawful ones—the memes, parodies, marketing materials, or artistic experiments—that still risk confusing or harming people if not properly disclosed.
The code isn’t mandatory, but signing on to it offers companies a safe harbor of sorts. Follow its guidelines, and you’re more likely to comply with the binding transparency rules in the AI Act. Ignore it, and you might face scrutiny later. As the draft stands now (with finalization expected around May or June 2026), it provides detailed, practical steps that go beyond vague principles. Let’s break it down.
Background: The EU AI Act and the Need for Clear Rules on Synthetic Content
The EU Artificial Intelligence Act, passed in 2024 and rolling out in phases, is the world’s first comprehensive AI law. It categorizes AI systems by risk level and imposes strict requirements on high-risk applications. For generative AI—the kind that spits out deepfakes—one key obligation falls under Article 50: transparency for AI-generated or manipulated content.
Specifically:
- Users must be informed when they’re interacting with or viewing synthetic media.
- Providers of general-purpose AI models (think OpenAI, Google, or Mistral) have to build in technical features that make detection possible.
- Deployers—the businesses or individuals actually using the AI to create and share content—bear the brunt of disclosure duties.
But the Act itself is high-level. It doesn’t spell out how to label a deepfake video or embed metadata in an AI-generated song. That’s where the Code of Practice comes in. Developed through stakeholder consultations, it translates legal obligations into actionable standards. It also draws on earlier efforts, like the strengthened Code of Practice on Disinformation under the Digital Services Act (DSA).
The DSA already forces large platforms to moderate illegal content and disinformation, including deepfakes used for harm. Non-consensual intimate images, for instance, must be taken down quickly. But for content that’s technically legal yet potentially deceptive, labeling is the preferred approach. The new code builds on that distinction.
Who Has to Do What? Providers vs. Deployers
One of the code’s strengths is its clarity on roles. Not everyone in the AI chain has the same responsibilities.
Responsibilities for AI Providers
These are the companies developing or marketing the underlying models:
- Implement machine-readable marking techniques, such as watermarking or metadata embedding.
- Ensure generated content is detectable by automated tools.
- Support interoperability so detection works across different systems.
- Regularly test and improve these features as technology evolves.
Providers aren’t expected to police every output—that would be impossible at scale—but they must give downstream users the tools to handle transparency properly.
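To make that concrete, here is a minimal sketch of what machine-readable marking could look like in practice, using Python and Pillow to write a provenance record into a PNG's metadata. The key name, payload fields, and scheme label are illustrative placeholders, not the marking standard the final code will specify.

```python
# A minimal sketch of machine-readable marking via image metadata, using
# Pillow's PNG text chunks. The key name and JSON payload are illustrative
# placeholders, not the marking scheme the final code will mandate.
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(input_path: str, output_path: str, model_name: str) -> None:
    """Embed a simple provenance record into a PNG's metadata."""
    provenance = {
        "ai_generated": True,
        "generator": model_name,          # the model or service that produced the image
        "marking_scheme": "example-v0",   # placeholder; real deployments would follow an agreed standard
    }
    image = Image.open(input_path)
    metadata = PngInfo()
    metadata.add_text("ai_provenance", json.dumps(provenance))
    image.save(output_path, pnginfo=metadata)

# Usage (hypothetical paths):
# mark_as_ai_generated("generated.png", "generated_marked.png", "example-image-model")
```

Metadata like this is trivially stripped when an image is re-encoded or screenshotted, which is why the code treats it as one layer alongside more robust watermarking and interoperable detection.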
Responsibilities for Deployers
This group includes anyone using the AI to create and distribute content (app developers, marketers, creators):
- Identify when content is AI-generated or significantly altered.
- Apply visible or audible disclosures to inform audiences.
- Use a mix of automated detection and human review, especially for nuanced cases.
- Consider context: audience vulnerability, distribution channel, and potential impact.
The code emphasizes that disclosure should happen early—ideally before or at the moment of first exposure—and be hard to miss.
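As a rough illustration of how a deployer might wire those duties together, the sketch below triages content using a hypothetical detector score, a self-declaration flag, and a couple of context signals. The thresholds, field names, and routing labels are invented for illustration; the code itself does not prescribe specific numbers.

```python
# A sketch of a deployer-side triage step, assuming a hypothetical detector
# score and review queue. Thresholds, field names, and helpers are invented
# for illustration only.
from dataclasses import dataclass

@dataclass
class ContentItem:
    detector_score: float      # 0.0-1.0 confidence from an automated classifier (hypothetical)
    declared_ai: bool          # creator self-declared the content as AI-generated
    channel: str               # e.g. "news", "ads", "entertainment"
    vulnerable_audience: bool  # e.g. targeted at minors

def triage(item: ContentItem) -> str:
    """Decide whether to label automatically, escalate to a human, or publish as-is."""
    if item.declared_ai or item.detector_score >= 0.9:
        return "apply_disclosure"      # label before or at first exposure
    if item.detector_score >= 0.5 or item.vulnerable_audience or item.channel == "news":
        return "human_review"          # nuanced or higher-impact cases get a reviewer
    return "publish"                   # low-risk, no detection signal

# Example:
# triage(ContentItem(detector_score=0.72, declared_ai=False, channel="news", vulnerable_audience=False))
# -> "human_review"
```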
Practical Labeling Requirements: Modality by Modality
The draft code gets remarkably specific about how to label different types of content. It proposes a standardized approach built around a common visual icon (details to be finalized) plus tailored disclaimers. Here’s a breakdown:
For Video Content
- Real-time video (e.g., live deepfake calls or streams): A persistent but non-obtrusive icon on screen, plus an initial verbal or text disclaimer.
- Pre-recorded video: Opening disclaimer, ongoing icon, and closing credits acknowledging AI involvement.
For Images and Static Media
- A fixed, clearly visible icon placed prominently (not hidden behind clicks or hovers).
- Metadata embedding where possible for downstream detection.
For Audio-Only Content
- Audible disclaimers at the start, repeated periodically for longer pieces.
- If there’s any visual element (like a podcast waveform), combine with on-screen cues.
For Multimodal or Mixed Content
- Layer multiple methods: visible icons, audio warnings, and text overlays as appropriate.
- Ensure the disclosure is “distinguishable and timely.”
These rules aim for consistency across platforms while allowing flexibility for technical constraints.
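For static images, a visible disclosure can be as simple as stamping a fixed banner before publication. The sketch below uses Pillow to do that; the wording and placement are stand-ins, since the common icon the code envisions has not been finalized.

```python
# A sketch of a visible, fixed disclosure on a static image, drawn with Pillow.
# The wording and placement are placeholders; the final code is expected to
# specify a common icon rather than free-form text.
from PIL import Image, ImageDraw

def add_visible_disclosure(input_path: str, output_path: str,
                           text: str = "AI-generated content") -> None:
    """Stamp a simple banner in the corner of the image so the label cannot be missed."""
    image = Image.open(input_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    # Measure the label and draw a contrasting background box behind it.
    left, top, right, bottom = draw.textbbox((0, 0), text)
    padding = 8
    draw.rectangle((0, 0, right - left + 2 * padding, bottom - top + 2 * padding), fill="black")
    draw.text((padding, padding), text, fill="white")
    image.save(output_path)

# Usage (hypothetical paths):
# add_visible_disclosure("generated.png", "generated_labeled.png")
```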
Handling Exceptions: Art, Satire, and Creativity
Not every deepfake needs heavy-handed labeling. The code recognizes that overzealous disclosures could stifle expression. For artistic, satirical, fictional, or creative works:
- Disclosures should be minimal and non-intrusive.
- Avoid placements that ruin immersion or artistic intent.
- Still protect third-party rights—e.g., if a deepfake depicts a real person without consent in a harmful way, stronger safeguards apply.
Examples might include:
- A satirical political video with a subtle end-credit note rather than a flashing banner.
- An AI-assisted film that discloses AI generation in its credits, much like traditional VFX acknowledgments.
The goal is balance: inform viewers without killing the joke or the magic.
Technical Challenges and Detection Strategies
Labeling sounds straightforward, but the code acknowledges real hurdles:
- Technical feasibility: Watermarking works well for images but struggles with heavily edited video or compressed social media uploads.
- Evasion risks: Bad actors can strip metadata or alter content to remove marks.
- Context matters: The same deepfake might need different treatment if shared in a news context versus a meme group.
Signatories must combine tools:
- Automated detection classifiers.
- Human oversight for edge cases.
- Regular audits and updates.
The code also pushes for industry-wide interoperability standards so one company’s watermark can be read by another’s detector.
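On the detection side, the first automated check is often just reading any embedded mark back out. The sketch below looks for the illustrative provenance record from the earlier example and falls back to classifiers and human review when nothing is found; absence of a mark is not evidence that content is authentic, precisely because metadata is easy to strip.

```python
# A sketch of the detection side: check for the illustrative provenance record
# embedded earlier, and fall back to classifier/human review when it is absent.
import json
from typing import Optional
from PIL import Image

def check_provenance(path: str) -> Optional[dict]:
    """Return the embedded provenance record if present, else None."""
    info = Image.open(path).info            # PNG text chunks surface here
    raw = info.get("ai_provenance")
    if raw is None:
        return None                         # no mark: route to classifiers and, if needed, human review
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return None                         # malformed mark: treat as unmarked

# Example:
# record = check_provenance("generated_marked.png")
# if record and record.get("ai_generated"):
#     ...apply the visible disclosure before publishing...
```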
Broader Implications and Concerns
This code doesn’t exist in a vacuum. It’s part of a layered EU approach:
- AI Act: Regulates the technology itself.
- DSA: Holds platforms accountable for hosting harmful content.
- GDPR: Protects personal data used in training or depicted in outputs.
Together, they create a robust framework, but gaps remain. Illegal deepfakes are still handled through rapid takedowns rather than labeling. And emerging threats, like increasingly sophisticated voice clones or real-time manipulation, are testing the limits of current tools.
Some critics worry about recent political shifts. The European Commission’s “Omnibus” package, aimed at simplifying regulations, could delay parts of the AI Act or loosen rules on training data. That might flood the market with more powerful generative models, making deepfakes harder to spot even as labeling improves.
On the positive side, voluntary adoption could set global precedents. Companies operating worldwide might implement these standards everywhere to avoid fragmented compliance. It also encourages innovation in detection tech—startups building better watermarking or provenance tools could thrive.
For everyday users, the biggest win would be reduced confusion. Imagine scrolling social media and reliably spotting AI satire versus real footage during an election. Or knowing that viral celebrity endorsement video is synthetic before sharing it.
Timeline and Next Steps
The road ahead is clear but tight:
- Current draft released December 2025.
- Stakeholder feedback period through early 2026.
- Final code expected May–June 2026.
- AI Act transparency obligations binding from August 2026.
- Ongoing updates as technology and threats evolve.
Companies are already preparing. Major providers have participated in drafting, suggesting buy-in from industry giants.
A Step Forward, But Not the Finish Line
The EU’s Code of Practice on deepfake labeling represents pragmatic regulation at its best—detailed enough to guide action, flexible enough to adapt. It won’t eliminate deception overnight, but it raises the bar for responsible AI use.
For creators and platforms, it’s a call to build transparency in from the start. For regulators, it’s proof that self-regulatory codes can work when backed by real law. And for the public, it’s a small but meaningful shield against a world increasingly filled with convincing fakes.
As generative AI races ahead, tools like this code remind us that technology governance isn’t about stopping progress—it’s about steering it toward trust and accountability. Whether the final version delivers on that promise will depend on how seriously companies take implementation and how quickly detection tech catches up to creation tech.
One thing is certain: deepfakes aren’t going away. But with thoughtful rules like these, we might at least know when we’re looking at one.