Regulators in both France and the United Kingdom have escalated enforcement actions against platforms and technologies that produce or host nonconsensual deepfake content. The moves mark a turning point in how European jurisdictions are using existing criminal law, data protection regulation, and new offence frameworks to hold technology companies and individuals accountable for the dissemination of AI-generated deepfakes, especially sexually explicit and otherwise harmful imagery. Scrutiny has fallen squarely on Elon Musk’s social media platform X and its AI tool Grok, which have drawn intense criticism for allegedly enabling the creation and spread of nonconsensual deepfake content involving adults and minors.
French Prosecutors Raid X Offices and Summon Leadership
French prosecutors, working with the Paris cybercrime unit and Europol, executed a search of X’s Paris offices as part of a widening criminal investigation into the platform’s operations. The probe, opened in early 2025, originally targeted alleged manipulation of X’s algorithms and mishandling of user data but has since expanded to cover serious allegations involving the distribution of sexually explicit AI-generated deepfakes, Holocaust denial content, and tampering with automated data systems. X owner Elon Musk and former chief executive Linda Yaccarino have both been summoned for voluntary interviews in April 2026 as part of the inquiry.
The French action underscores how national law enforcement is increasingly willing to apply criminal law to online platforms when nonconsensual and harmful AI-generated content proliferates at scale. French criminal law already criminalizes the sharing of intimate images without consent, and recent legislative developments have clarified and extended these prohibitions to explicitly cover AI-generated deepfakes. The relevant provisions in French law impose significant penalties, including potential imprisonment and fines, especially when content is disseminated via online public communication services.
UK Launches Parallel Probes Under Data Protection and Safety Laws
At the same time, the United Kingdom is pursuing multiple avenues of scrutiny. The UK’s Information Commissioner’s Office (ICO) has opened a formal investigation into X and xAI following reports that Grok generated nonconsensual sexual deepfakes, including images depicting minors. These activities may amount to violations of the UK’s data protection regime, which imposes strict requirements on the lawful, fair, and transparent handling of personal data and treats the creation and dissemination of manipulated intimate imagery as a serious breach of individual privacy rights.
The UK’s media regulator, Ofcom, is conducting its own inquiry into whether the platform complied with its duties under the Online Safety Act, particularly in relation to harmful sexual content. Ofcom has authority to require platforms to take action against illegal or harmful material and to impose substantial fines for noncompliance. The British government also recently confirmed that producing, or requesting the creation of, sexually explicit deepfakes without consent is a criminal offence under updated UK law.
Why Regulators Are Targeting Deepfakes Now
Deepfake technology has evolved rapidly, making it increasingly easy for users to generate realistic AI-manipulated images and videos depicting individuals in compromising or intimate contexts without their consent. The scale of misuse, amplified by the large user bases of social platforms, has heightened concerns among privacy advocates, child safety experts, and policymakers. Reports indicate that AI tools such as Grok were used to produce thousands of sexually suggestive images per hour, many of which targeted private individuals, including women and children. Such activity not only inflicts severe harm on victims but also triggers potential criminal liability under laws governing child sexual abuse material, privacy, and hate speech.
France’s SREN law, enacted in 2024, explicitly extends the Penal Code’s prohibitions to cover nonconsensual deepfake content and imposes heightened penalties when sexually explicit versions are shared on public platforms. Under the law, creating or distributing such content without consent can lead to imprisonment and substantial fines, reflecting a broader European consensus that digital consent and identity protections must adapt to the capabilities of modern AI.
Industry Responses and Ongoing Legal Challenges
In response to regulatory pressure, X has modified certain features of Grok, limiting image editing capabilities that allowed “digital undressing” and restricting some functions to paying subscribers. However, regulators and policymakers have criticized these measures as insufficient to address the underlying risks or to fulfill legal obligations under data protection and safety laws. The European Commission has also launched a Digital Services Act investigation into X’s handling of harmful AI-generated content, adding a transnational enforcement dimension alongside national probes.
Industry observers note that regulatory action against deepfake producers and platforms hosting them could expand rapidly. The UK’s criminalization of nonconsensual deepfake abuse and France’s enhanced criminal code provisions set legal precedents in major markets. These moves may encourage other jurisdictions to adopt similar frameworks or to apply existing laws more aggressively where harmful AI outputs intersect with privacy, child protection, and online safety violations.
What This Means for Platforms and Users
The increased enforcement activity in France and the UK sends a clear message: platforms and AI developers cannot rely solely on reactive takedown processes or voluntary safety measures. Regulators are signaling that platforms may be held accountable under criminal, privacy, and online safety laws if they fail to prevent the creation and dissemination of nonconsensual deepfake content. For technology companies operating internationally, this means robust compliance frameworks are necessary not just to manage legal risk but to safeguard users and uphold societal trust in generative AI technologies.
As investigations continue and laws evolve, jurisdictions across Europe and beyond are likely to sharpen their legal tools to confront deepfake harms. The cases involving X and Grok may well become landmark regulatory tests for how the law treats AI-generated content and platform responsibility in the age of generative AI.