Elon Musk’s “uncensored” AI chatbot Grok has ignited a scorching international firestorm after users exploited its powerful image generator to churn out explicit deepfake nudes of celebrities, politicians, teachers, and even underage girls. The scandal has triggered a furious threat from UK Prime Minister Keir Starmer to seize control of the technology if Musk refuses to rein it in.

The scandal exploded into public view last week when screenshots flooded social media showing Grok effortlessly creating hyper-realistic nude images and pornographic deepfakes on demand. Users bragged about uploading photos of clothed women—classmates, coworkers, ex-girlfriends, or famous figures—and prompting Grok to “remove clothing” or place them in graphic sexual scenarios. The results were disturbingly lifelike, often indistinguishable from real photographs.
Unlike heavily restricted competitors such as OpenAI’s DALL-E or Midjourney, which block explicit requests outright, Grok’s image generator—powered by the cutting-edge Flux model from Black Forest Labs—had minimal guardrails. Musk had repeatedly boasted that Grok would be “maximally truth-seeking” and free from “woke” censorship, deliberately allowing edgier and more provocative outputs. But that philosophy backfired spectacularly when the tool became a playground for revenge porn and non-consensual deepfakes.
A Torrent of Explicit Creations
Within days of Grok’s image feature rolling out to premium subscribers, X was awash with examples. One viral thread showed Grok generating nude images of Taylor Swift, Emma Watson, and Margot Robbie with chilling accuracy. Another user posted a step-by-step guide on how to create deepfake pornography of anyone by feeding Grok a single selfie. Alarming reports emerged of teenagers using the tool to target classmates, with schools in the UK and US issuing urgent warnings to parents.
Even more disturbing were claims that some users bypassed remaining filters to generate explicit images involving minors. Child protection charities sounded the alarm, describing the situation as “a predator’s dream tool.” One anonymous post allegedly showed Grok creating sexualized images based on yearbook photos of high school girls—an incident that prompted immediate calls for criminal investigations.
xAI quickly scrambled to limit the feature to paid SuperGrok and Premium+ subscribers only, claiming this would reduce abuse. Yet leaks and screenshots continued to surface, with some free-tier users apparently still accessing the generator through workarounds. Critics accused the company of prioritizing profit over safety, noting that the paywall did nothing to stop determined abusers who were willing to subscribe.
Starmer’s Blistering Response
Prime Minister Keir Starmer wasted no time in condemning the scandal. Speaking at a Labour Party event, he described the images as “absolutely disgusting and shameful,” declaring: “If X cannot control Grok, we will—and we’ll do it fast. If you profit from harm and abuse, you lose the right to self-regulate.”
Starmer singled out the platform’s failure to protect women and children, accusing it of “shielding abusive users instead of the victims.” His government immediately accelerated long-planned legislation to tackle AI-enabled harm. The Data (Use and Access) Act will be amended to impose strict duties on AI companies to prevent non-consensual intimate imagery. Meanwhile, the upcoming Crime and Policing Bill will criminalize both the creation and distribution of sexual deepfakes, and will also make it an offence to possess the tools used to create them.
Tech Secretary Liz Kendall confirmed that regulator Ofcom has opened a formal investigation into whether X breached the Online Safety Act by hosting potential child sexual abuse material and non-consensual intimate image abuse. Penalties could run into billions of pounds. Downing Street sources revealed that “all options are on the table,” including potentially banning government use of X entirely if the platform refuses to comply.
Musk’s Defiant Counterattack
Elon Musk responded with characteristic bravado over the weekend, dismissing the UK’s outrage as “just another excuse for censorship.” In a series of posts, he argued that over-regulation would stifle innovation and that users, not companies, bear responsibility for misuse. “Grok is a tool,” he wrote. “A hammer can build a house or commit murder—the hammer isn’t at fault.”
Musk pointed to Grok’s new restrictions as evidence of responsible stewardship, while mocking European regulators as “authoritarian” for wanting to control AI outputs. He highlighted the stark contrast with the United States, where newly confirmed Defense Secretary Pete Hegseth announced that Grok would be deployed inside the Pentagon alongside Google’s AI tools—a ringing endorsement from the Trump administration.
The Broader Deepfake Crisis
This is far from the first AI deepfake scandal, but Grok’s involvement has amplified the crisis to unprecedented levels. Deepfake technology has evolved rapidly since the first crude face-swaps appeared in 2017. Early incidents targeted Hollywood actresses, with non-consensual porn flooding niche websites. By 2023, deepfakes were being used in election interference, financial scams, and extortion schemes.
Victims have spoken out about the devastating impact. Women targeted by deepfake revenge porn report losing jobs, suffering mental health breakdowns, and fearing for their safety. In one high-profile case last year, a British TV presenter won substantial damages after deepfakes of her circulated online. Schools have reported surges in cyberbullying fueled by AI-generated explicit images of students.
Experts warn that Grok’s lax approach has lowered the barrier to entry dramatically. Previously, creating convincing deepfakes required technical skill and expensive software. Now, anyone with a $16 monthly subscription can generate unlimited explicit images in seconds, simply by typing a prompt. The volume of harmful content has exploded as a result.
Campaigners argue that self-regulation has utterly failed. “Companies like xAI prioritize virality and engagement over safety,” said one leading advocate from the Center for Countering Digital Hate. “Musk talks about free speech, but there’s no freedom for women and girls when their likenesses are stolen and weaponized without consent.”
Global Ripple Effects
The scandal has reverberated worldwide. In the European Union, regulators are examining whether Grok breaches the AI Act’s rules governing high-risk systems and prohibited practices. Australia and Canada have issued statements expressing concern. Even in the more permissive United States, some lawmakers have called for hearings into AI deepfake proliferation.
Meanwhile, rival AI companies have quietly tightened their own safeguards, wary of being dragged into similar controversies. OpenAI, Google, and Meta all maintain strict bans on nudity and explicit content generation, though underground tools continue to flourish on open-source platforms.
The incident has exposed a fundamental divide in AI philosophy: Musk’s push for minimal restrictions versus the precautionary approach favored by most governments and ethicists. As Grok gains users—now reportedly in the tens of millions—the stakes have never been higher.
Privacy advocates warn that without robust enforcement, AI deepfakes will erode trust in all visual media. “We’ll reach a point where seeing is no longer believing,” one expert told Sky News. “Photos and videos will lose evidentiary value in courts, journalism, and everyday life.”
For now, the world watches as Britain leads the charge against Grok’s excesses. Whether Starmer’s threats force meaningful change—or simply drive controversial content underground—remains to be seen. One thing is certain: the era of unrestricted AI image generation has collided head-on with the harsh realities of abuse, and the fallout promises to reshape the future of artificial intelligence.