Efforts by European regulators to rein in artificial intelligence–generated deepfakes scored a rare early victory this week after xAI moved to curb the creation of sexualized images by its Grok chatbot. Yet officials and legal experts say the wider regulatory fight against AI-driven abuse is far from settled.
xAI said late on Wednesday it had restricted image-editing features for Grok users after the chatbot produced thousands of sexualized images of women and minors, triggering a global backlash. The move marked a reversal for billionaire owner Elon Musk, who had initially downplayed the controversy.
Regulators say the episode underlines how difficult it is to police AI tools that make the creation of explicit or degrading content fast, cheap and scalable. It is the latest flashpoint between Musk and European authorities, following earlier disputes over election interference, content moderation and free speech on X.
Legal uncertainty remains widespread. Many governments are still refining rules on what constitutes nudity, how consent should be defined in AI-generated content, and whether responsibility lies with users or platforms. “It’s really a grey zone with regards to the creation of nude images,” said Ängla Pändel, a data protection and privacy lawyer at Mannheimer Swartling.
Britain’s media regulator Ofcom welcomed xAI’s decision but stressed that its probe into Grok is not closed. “Our formal investigation remains ongoing,” a spokesperson said, adding that the regulator is seeking answers on what went wrong and how safeguards will be strengthened.

PRESSURE FOR STRONGER ENFORCEMENT
Earlier this month, Grok generated hyper-realistic images of women on X that appeared to digitally “undress” them or place them in degrading scenarios, including some involving minors. Until midweek, Reuters testing found the chatbot could still generate sexualized images privately on request. xAI said it is now blocking such outputs in “jurisdictions where it’s illegal,” without specifying which ones.
Malaysia and Indonesia have imposed temporary bans on Grok, while regulators in the UK, France and Italy have opened probes. At the EU level, lawmakers say tougher enforcement is still needed. Christian Democrat MEP Nina Carberry called xAI’s changes a “positive step” but said stronger action under the Digital Services Act is required to stop platforms from sexualizing women and children. A European Commission spokesperson said the bloc would use the DSA’s full enforcement powers if the changes prove ineffective.
Under the UK’s Online Safety Act, sharing intimate images without consent—including AI-generated deepfakes—is a priority offence, said Alexander Brown, a lawyer at Simmons & Simmons. Ofcom can fine companies up to 10% of global revenue or seek court orders to block services in severe cases.
For victims, however, legal remedies remain burdensome. “Taking platforms to court is a really difficult and heavy process,” said Anders Bergsten, another Mannheimer Swartling lawyer, pointing to the emotional toll on those affected.
Deepfakes predate today’s AI boom but were once confined to fringe corners of the internet. Grok’s integration with X gives them unprecedented reach, said U.S. cyber-harassment lawyer Carrie Goldberg. “The frictionless publishing capability enables the deepfakes to spread at scale,” she said.
The EU’s AI Act currently focuses on transparency rather than outright bans for adult deepfakes, while service suspension under the DSA is considered a last resort. Still, political pressure is mounting. UK Prime Minister Keir Starmer welcomed xAI’s move but warned that free speech does not extend to violating consent. “Young women’s images are not public property,” he said, adding that Britain is prepared to strengthen laws further if needed.