
California AG Orders Musk’s xAI to Stop Generating Sexual Deepfake Images

California Attorney General Rob Bonta has sent a cease-and-desist letter to xAI, demanding that the company immediately halt the creation and distribution of non-consensual sexual images generated by its AI chatbot, Grok.

“I fully expect xAI to immediately comply,” Bonta said in a statement on Friday.

The action follows a growing global backlash against Grok, which has allowed users to create and share sexualized images of women and minors. Authorities in multiple countries have moved to investigate or restrict the tool over concerns about illegal and harmful content.

Bonta’s office said it opened a formal investigation on Wednesday into the creation and spread of non-consensual, sexually explicit material produced using Grok. The probe adds regulatory pressure on xAI, which is owned by billionaire entrepreneur Elon Musk.

xAI said late on Wednesday that it had introduced new restrictions limiting image-editing capabilities for all Grok users, though regulators say concerns remain. The company did not respond to a Reuters request for comment on the cease-and-desist letter.

International scrutiny has intensified in parallel. Authorities in Japan, Canada and Britain have opened probes into Grok, while Malaysia and Indonesia have temporarily blocked access to the chatbot over the generation of explicit images.

California’s move underscores a broader shift by regulators toward holding AI developers accountable for how generative tools are used—and misused—particularly when it comes to non-consensual and sexualized content. The case could set an important precedent for how aggressively governments intervene as generative AI systems become more powerful and widely deployed.

Indonesia temporarily blocks access to Grok over sexualised images

Indonesia has temporarily blocked access to Grok, the artificial intelligence chatbot developed by xAI, citing concerns over the risk of AI-generated pornographic and sexualised content. The decision makes Indonesia the first country to formally block access to the tool.

The move comes amid growing international backlash, with governments and regulators across Europe and Asia condemning Grok over its role in generating and spreading sexualised images online, including non-consensual content.

xAI said on Thursday that it had begun restricting Grok’s image generation and editing features to paying subscribers, after safeguard failures allowed the production of sexualised outputs, including depictions involving minors. The company said the changes were part of efforts to tighten controls and prevent misuse.

Indonesia’s Communications and Digital Minister Meutya Hafid said the government viewed non-consensual sexual deepfakes as a serious violation of human rights and digital safety.
“The government views the practice of non-consensual sexual deepfakes as a serious violation of human rights, dignity, and the security of citizens in the digital space,” Hafid said in a statement.

The ministry has also summoned officials from X, where Grok is embedded, to discuss the issue and clarify what measures are being taken to prevent further abuse.

Elon Musk said on X that anyone using Grok to create illegal content would face the same consequences as users who upload such material directly. When contacted by Reuters, xAI responded with what appeared to be an automated message stating, "Legacy Media Lies." X did not immediately respond to a separate request for comment.

Indonesia, home to the world’s largest Muslim population, enforces strict laws banning the online distribution of content deemed obscene. Authorities said the temporary block would remain in place while regulators assess compliance and safeguards surrounding the AI tool.

Musk’s AI bot Grok limits some image generation on X after backlash

Elon Musk’s artificial intelligence startup xAI has introduced new restrictions on the image generation and editing functions of its chatbot Grok on social media platform X, following widespread criticism over the creation and sharing of sexualized images.

Previously, users could prompt Grok directly on X to alter photos of people — including digitally removing clothing or placing individuals in sexualized poses — often without their consent. The chatbot would then automatically publish the altered images in replies on the platform.

On Friday, Grok informed users that its image generation and editing features were now limited to paying subscribers. The change appeared to stop Grok from generating and automatically posting such images in response to public posts or comments on X.

However, users could still create sexualized images by interacting with Grok through its dedicated tab within X and then manually posting the images themselves. The standalone Grok app, which operates separately from X, was also still allowing image generation without a subscription.

When contacted by Reuters for comment, xAI responded with an automated message stating “Legacy Media Lies.” X did not immediately respond to requests for comment. Musk said last week that anyone using Grok to generate illegal content would face the same consequences as if they had uploaded such material directly.

In a test conducted by a Reuters reporter on Friday, Grok declined a request to alter an image, replying that the image editing feature was only available to paying subscribers.

The European Commission said the restrictions did not address its core concerns, stressing that limiting access to subscribers does not resolve the underlying issue. A Commission spokesperson said regulators did not want to see such images at all, regardless of whether they were generated by paid or unpaid users.

Other governments and regulators have also condemned the explicit content generated by Grok, with some launching investigations into potential legal violations. Germany’s media minister Wolfram Weimer described the wave of semi-nude images as the “industrialisation of sexual harassment,” adding to mounting international pressure on X and xAI to demonstrate stronger safeguards against abuse.