Germany Plans New Measures to Curb Harmful AI Image Manipulation
Germany’s justice ministry said on Friday that it was preparing measures to allow authorities to combat more effectively the use of artificial intelligence to manipulate images in ways that violate personal rights.
The move comes amid growing scrutiny in Europe over AI-generated imagery, including investigations into Grok, the built-in chatbot on X owned by billionaire Elon Musk. Grok has faced criticism for its so-called “spicy mode,” which allows users to generate sexually explicit images.
A Reuters investigation found that the chatbot’s image generation tools were being used to create images of women and children in minimal clothing, often without the consent of the individuals depicted. Germany’s media minister earlier this week urged the European Commission to take legal action to halt what he described as the “industrialisation of sexual harassment” on X.
Speaking at a regular government press conference, justice ministry spokesperson Anna-Lena Beckfeld said the government was preparing to address the issue through domestic legal channels.
“It is unacceptable that manipulation on a large scale is being used for systematic violations of personal rights,” Beckfeld said. “We therefore want to ensure that criminal law can be used more effectively to combat this.”
She said the ministry was working on tighter regulation of deepfakes and planned to introduce legislation targeting digital violence, aimed at better supporting victims. The goal, she added, is to make it easier for individuals to take direct action against violations of their rights online.
Beckfeld said concrete proposals would be presented in the near future but declined to provide further details at this stage.
After initially dismissing concerns over Grok’s image-generation features, xAI has since restricted the function to paid subscribers. Musk said last week that anyone using the chatbot to create illegal content would face the same consequences as if they had uploaded such material directly.