Articles

Germany Plans New Measures to Curb Harmful AI Image Manipulation

Germany’s justice ministry said on Friday it is preparing measures that would allow authorities to more effectively combat the use of artificial intelligence to manipulate images in ways that violate personal rights.

The move comes amid growing scrutiny in Europe over AI-generated imagery, including investigations into Grok, the built-in chatbot on X owned by billionaire Elon Musk. Grok has faced criticism for its so-called “spicy mode,” which allows users to generate sexually explicit images.

A Reuters investigation found that the chatbot’s image generation tools were being used to create images of women and children in minimal clothing, often without the consent of the individuals depicted. Germany’s media minister earlier this week urged the European Commission to take legal action to halt what he described as the “industrialisation of sexual harassment” on X.

Speaking at a regular government press conference, justice ministry spokesperson Anna-Lena Beckfeld said the government was preparing to address the issue through domestic legal channels.

“It is unacceptable that manipulation on a large scale is being used for systematic violations of personal rights,” Beckfeld said. “We therefore want to ensure that criminal law can be used more effectively to combat this.”

She said the ministry is working on tighter regulation of deepfakes and plans to introduce legislation targeting digital violence, aimed at better supporting victims. The goal, she added, is to make it easier for individuals to take direct action against violations of their rights online.

Beckfeld said concrete proposals would be presented in the near future but declined to provide further details at this stage.

After initially dismissing concerns over Grok’s image-generation features, xAI has since restricted the function to paid subscribers. Musk said last week that anyone using the chatbot to create illegal content would face the same consequences as if they had uploaded such material directly.

Italy Closes Probe Into DeepSeek After Commitments to Warn Users of AI “Hallucination” Risks

Italy’s antitrust authority has closed an investigation into Chinese artificial intelligence company DeepSeek after the firm agreed to binding commitments aimed at improving warnings about the risk of AI-generated false information.

The probe, launched last June by Italy’s antitrust and consumer protection authority AGCM, focused on allegations that DeepSeek failed to adequately inform users that its AI system could generate inaccurate, misleading, or fabricated content — commonly referred to as “hallucinations.”

The decision to end the investigation was announced in the AGCM’s weekly bulletin published on Monday. According to the regulator, the commitments were submitted by Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, which jointly own and operate the DeepSeek platform.

The agreed measures include clearer and more prominent disclosures explaining the risk that, based on user inputs, the AI model may produce outputs containing incorrect or invented information. The AGCM said the new disclosures are designed to be more transparent, intelligible, and immediately visible to users.

“The commitments presented by DeepSeek make disclosures about the risk of hallucinations easier, more transparent, intelligible, and immediate,” the authority said in its bulletin.

The case highlights growing regulatory scrutiny across Europe over how AI systems communicate their limitations to users, particularly as generative AI tools become more widely adopted in consumer-facing applications.

Denmark Moves to Ban AI Deepfakes, Giving Citizens Copyright Over Their Own Likeness

Denmark is preparing to pass one of the world’s toughest laws against AI-generated deepfakes, aiming to give citizens new legal rights over their appearance, voice, and likeness online. The bill — expected to pass early next year — would make it illegal to share or distribute deepfake content without a person’s consent, extending copyright protections to individuals.

The proposed legislation follows growing concern about the rapid spread of deepfakes — hyper-realistic AI-generated videos, images, or audio that impersonate real people. Danish Culture Minister Jakob Engel-Schmidt said the move is essential to protect both private citizens and democracy itself, warning that political deepfakes could “undermine our democracy” by spreading falsehoods.

Under the new law, Danes would be able to demand takedowns of AI-generated content that misuses their likeness, while parody and satire would remain protected. Major tech platforms that fail to remove harmful deepfakes could face significant fines, although individuals are unlikely to face criminal penalties.

Experts have praised the move as a landmark step. “When people ask, ‘what can I do to protect myself from being deepfaked,’ the answer right now is basically nothing,” said Henry Ajder, a generative AI researcher and founder of Latent Space Advisory. “Denmark is one of the first governments to change that.”

The Danish proposal mirrors similar measures abroad. The United States recently criminalized the sharing of non-consensual intimate deepfakes, while South Korea introduced harsh penalties for deepfake pornography. Denmark’s initiative could now influence European Union policy, with France and Ireland reportedly showing interest in adopting similar laws.

For victims like Marie Watson, a Danish video game streamer whose photos were digitally altered and shared online, the legislation comes too late to undo the damage but offers hope for future protection. “When it’s online, you’re done. You can’t do anything,” she said. “It’s out of your control.”