Articles

Grok AI Floods X With Sexualized Images, Raising Global Alarm

X’s built-in AI chatbot Grok has generated a wave of sexualized images of women — and in some cases minors — after users prompted the tool to digitally alter real photos, a Reuters investigation found.

One victim, Brazilian musician Julie Yukari, said Grok created near-nude images of her after users asked the bot to digitally remove her clothing from an innocuous photo. Similar incidents have appeared widely on X, with Reuters documenting dozens of successful requests to place women in highly revealing outfits. Reuters also identified several cases involving sexualized images of children.

The backlash has spread internationally. French ministers said the content was “manifestly illegal” and reported X to prosecutors and regulators, while India’s IT ministry warned the platform had failed to stop the generation of obscene material. U.S. regulators including the Federal Communications Commission and the Federal Trade Commission declined to comment.

Experts said the outcome was foreseeable. AI watchdogs warned last year that Grok’s image tools could easily be abused to create non-consensual deepfakes. “This was entirely predictable and avoidable,” said Dani Pinter of the National Center on Sexual Exploitation, blaming weak safeguards and content moderation.

X owner Elon Musk appeared to mock the controversy by responding with laughing emojis to AI-generated bikini images, including ones depicting himself. xAI, which develops Grok, previously dismissed reports of sexualized images of minors with the statement: “Legacy Media Lies.”

Denmark Moves to Ban AI Deepfakes, Giving Citizens Copyright Over Their Own Likeness

Denmark is preparing to pass one of the world’s toughest laws against AI-generated deepfakes, aiming to give citizens new legal rights over their appearance, voice, and likeness online. The bill — expected to pass early next year — would make it illegal to share or distribute deepfake content without a person’s consent, extending copyright protections to individuals.

The proposed legislation follows growing concern about the rapid spread of deepfakes — hyper-realistic AI-generated videos, images, or audio that impersonate real people. Danish Culture Minister Jakob Engel-Schmidt said the move is essential to protect both private citizens and democracy itself, warning that political deepfakes could “undermine our democracy” by spreading falsehoods.

Under the new law, Danes would be able to demand takedowns of AI-generated content that misuses their likeness, while parody and satire would remain protected. Major tech platforms that fail to remove harmful deepfakes could face significant fines, although individuals are unlikely to face criminal penalties.

Experts have praised the move as a landmark step. “When people ask, ‘what can I do to protect myself from being deepfaked,’ the answer right now is basically nothing,” said Henry Ajder, a generative AI researcher and founder of Latent Space Advisory. “Denmark is one of the first governments to change that.”

The Danish proposal mirrors similar measures abroad. The United States recently criminalized the sharing of non-consensual intimate deepfakes, while South Korea introduced harsh penalties for deepfake pornography. Denmark’s initiative could now influence European Union policy, with France and Ireland reportedly showing interest in adopting similar laws.

For victims like Marie Watson, a Danish video game streamer whose photos were digitally altered and shared online, the legislation comes too late to undo the damage but offers hope for future protection. “When it’s online, you’re done. You can’t do anything,” she said. “It’s out of your control.”

UN Report Calls for Stronger Measures to Detect and Combat AI-Driven Deepfakes

The United Nations’ International Telecommunication Union (ITU) has urged companies to adopt advanced tools to detect and eliminate misinformation and deepfake content, highlighting the growing threats these pose to elections and financial security. The call was made in a report released on Friday during the ITU’s “AI for Good Summit” in Geneva.

Deepfakes—AI-generated images, videos, and audio that convincingly mimic real people—are increasingly used to spread false information, the ITU warned. To tackle this, the report recommended robust standards for combating manipulated multimedia and urged social media platforms to implement digital verification tools that authenticate content before it is shared.

Bilel Jamoussi, head of the Study Groups Department in the ITU’s Standardization Bureau, noted that public trust in social media has dropped sharply because users struggle to distinguish genuine content from fakes. Generative AI’s ability to fabricate realistic multimedia makes combating deepfakes a particularly pressing challenge.

Leonard Rosenthol of Adobe, a leading digital editing software company that has been working on the deepfake problem since 2019, emphasized the need for content provenance, information about the origin of digital media, to help users judge trustworthiness. “When scrolling feeds, users want to know: ‘Can I trust this image or video?’” he said.

Dr. Farzaneh Badiei, founder of Digital Medusa, a digital governance research firm, stressed the need for a coordinated global response, noting the lack of a single international body focused on detecting manipulated media. She warned that fragmented standards could make harmful deepfakes more effective.

The ITU is developing standards for watermarking videos—which constitute 80% of internet traffic—to embed provenance data such as creator identity and timestamps.

Tomaz Levak, founder of Swiss firm Umanitek, called on the private sector to proactively adopt safety measures and educate users. “AI will become more powerful and faster… We must upskill people to avoid them becoming victims,” he said.