Articles

Grok AI Floods X With Sexualized Images, Raising Global Alarm

X’s built-in AI chatbot Grok has generated a wave of sexualized images of women — and in some cases minors — after users prompted the tool to digitally alter real photos, a Reuters investigation found.

One victim, Brazilian musician Julie Yukari, said Grok created near-nude images of her after users asked the bot to strip her clothing from a harmless photo. Similar incidents have appeared widely on X, with Reuters documenting dozens of successful requests to place women in highly revealing outfits. Reuters also identified several cases involving sexualized images of children.

The backlash has spread internationally. French ministers said the content was “manifestly illegal” and reported X to prosecutors and regulators, while India’s IT ministry warned the platform had failed to stop the generation of obscene material. U.S. regulators including the Federal Communications Commission and the Federal Trade Commission declined to comment.

Experts said the outcome was foreseeable. AI watchdogs warned last year that Grok’s image tools could easily be abused to create non-consensual deepfakes. “This was entirely predictable and avoidable,” said Dani Pinter of the National Center on Sexual Exploitation, blaming weak safeguards and content moderation.

X owner Elon Musk appeared to mock the controversy by responding with laughing emojis to AI-generated bikini images, including ones depicting himself. xAI, which develops Grok, previously dismissed reports of sexualized images of minors with the statement: “Legacy Media Lies.”

New Zealand Parliament to Debate Ban on Teen Social Media Use

New Zealand lawmakers are preparing to debate a bill that would restrict social media access for children under 16, marking a major step in the country’s push to address online harms among young people. The proposal, introduced by National Party MP Catherine Wedd, would require social media platforms to implement age verification systems similar to Australia’s pioneering legislation passed in 2024.

The bill, first submitted in May, was selected on Thursday for parliamentary consideration through the country’s random ballot process for members’ bills. While it has backing from the ruling National Party, coalition partners have yet to confirm their support, leaving its passage uncertain.

Prime Minister Christopher Luxon has voiced growing concern about the mental health impact of social media on teenagers, citing issues such as misinformation, cyberbullying, and body image pressure. A parliamentary committee is also studying the wider effects of online harm, with a full report expected in early 2026.

Civil liberties group PILLAR has criticized the proposal, warning that mandatory age checks could endanger privacy and limit online freedoms. Executive Director Nathan Seiuli called the measure “lazy policymaking” that fails to protect children effectively.

Australia Orders AI Chatbot Firms to Reveal Child Protection Measures

Australia’s internet regulator has ordered four AI chatbot companies to disclose what steps they are taking to protect children from harmful and sexual content, in the country’s latest move to tighten oversight of artificial intelligence.

The eSafety Commissioner said it sent legal notices to Character Technologies — the creator of the celebrity chatbot platform Character.ai — along with Glimpse.AI, Chai Research, and Chub AI, demanding detailed reports on how they prevent child sexual exploitation, exposure to pornography, and content promoting suicide or eating disorders.

“There can be a darker side to some of these services,” said Commissioner Julie Inman Grant, warning that many chatbots can engage in sexually explicit conversations with minors and even encourage self-harm or disordered eating.

Under Australia’s Online Safety Act, the regulator can compel companies to disclose their internal safety protocols or face fines of up to A$825,000 ($536,000) per day.

The crackdown follows growing concern about AI companions forming emotional or sexual bonds with teenagers. Some Australian schools have reported students as young as 13 spending more than five hours daily interacting with chatbots, sometimes in explicit exchanges.

The most prominent firm targeted, Character.ai, faces a lawsuit in the U.S. after a mother alleged her 14-year-old son died by suicide following interactions with an AI companion. The company has denied wrongdoing, saying it added pop-up safety warnings and links to suicide prevention hotlines for users expressing self-harm thoughts.

The eSafety office said it did not include OpenAI in this round of inquiries, as ChatGPT is covered under a separate industry code that takes effect in March 2026.

Australia, already known for its strict digital regulation, will introduce new rules in December requiring social media firms to block or deactivate accounts of users under 16 or risk penalties of up to A$49.5 million.

The move positions Australia at the forefront of AI child safety regulation, as governments worldwide race to address the emerging dangers of increasingly lifelike AI companions.