Articles

Australia Orders AI Chatbot Firms to Reveal Child Protection Measures

Australia’s internet regulator has ordered four AI chatbot companies to disclose what steps they are taking to protect children from harmful and sexual content, in the country’s latest move to tighten oversight of artificial intelligence.

The eSafety Commissioner said it sent legal notices to Character Technologies — the creator of the celebrity chatbot platform Character.ai — along with Glimpse.AI, Chai Research, and Chub AI, demanding detailed reports on how they prevent child sexual exploitation, exposure to pornography, and content promoting suicide or eating disorders.

“There can be a darker side to some of these services,” said Commissioner Julie Inman Grant, warning that many chatbots can engage in sexually explicit conversations with minors and even encourage self-harm or disordered eating.

Under Australia’s Online Safety Act, the regulator can compel companies to disclose their internal safety protocols or face fines of up to A$825,000 ($536,000) per day.

The crackdown follows growing concern about AI companions forming emotional or sexual bonds with teenagers. Some Australian schools have reported students as young as 13 spending more than five hours daily interacting with chatbots, sometimes in explicit exchanges.

The most prominent firm targeted, Character.ai, faces a lawsuit in the U.S. after a mother alleged her 14-year-old son died by suicide following interactions with an AI companion. The company has denied wrongdoing, saying it added pop-up safety warnings and links to suicide prevention hotlines for users expressing self-harm thoughts.

The eSafety office said it did not include OpenAI in this round of inquiries, as ChatGPT is covered under a separate industry code that takes effect in March 2026.

Australia, already known for its strict digital regulation, will introduce new rules in December requiring social media firms to block or deactivate accounts of users under 16 or risk penalties of up to A$49.5 million.

The move positions Australia at the forefront of AI child safety regulation, as governments worldwide race to address the unintended dangers of increasingly lifelike AI companions.

Right-Wing Media Figures and AI Pioneers Unite to Call for Superintelligent AI Ban

A coalition of U.S. right-wing media figures and AI pioneers has issued a joint statement urging a global ban on developing superintelligent artificial intelligence, warning that progress toward machines exceeding human cognition must halt until society can ensure safety and democratic oversight.

The initiative, announced Wednesday by the Future of Life Institute (FLI), includes signatures from Steve Bannon, Glenn Beck, and tech luminaries Geoffrey Hinton and Yoshua Bengio—two of the so-called “godfathers of AI.” The non-profit, founded in 2014 and initially supported by Elon Musk and tech investor Jaan Tallinn, has long advocated for responsible AI development and limits on advanced machine intelligence.

The statement calls for governments worldwide to prohibit the creation of AI systems capable of surpassing human intelligence until “science shows a safe way forward” and “the public demands it.” It argues that current AI development races are reckless and could produce technologies that threaten human autonomy, stability, and safety.

The unusual alliance between conservative media figures and leading scientists highlights the broadening political and cultural anxiety surrounding AI’s rapid evolution. It also reflects growing skepticism on the populist right, where some commentators have warned that unchecked AI could concentrate power in corporate and political elites.

While many in the technology industry and the U.S. government have dismissed calls for AI moratoriums as harmful to innovation and economic competitiveness, the involvement of influential figures like Bannon, Beck, and Apple co-founder Steve Wozniak could amplify public debate. Other signatories include former Irish President Mary Robinson and Virgin Group founder Richard Branson.

Supporters of the ban say the move is not anti-technology but a precautionary measure. “The race to build superintelligent AI must not outpace our ability to control it,” said an FLI spokesperson. “Without democratic input and safety guarantees, the risks are existential.”

The statement follows a broader series of warnings from experts and public figures, including Musk and OpenAI co-founder Sam Altman, who have both urged the creation of global AI safety frameworks.

OpenAI to Permit Mature Content on ChatGPT for Verified Adults From December

OpenAI will begin allowing mature content on ChatGPT starting in December for users who verify their age, CEO Sam Altman announced on Tuesday. The decision marks a major policy shift under OpenAI’s new “treat adult users like adults” principle, following earlier restrictions that limited the chatbot’s ability to handle sensitive topics.

Altman said on X (formerly Twitter) that the company made ChatGPT “pretty restrictive” to avoid harm to users experiencing mental distress, which he acknowledged had made the chatbot “less useful or enjoyable” for others. “As we roll out age-gating more fully … we will allow even more, like erotica for verified adults,” he said.

The move comes as OpenAI develops new safety tools and moderation systems aimed at identifying mental health risks and ensuring appropriate usage. Altman added that the company now feels confident it can safely relax restrictions for most adult users while maintaining strong protections for minors.

In parallel, OpenAI plans to roll out a customization feature that lets users adjust ChatGPT’s tone and personality, including more expressive or conversational styles. “If you want ChatGPT to act more human-like or friendly, it should — but only if you want it,” Altman said.

The announcement came the same day Meta introduced new PG-13-style content filters on Instagram, underscoring a growing trend among tech firms of tailoring content standards to user age and verified consent.