Articles

Australia Orders AI Chatbot Firms to Reveal Child Protection Measures

Australia’s internet regulator has ordered four AI chatbot companies to disclose what steps they are taking to protect children from harmful and sexual content, in the country’s latest move to tighten oversight of artificial intelligence.

The eSafety Commissioner said it sent legal notices to Character Technologies — the creator of the celebrity chatbot platform Character.ai — along with Glimpse.AI, Chai Research, and Chub AI, demanding detailed reports on how they prevent child sexual exploitation, exposure to pornography, and content promoting suicide or eating disorders.

“There can be a darker side to some of these services,” said Commissioner Julie Inman Grant, warning that many chatbots can engage in sexually explicit conversations with minors and even encourage self-harm or disordered eating.

Under Australia’s Online Safety Act, the regulator can compel companies to disclose their internal safety protocols or face fines of up to A$825,000 ($536,000) per day.

The crackdown follows growing concern about AI companions forming emotional or sexual bonds with teenagers. Some Australian schools have reported students as young as 13 spending more than five hours daily interacting with chatbots, sometimes in explicit exchanges.

The most prominent firm targeted, Character.ai, faces a lawsuit in the U.S. after a mother alleged her 14-year-old son died by suicide following interactions with an AI companion. The company has denied wrongdoing, saying it added pop-up safety warnings and links to suicide prevention hotlines for users expressing self-harm thoughts.

The eSafety office said it did not include OpenAI in this round of inquiries, as ChatGPT is covered under a separate industry code that takes effect in March 2026.

Australia, already known for its strict digital regulation, will introduce new rules in December requiring social media firms to block or deactivate accounts of users under 16 or risk penalties of up to A$49.5 million.

The move positions Australia at the forefront of AI child safety regulation, as governments worldwide race to address the unintended dangers of increasingly lifelike AI companions.

FTC Probes AI Chatbots from Alphabet, Meta, OpenAI and Others

The U.S. Federal Trade Commission (FTC) announced on Thursday that it has launched an inquiry into major providers of AI-powered consumer chatbots, including Alphabet (Google), Meta Platforms, OpenAI, Character.AI, Snap, and xAI.

Focus of the Inquiry

The FTC is demanding details on:

  • How chatbots are tested, measured, and monitored for potential negative impacts.

  • Monetization strategies, including how companies profit from user engagement.

  • Processing of user inputs and the generation of responses.

  • Use of conversation data, and whether it is exploited for advertising, training, or other commercial purposes.

Rising Scrutiny

Generative AI tools have recently drawn criticism following safety scandals:

  • Reuters revealed internal Meta policies that allowed chatbots to engage in romantic conversations with children.

  • OpenAI is facing a lawsuit alleging ChatGPT contributed to a teenager’s suicide.

  • Character.AI is under a separate lawsuit tied to another teen death.

Company Responses

  • Character.AI: said it will cooperate, highlighting new safety features rolled out over the past year.

  • Snap: welcomed the FTC’s focus, saying it supports policies that balance innovation with community protection.

  • Meta: declined to comment.

  • Alphabet, OpenAI, xAI: did not immediately respond.

Bigger Picture

The inquiry reflects Washington’s growing concern over AI risks, especially for children and vulnerable users. Regulators are seeking to balance innovation with consumer protection, while ongoing lawsuits and safety scandals add urgency to calls for stricter oversight.

U.S. DOJ Probes Google Over Licensing Deal with Character.AI

The U.S. Department of Justice is investigating whether Google’s licensing deal with AI startup Character.AI violated antitrust laws, according to a report by Bloomberg Law. The probe focuses on whether the deal was deliberately structured to sidestep formal merger review processes.


Key Points:

  • Nature of the Deal: In August 2024, Google secured a non-exclusive license to Character.AI’s large language model (LLM) technology and subsequently hired the company’s co-founders, Noam Shazeer and Daniel De Freitas—both former Google engineers.

  • Regulatory Concern: Antitrust officials are questioning if this agreement—despite not involving an acquisition—effectively gave Google undue influence or control over Character.AI’s technology, potentially undermining market competition in the fast-growing generative AI sector.

  • Google’s Response: A spokesperson stated that Google has no ownership stake in Character.AI and that the company remains independent. “We’re always happy to answer any questions from regulators,” the spokesperson said.

  • Ongoing Scrutiny: The probe is at an early stage and may not result in formal action, but it signals heightened regulatory vigilance over AI partnerships. The DOJ can still act if the deal is deemed anti-competitive, even without triggering a formal merger review.

  • Industry Trend: Similar AI talent and technology acquisition strategies have been employed by other tech giants:

    • Microsoft, which paid $650 million to license Inflection AI’s models and onboard its team.

    • Amazon, which hired Adept’s co-founders and staff in 2024.

    Both deals have also drawn regulatory interest.

  • Broader Context: Google is already facing two major antitrust lawsuits from the DOJ targeting its dominance in search and digital advertising. Earlier this month, the Federal Trade Commission (FTC) supported a proposal requiring Google to share its search data with rivals.


Strategic Implications:

The inquiry reflects regulators’ growing concern that Big Tech may be circumventing antitrust oversight through creative structuring of AI-related partnerships. As companies compete to lead in generative AI, expect increased scrutiny on licensing, hiring, and technology transfer deals that could entrench market power.