Articles

Australia Orders AI Chatbot Firms to Reveal Child Protection Measures

Australia’s internet regulator has ordered four AI chatbot companies to disclose what steps they are taking to protect children from harmful and sexual content, in the country’s latest move to tighten oversight of artificial intelligence.

The eSafety Commissioner said it sent legal notices to Character Technologies — the creator of the celebrity chatbot platform Character.ai — along with Glimpse.AI, Chai Research, and Chub AI, demanding detailed reports on how they prevent child sexual exploitation, exposure to pornography, and content promoting suicide or eating disorders.

“There can be a darker side to some of these services,” said Commissioner Julie Inman Grant, warning that many chatbots can engage in sexually explicit conversations with minors and even encourage self-harm or disordered eating.

Under Australia’s Online Safety Act, the regulator can compel companies to disclose their internal safety protocols or face fines of up to A$825,000 ($536,000) per day.

The crackdown follows growing concern about AI companions forming emotional or sexual bonds with teenagers. Some Australian schools have reported students as young as 13 spending more than five hours daily interacting with chatbots, sometimes in explicit exchanges.

The most prominent firm targeted, Character.ai, faces a lawsuit in the U.S. after a mother alleged her 14-year-old son died by suicide following interactions with an AI companion. The company has denied wrongdoing, saying it added pop-up safety warnings and links to suicide prevention hotlines for users expressing self-harm thoughts.

The eSafety office said it did not include OpenAI in this round of inquiries, as ChatGPT is covered under a separate industry code that takes effect in March 2026.

Australia, already known for its strict digital regulation, will introduce new rules in December requiring social media firms to block or deactivate accounts of users under 16 or risk penalties of up to A$49.5 million.

The move positions Australia at the forefront of AI child safety regulation, as governments worldwide race to address the unintended dangers of increasingly lifelike AI companions.

Australia’s eSafety Commissioner Criticizes YouTube, Apple for Failing to Address Child Abuse Material

Australia’s internet safety regulator, the eSafety Commissioner, released a report on Wednesday accusing major social media platforms, notably YouTube and Apple, of “turning a blind eye” to online child sexual abuse material (CSAM). The watchdog highlighted YouTube’s unresponsiveness to inquiries and its failure to track user reports and response times related to CSAM.

The report found that YouTube, along with Apple, could not provide data on the number of user reports about child abuse content or the speed of their responses. The Australian government recently decided to include YouTube in its groundbreaking ban on social media use for teenagers, reversing, on the Commissioner's advice, an earlier exemption for the platform.

Julie Inman Grant, eSafety Commissioner, stated that these companies fail to prioritize child protection and are allowing serious crimes to occur unchecked on their platforms. She emphasized that no other consumer-facing industry would be permitted to operate while enabling such crimes.

In response, a Google spokesperson clarified that eSafety’s criticisms were based on reporting metrics rather than overall safety performance, noting that YouTube proactively removes over 99% of abuse content before it is flagged or viewed.

The report also assessed other platforms, including Meta (Facebook, Instagram, Threads), Apple, Discord, Microsoft, Skype, Snap, and WhatsApp, finding “safety deficiencies” such as failures to detect or block livestreaming of abuse content, inadequate reporting mechanisms, and inconsistent use of hash-matching technology to identify known abuse images.
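The hash-matching the report refers to works by fingerprinting each uploaded image and comparing that fingerprint against a database of fingerprints of known abuse material. The sketch below illustrates the idea with a plain SHA-256 digest and a hypothetical `KNOWN_HASHES` set; production systems instead use perceptual hashes (such as Microsoft's PhotoDNA) that tolerate resizing and re-encoding, with hash databases maintained by child-protection bodies rather than the platforms themselves.

```python
import hashlib

# Hypothetical stand-in for a database of fingerprints of known material.
# The entry below is simply the SHA-256 digest of the bytes b"foo".
KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def matches_known_hash(data: bytes) -> bool:
    """Return True if the upload's SHA-256 digest appears in the known-hash set."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES
```

Because a cryptographic hash changes completely if even one byte of the file changes, real deployments rely on perceptual hashing so that trivially altered copies of known images still match.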

Despite warnings in prior years, some companies have not sufficiently addressed these gaps. The report specifically noted that Apple and YouTube did not disclose how many trust and safety staff they employ or detailed information about user reports on child abuse content.

Australia’s Teen Social Media Ban Trial Finds Age-Checking Software Can Work

Organizers of the world’s largest trial of age assurance technology say that software-based methods to enforce Australia’s upcoming ban on under-16s using social media are feasible, despite some limitations. The government-commissioned Age Assurance Technology Trial involved over 1,000 Australian school students and hundreds of adults.

Starting this December, companies such as Meta (owner of Facebook and Instagram), Snapchat, and TikTok must demonstrate they take reasonable steps to block users under 16 or face fines up to A$49.5 million (approximately $32 million). This makes Australia the first country to implement such a ban.

Child protection advocates, tech groups, and young people have questioned whether the ban can be enforced, citing circumvention methods such as Virtual Private Networks (VPNs), which mask users' locations.

Tony Allen, CEO of the UK-based Age Check Certification Scheme overseeing the trial, stated, “Age assurance can be done in Australia privately, efficiently and effectively.” The trial concluded there are “no significant tech barriers” to deploying such software, though no single solution works perfectly in all cases.

Allen also highlighted data privacy risks, noting that some firms may collect more user data than regulators or law enforcement would ever require.

While detailed data and product names were not disclosed, a final report will be submitted to the government next month to guide upcoming industry consultations before the December enforcement deadline.

The office of Australia’s eSafety Commissioner commented that preliminary results indicate age assurance tech, if used properly alongside other methods, can be “private, robust and effective.”

Australia’s approach is being closely monitored internationally as other governments consider measures to protect children from social media exposure.