Australia Orders AI Chatbot Firms to Reveal Child Protection Measures

Australia’s internet regulator has ordered four AI chatbot companies to disclose what steps they are taking to protect children from harmful and sexual content, in the country’s latest move to tighten oversight of artificial intelligence.

The eSafety Commissioner said it sent legal notices to Character Technologies — the creator of the celebrity chatbot platform Character.ai — along with Glimpse.AI, Chai Research, and Chub AI, demanding detailed reports on how they prevent child sexual exploitation, exposure to pornography, and content promoting suicide or eating disorders.

“There can be a darker side to some of these services,” said Commissioner Julie Inman Grant, warning that many chatbots can engage in sexually explicit conversations with minors and even encourage self-harm or disordered eating.

Under Australia’s Online Safety Act, the regulator can compel companies to disclose their internal safety protocols or face fines of up to A$825,000 ($536,000) per day.

The crackdown follows growing concern about AI companions forming emotional or sexual bonds with teenagers. Some Australian schools have reported students as young as 13 spending more than five hours daily interacting with chatbots, sometimes in explicit exchanges.

The most prominent firm targeted, Character.ai, faces a lawsuit in the U.S. after a mother alleged her 14-year-old son died by suicide following interactions with an AI companion. The company has denied wrongdoing, saying it has added pop-up safety warnings and links to suicide-prevention hotlines for users who express thoughts of self-harm.

The eSafety office said it did not include OpenAI in this round of inquiries, as ChatGPT is covered under a separate industry code that takes effect in March 2026.

Australia, already known for its strict digital regulation, will introduce new rules in December requiring social media firms to block or deactivate accounts of users under 16 or risk penalties of up to A$49.5 million.

The move positions Australia at the forefront of AI child safety regulation, as governments worldwide race to address the unintended dangers of increasingly lifelike AI companions.

Dutch Regulator Warns Voters Against Using AI Chatbots for Election Guidance

The Dutch Data Protection Authority (AP) has urged voters not to rely on AI chatbots for election advice, warning that the systems deliver unreliable and biased recommendations ahead of the October 29 national election. The regulator found that chatbots frequently directed users toward just two major political parties — the far-right Freedom Party (PVV) and the Labour-Green Left coalition — despite the Netherlands’ highly fragmented political landscape.

According to the AP’s tests, chatbots advised users to vote for one of those two blocs in 56% of cases, even when provided with the campaign programs of smaller parties. “Chatbots may seem like clever tools, but as a voting aid, they consistently fail,” said Monique Verdier, the watchdog’s vice-chair, adding that their internal operations are “unclear and difficult to verify.”

While the watchdog did not identify the four chatbots tested, it warned that their underlying algorithms may be inadvertently promoting political polarization by amplifying dominant parties on opposite ends of the spectrum. Current polls put the Freedom Party at around 20% and the Labour-Green Left coalition at about 16%, underscoring their outsized presence in public discourse.

The Dutch election follows the collapse of a right-wing coalition earlier this year, leaving the country under a caretaker government and setting the stage for a contest between conservative and centrist forces. Although it is unclear how many citizens are turning to AI tools for political guidance, the regulator said their use is “growing,” noting that more than 13 million voters are eligible to participate.

AI Chatbots Reshape India’s $283 Billion IT Industry, Threatening Call-Center Jobs

In bustling offices across India, artificial intelligence chatbots are taking over the headsets once worn by millions of call-center workers. Startups like LimeChat are leading the charge, building generative AI systems that can handle customer inquiries with human-like fluency — and at a fraction of the cost.

LimeChat claims its chatbots can reduce the number of human agents needed to manage 10,000 monthly customer queries by up to 80%. “Once you hire a LimeChat agent, you never have to hire again,” said co-founder Nikhil Gupta, whose company has already automated thousands of jobs and now handles 70% of customer complaints for its clients.

This rapid shift marks a turning point for India’s $283 billion IT and business process outsourcing sector, which employs 1.65 million people in call centers, data management, and payroll. While India became the world’s “back office” thanks to cheap labor and English proficiency, automation now threatens that foundation.

Despite concerns over job losses, the government is embracing AI’s potential. Prime Minister Narendra Modi insists that “work does not disappear due to technology — it changes,” even as hiring growth in the sector slows sharply. Analysts warn that AI could cut call-center revenues by 50% in the next five years.

Yet not everyone is losing. Startups like Haptik, acquired by Reliance, and LimeChat are thriving. Haptik says its AI agents cost as little as $120 per month and can cut support costs by 30%. Meanwhile, training centers in Hyderabad’s Ameerpet district have pivoted from teaching Java to teaching AI and prompt engineering, preparing students for a new era of work.

The outcome of India’s AI gamble could shape how developing economies balance automation and employment — a test of whether embracing disruption will create prosperity or deepen inequality.