Articles

France’s Lower House Backs Social Media Ban for Children Under 15

France’s National Assembly has approved legislation that would ban children under the age of 15 from accessing social media platforms, reflecting growing concern over online bullying and the impact of digital environments on young users’ mental health. The decision marks a significant step in France’s efforts to strengthen child protection in the digital sphere.

Lawmakers supporting the bill argue that social media platforms expose minors to harassment, addictive content patterns and psychological pressure at a critical stage of emotional development. The legislation seeks to tighten age verification requirements and place greater responsibility on technology companies to prevent underage access to their services.

The vote comes amid a broader European debate on regulating social media use among minors. Several governments have raised alarms over rising rates of anxiety, depression and cyberbullying linked to excessive screen time and online interaction. French officials say the measure is intended to give families and schools stronger tools to manage children’s digital habits.

The bill now moves to further legislative review before it can become law. If fully approved, the restrictions could significantly change how social media platforms operate in France and how young users engage with online content.

Philippines to Restore Access to Grok After Developer Commits to Safety Fixes

The Philippines will restore access to Grok, an artificial intelligence chatbot developed by xAI, after the company committed to removing image-manipulation features that raised child safety concerns, authorities said on Wednesday.

The Cybercrime Investigation and Coordinating Center said Grok’s developer confirmed the platform would no longer use content manipulation tools. The agency added that it would continue monitoring the chatbot even after access is restored to ensure compliance with Philippine laws and regulations.

The Philippines blocked Grok last week amid concerns that it could generate sexualised images, including content posing potential risks to children. The decision followed similar actions by regulators in multiple regions, as governments stepped up scrutiny of AI tools capable of producing explicit or harmful material.

Authorities said the restoration reflects assurances from the developer that safeguards are being strengthened. The move underscores the growing pressure on AI companies to balance innovation with effective content moderation as regulators worldwide tighten oversight of generative technologies.

Australia Orders AI Chatbot Firms to Reveal Child Protection Measures

Australia’s internet regulator has ordered four AI chatbot companies to disclose what steps they are taking to protect children from harmful and sexual content, in the country’s latest move to tighten oversight of artificial intelligence.

The eSafety Commissioner said it sent legal notices to Character Technologies — the creator of the celebrity chatbot platform Character.ai — along with Glimpse.AI, Chai Research, and Chub AI, demanding detailed reports on how they prevent child sexual exploitation, exposure to pornography, and content promoting suicide or eating disorders.

“There can be a darker side to some of these services,” said Commissioner Julie Inman Grant, warning that many chatbots can engage in sexually explicit conversations with minors and even encourage self-harm or disordered eating.

Under Australia’s Online Safety Act, the regulator can compel companies to disclose their internal safety protocols or face fines of up to A$825,000 ($536,000) per day.

The crackdown follows growing concern about AI companions forming emotional or sexual bonds with teenagers. Some Australian schools have reported students as young as 13 spending more than five hours daily interacting with chatbots, sometimes in explicit exchanges.

The most prominent firm targeted, Character.ai, faces a lawsuit in the U.S. after a mother alleged her 14-year-old son died by suicide following interactions with an AI companion. The company has denied wrongdoing, saying it added pop-up safety warnings and links to suicide prevention hotlines for users expressing self-harm thoughts.

The eSafety office said it did not include OpenAI in this round of inquiries, as ChatGPT is covered under a separate industry code that takes effect in March 2026.

Australia, already known for its strict digital regulation, will introduce new rules in December requiring social media firms to block or deactivate accounts of users under 16 or risk penalties of up to A$49.5 million.

The move positions Australia at the forefront of AI child safety regulation, as governments worldwide race to address the unintended dangers of increasingly lifelike AI companions.