Articles

Singapore unveils new law empowering online safety commission to block harmful content

Singapore will establish a new online safety commission with authority to compel social media platforms and internet providers to block harmful online content, under a bill tabled in parliament on Wednesday.

The proposed law follows research by the Infocomm Media Development Authority (IMDA) in February, which found that more than half of verified user complaints about online harms — including child abuse, cyberbullying, and harassment — were not promptly addressed by major platforms.

The commission, which is expected to be operational by mid-2026, will have powers to order platforms to restrict or remove harmful content, ban perpetrators, and grant victims a right to reply. It will also be able to direct internet service providers to block access to harmful web pages or entire platforms within Singapore.

The new agency will oversee cases of doxxing, stalking, abuse of intimate images, and child exploitation, with further powers to target non-consensual data disclosures and incitement of enmity added in later phases.

The bill will be debated in the next parliamentary session. Minister for Digital Development and Information Josephine Teo said the initiative aims to address the persistent failure of online platforms to act on harmful content. “More often than not, platforms fail to take action to remove genuinely harmful content reported to them by victims,” Teo said.

The move expands Singapore’s regulatory oversight following the Online Criminal Harms Act, which took effect in February 2024. Under that law, the Home Affairs Ministry previously threatened Meta with fines of up to S$1 million ($771,664) for failing to combat impersonation scams on Facebook.

Australia’s Teen Social Media Ban Praised at UN

Australian Prime Minister Anthony Albanese promoted his government’s world-first ban on social media for teens under 16 during an event in New York, calling the move a necessary step to address the “constantly evolving” risks digital platforms pose for children.

The law, which takes effect in December, makes Australia the first country to prohibit those under 16 from creating social media accounts. Instead of blanket age verification, the government wants platforms to use artificial intelligence and behavioral data to estimate user ages.

“It isn’t foolproof, but it is a crucial step in the right direction,” Albanese said at the Protecting Children in the Digital Age event on the sidelines of the UN General Assembly.

European Commission President Ursula von der Leyen praised the measure, saying she was “inspired by Australia’s example” and that Europe would be “watching and learning” as it considers its own policies.

Australia’s center-left government introduced the law citing research linking excessive social media use among young teens to mental health issues, bullying, misinformation, and harmful body image content. The minimum age for accounts will rise from 13 to 16.

Albanese framed the law as both sensible and overdue, saying it would give teens “three more years of being shaped by real-life experience, not algorithms.”

Australia’s Teen Social Media Ban Faces a New Wildcard: Teenagers

Australia is preparing to implement the world’s first national social media ban for users under 16, but new challenges have emerged from the very group the law aims to protect: teenagers themselves.

Thirteen-year-old Jasmine Elkin from Perth recently tested five different photo-based age verification software products, alongside about 30 other students. While impressed by some systems’ ability to estimate age to the exact month, Elkin doubts the ban’s effectiveness, noting that young users could easily bypass it by asking older siblings to take verification photos.

This concern reflects a broader worry shared by child protection advocates, tech companies, and trial organizers: the technology works, but young people are highly skilled at finding workarounds.

Starting in December, major social media platforms such as Facebook, Instagram, Snapchat, and TikTok will face fines up to A$49.5 million ($32.17 million) if they fail to take “reasonable steps” to prevent users under 16 from accessing their services. Currently, these platforms require users to be at least 13 to create accounts.

How well Australia’s ban succeeds may influence other countries. Britain, France, and Singapore are pursuing similar restrictions, and several U.S. states, including Florida, have enacted age-limit laws that are now being challenged on free speech grounds. Elon Musk, owner of X (formerly Twitter), has criticized the Australian law and its regulator, deriding the latter as a “censorship commissar.”

Trial organizers say nearly 60 products were considered, with about a dozen tested by teenagers in May. The teenagers proved so adept with the technology that organizers expanded the number of products tested and shortened testing times. The software relied mainly on selfies to estimate age, since other methods, such as credit card checks, were impractical for teens, and hand-gesture recognition produced imprecise age estimates near the 16-year cutoff.

The trial’s detailed results will be presented on June 20, with a full report to the government expected by the end of July. This will inform the eSafety Commissioner’s recommendations. The government has cited risks from cyberbullying, harmful body image content, and misogyny as reasons for the law.

Despite the technology’s promise, uncertainties remain about how effective it needs to be and whether it can keep pace with teenagers’ ingenuity. Some trial participants said they would find ways around blocks, while others accepted it as a step toward safer online environments.

Communications Minister Anika Wells’s spokesperson emphasized that age restrictions are “not the end-all be-all” but a positive move to protect young people online.