Posts

Singapore unveils new law empowering online safety commission to block harmful content

Singapore will establish a new online safety commission with authority to compel social media platforms and internet providers to block harmful online content, under a bill tabled in parliament on Wednesday.

The proposed law follows research by the Infocomm Media Development Authority (IMDA) in February, which found that more than half of verified user complaints about online harms — including child abuse, cyberbullying, and harassment — were not promptly addressed by major platforms.

The commission, which is expected to be operational by mid-2026, will have powers to order platforms to restrict or remove harmful content, ban perpetrators, and grant victims a right to reply. It will also be able to direct internet service providers to block access to harmful web pages or entire platforms within Singapore.

The new agency will oversee cases of doxxing, stalking, abuse of intimate images, and child exploitation, with further powers to target non-consensual data disclosures and incitement of enmity added in later phases.

The bill will be debated in the next parliamentary session. Minister for Digital Development and Information Josephine Teo said the initiative aims to address the persistent failure of online platforms to act on harmful content. “More often than not, platforms fail to take action to remove genuinely harmful content reported to them by victims,” Teo said.

The move expands Singapore’s regulatory oversight following the Online Criminal Harms Act, which took effect in February 2024. Under that law, the Home Affairs Ministry previously threatened Meta with fines of up to S$1 million ($771,664) for failing to combat impersonation scams on Facebook.

TikTok Collected Sensitive Data on Canadian Children, Probe Reveals

TikTok has pledged to strengthen safeguards to keep children off its platform after a Canadian investigation concluded that the company failed to adequately block underage users and protect their personal information.

The inquiry, led by Canada’s federal privacy commissioner Philippe Dufresne along with privacy watchdogs in Quebec, British Columbia, and Alberta, found that hundreds of thousands of Canadian children used TikTok annually despite the platform’s minimum age requirement of 13.

Investigators also determined that TikTok collected sensitive personal data from “a large number” of children and used it for marketing and content-targeting purposes. “TikTok collects vast amounts of personal information about its users, including children. This data is being used to target the content and ads that users see, which can have harmful impacts, particularly on youth,” Dufresne said at a press conference.

In response, TikTok agreed to adopt stricter age-verification systems, improve transparency about how user data is used, and prevent advertisers from directly targeting anyone under 18, except through broad categories such as language or approximate location. The company also expanded the privacy information available to Canadian users.

A TikTok spokesperson said the company was pleased regulators accepted several of its proposals to “further strengthen” protections for Canadian users, while noting disagreement with some of the findings. The spokesperson did not specify which ones.

The case comes amid growing global scrutiny of TikTok due to concerns about its ties to China. TikTok is owned by Beijing-based ByteDance, and governments worldwide—including the EU and the U.S.—have taken steps to restrict or ban the app on official devices.

In Canada, the government launched a review of TikTok’s planned expansion in 2023, which ultimately led to an order demanding the company shut down its Canadian operations over national security risks. TikTok is challenging that order.

Musk’s X Fined in Canada Over Failure to Remove Intimate Image

Elon Musk’s social media platform X has been fined C$100,000 ($72,307) by a Canadian tribunal for failing to remove a non-consensual intimate image, marking the first such penalty against an internet intermediary under British Columbia’s Intimate Image Protection Act.

Case Background

  • The Civil Resolution Tribunal first ruled in March that X must remove the image of a woman identified as “TR”.

  • Instead of removing it, X geofenced the content, blocking it in Canada but keeping it visible worldwide.

  • Tribunal Vice Chair Eric Regehr rejected X’s argument that the tribunal lacked authority outside British Columbia, stating the order was straightforward: remove the image.

Tribunal’s Decision

  • Regehr said X’s partial compliance left the victim exposed:

    “She lives in the knowledge that the vast majority of the world’s population can still see the intimate image on X.”

  • The fine imposed was the maximum allowed, with the option for the woman to request additional daily penalties of up to C$5,000 if noncompliance continues.

  • Compensation for the woman’s time was denied, partly due to AI-generated errors in her submissions.

Broader Implications

  • The ruling highlights growing global pressure on platforms like X to act against abusive and exploitative content.

  • British Columbia’s Ministry of Attorney General said it expects X to comply and pay fines, stressing it does not anticipate difficulties in enforcement.

  • X and its legal counsel did not respond to Reuters’ requests for comment.