Articles

Singapore unveils new law empowering online safety commission to block harmful content

Singapore will establish a new online safety commission with authority to compel social media platforms and internet providers to block harmful online content, under a bill tabled in parliament on Wednesday.

The proposed law follows research by the Infocomm Media Development Authority (IMDA) in February, which found that more than half of verified user complaints about online harms — including child abuse, cyberbullying, and harassment — were not promptly addressed by major platforms.

The commission, which is expected to be operational by mid-2026, will have powers to order platforms to restrict or remove harmful content, ban perpetrators, and grant victims a right to reply. It will also be able to direct internet service providers to block access to harmful web pages or entire platforms within Singapore.

The new agency will oversee cases of doxxing, stalking, abuse of intimate images, and child exploitation, with further powers to target non-consensual data disclosures and incitement of enmity added in later phases.

The bill will be debated in the next parliamentary session. Minister for Digital Development and Information Josephine Teo said the initiative aims to address the persistent failure of online platforms to act on harmful content. “More often than not, platforms fail to take action to remove genuinely harmful content reported to them by victims,” Teo said.

The move expands Singapore’s regulatory oversight following the Online Criminal Harms Act, which took effect in February 2024. Under that law, the Home Affairs Ministry previously threatened Meta with fines of up to S$1 million ($771,664) for failing to combat impersonation scams on Facebook.

Meta introduces PG-13-style filters on Instagram to protect teen users

Meta Platforms has unveiled new PG-13-style content filters on Instagram, limiting what users under 18 can see as part of a broader effort to strengthen teen safety online. The update, modeled after the Motion Picture Association’s movie ratings, will automatically restrict access to posts featuring strong language, risky stunts, drug references, or other mature content, Meta said on Tuesday.

The new rules also extend to Meta’s generative AI tools, which will now be subject to similar content guidelines. Teen accounts will be automatically placed under PG-13 settings, though parents can apply stricter limits and adjust screen-time controls using a “limited content” mode.

The move comes amid growing criticism and legal scrutiny over Meta’s handling of youth safety. The company faces hundreds of lawsuits from parents and school districts accusing it of enabling addictive behavior and exposing minors to harmful material.

An earlier Reuters investigation found that some of Meta’s existing safety measures were ineffective or inconsistently enforced, while advocacy groups accused Instagram of failing to protect teens from psychological harm.

“We hope this update reassures parents,” Meta said in a blog post. “We know teens may try to avoid these restrictions, which is why we’ll use age prediction technology to ensure appropriate protections even when users misreport their age.”

The new safeguards will roll out in the U.S., UK, Australia, and Canada by year-end and will later expand globally. Meta said similar protections will soon be added to Facebook as regulators tighten oversight of social media and AI systems interacting with minors.

Brazilian police bust deepfake scam using Gisele Bündchen’s image in Instagram ads

Brazilian authorities have dismantled a nationwide fraud network that used deepfake videos of supermodel Gisele Bündchen and other celebrities in Instagram ads to trick victims into buying fake products, marking one of the country’s first major crackdowns on AI-powered online scams.

Police arrested four suspects this week and froze assets across five states, after investigators traced more than 20 million reais ($3.9 million) in suspicious transactions uncovered by Brazil’s anti–money laundering agency COAF.

The investigation began in August 2024, when a victim reported being deceived by an Instagram ad showing an AI-generated video of Bündchen promoting a nonexistent skincare product. Another fraudulent campaign featured the supermodel supposedly offering free suitcases, with users asked to pay only for shipping—items that never arrived.

According to Eibert Moreira Neto, head of the cybercrime unit in Rio Grande do Sul, the group ran a “series of scams” using deepfakes of multiple celebrities and fake betting platforms. Investigators believe the criminals operated at massive scale, collecting many small payments — usually under 100 reais ($19) — from victims who rarely reported the losses.

“That created a perverse situation,” explained investigator Isadora Galian. “The criminals enjoyed a kind of statistical immunity—they knew most people would not complain, so they operated without fear.”

Meta, owner of Instagram, said its policies ban ads that deceptively use public figures and that such content is removed “when detected.” The company added that it uses AI-based detection systems, trained review teams, and reporting tools to fight celebrity-impersonation scams.

A spokesperson for Bündchen’s team urged consumers to verify suspicious offers, avoid ads promising unrealistic discounts or giveaways, and report fraudulent content to authorities or official brand channels.

The case has broader implications for Brazil’s fight against digital deception. In June 2024, the Supreme Court ruled that social media platforms can be held liable for criminal ads if they fail to remove them swiftly—even without a court order.

The Rio Grande do Sul operation underscores the growing criminal use of deepfake technology, which allows scammers to replicate celebrity likenesses with stunning realism. What once required Hollywood budgets can now be done with cheap AI tools and a few clicks—a reality that’s forcing regulators, platforms, and the public to confront a new era of synthetic fraud.