Articles

Facebook removes page accused of harassing ICE agents after DOJ request

Meta Platforms has taken down a Facebook page that the U.S. Department of Justice said was being used to harass Immigration and Customs Enforcement (ICE) agents operating in Chicago, officials confirmed on Tuesday.

In a post on X (formerly Twitter), Attorney General Pam Bondi said the page was part of an effort to “dox and target” roughly 200 ICE officers deployed as part of President Donald Trump’s immigration enforcement campaign. Doxxing refers to the practice of publishing private information about individuals online, often to encourage harassment.

A Meta spokesperson confirmed the page’s removal, saying it violated Facebook’s policies against coordinated harm. The Justice Department did not provide further details, and Reuters was unable to access or review the page before it was taken down.

The takedown follows broader efforts by the Trump administration to clamp down on digital tools tracking ICE operations. Earlier this month, Apple and Google removed apps that allowed users to monitor ICE agent movements, following government pressure and threats of legal action against developers.

ICE has played a central role in Trump’s hardline immigration policy, carrying out frequent raids and arrests that have drawn criticism from human rights advocates. The administration, however, has accused left-wing activists of harassing and obstructing federal officers.

The decision also comes amid Meta’s attempts to repair its relationship with the Trump administration, following past clashes over content moderation and account suspensions. The company recently contributed $1 million to Trump’s inaugural fund and settled a lawsuit over his banned accounts for $25 million.

Meta introduces PG-13-style filters on Instagram to protect teen users

Meta Platforms has unveiled new PG-13-style content filters on Instagram, limiting what users under 18 can see as part of a broader effort to strengthen teen safety online. The update, modeled after the Motion Picture Association’s movie ratings, will automatically restrict access to posts featuring strong language, risky stunts, drug references, or other mature content, Meta said on Tuesday.

The new rules also extend to Meta’s generative AI tools, which will now be subject to similar content guidelines. Teen accounts will be automatically placed under PG-13 settings, though parents can apply stricter limits and adjust screen-time controls using a “limited content” mode.

The move comes amid growing criticism and legal scrutiny over Meta’s handling of youth safety. The company faces hundreds of lawsuits from parents and school districts accusing it of enabling addictive behavior and exposing minors to harmful material.

An earlier Reuters investigation found that some of Meta’s existing safety measures were ineffective or inconsistently applied, while advocacy groups accused Instagram of failing to protect teens from psychological harm.

“We hope this update reassures parents,” Meta said in a blog post. “We know teens may try to avoid these restrictions, which is why we’ll use age prediction technology to ensure appropriate protections even when users misreport their age.”

The new safeguards will roll out in the U.S., U.K., Australia, and Canada by year-end and will expand globally later. Meta said similar protections will soon be added to Facebook as regulators tighten oversight of social media and AI systems that interact with minors.

New York City sues tech giants for allegedly fueling youth mental health crisis

New York City has filed a sweeping federal lawsuit against Meta, Google, Snap, TikTok, and ByteDance, accusing them of addicting children to social media and worsening a mental health crisis among young users. The 327-page complaint, lodged in Manhattan federal court, seeks damages for gross negligence and public nuisance, alleging that platforms like Instagram, YouTube, Snapchat, and TikTok were deliberately engineered to exploit the psychology of youth for profit.

The lawsuit claims the companies’ products have contributed to rising rates of depression, sleep deprivation, and chronic absenteeism among minors. According to the city’s data, more than 77% of New York City high school students spend over three hours a day on screens, with 82% of girls reporting similar habits.

New York’s health commissioner declared social media a public health hazard earlier this year, citing growing taxpayer burdens to combat mental health challenges in schools. The city also linked compulsive platform use to dangerous behaviors such as “subway surfing,” which has caused at least 16 deaths since 2023.

The case joins over 2,000 similar lawsuits filed nationwide, now consolidated in federal court in Oakland, California. A spokesperson for Google rejected the allegations, saying YouTube is a streaming platform rather than a social network. Other defendants have not yet commented.

The city argues that the companies must be held accountable for the harm caused by their algorithms, which it says have created a costly and deadly youth mental health epidemic.