Articles

Instagram Adds Teen Safety Alerts

Instagram will begin notifying parents when teenagers repeatedly search for content related to self-harm or suicide within a short timeframe.

The feature will be available through the platform’s optional supervision settings, allowing guardians to receive alerts about concerning search patterns.

The update comes as several governments consider stricter measures to protect young users online, including potential restrictions on access to social media for minors.

The platform said the alerts build on existing safeguards that block harmful searches and redirect users to support resources.

The move reflects growing global pressure on digital platforms to strengthen safety mechanisms for younger audiences.

Internal Meta Study Finds Instagram Shows More “Eating Disorder-Adjacent” Content to Vulnerable Teens

An internal Meta study reviewed by Reuters has revealed that teenagers who report feeling bad about their bodies after using Instagram are shown significantly more “eating disorder-adjacent” content than their peers. The internal document, marked “Do not distribute,” highlights serious concerns about how Instagram’s recommendation system interacts with vulnerable teens.

Meta researchers surveyed 1,149 teenagers throughout the 2023–2024 school year and analyzed the posts appearing in their feeds. Among the 223 teens who said Instagram regularly made them feel worse about their bodies, 10.5% of their feed contained body-focused or disordered-eating-related content — three times higher than the 3.3% seen by other teens. The flagged posts prominently displayed body parts, expressed judgment about physical appearance, or included material associated with negative body image.

Researchers also found that these same teens encountered more “mature” and “provocative” content overall — material involving risk-taking, suffering, and cruelty — which made up 27% of their feed compared to 13.6% for others. While the study could not prove Instagram directly worsens self-esteem, the correlation raised alarms among Meta’s internal experts.

Meta spokesperson Andy Stone said the findings show the company’s commitment to making platforms safer for young people. However, the report revealed that Meta’s moderation tools failed to detect 98.5% of sensitive content potentially inappropriate for teens. Pediatric experts like Jenny Radesky from the University of Michigan called the results “deeply disturbing,” warning that Instagram’s algorithm may be “profiling vulnerable teens and feeding them more harmful content.”

The findings come as Meta faces ongoing lawsuits and investigations in the United States over its alleged failure to protect minors and the mental health risks tied to Instagram’s design.

Meta Introduces PG-13-Style Filters on Instagram to Protect Teen Users

Meta Platforms has unveiled new PG-13-style content filters on Instagram, limiting what users under 18 can see as part of a broader effort to strengthen teen safety online. The update, modeled after the Motion Picture Association’s movie ratings, will automatically restrict access to posts featuring strong language, risky stunts, drug references, or other mature content, Meta said on Tuesday.

The new rules also extend to Meta’s generative AI tools, which will now be subject to similar content guidelines. Teen accounts will be automatically placed under PG-13 settings, though parents can apply stricter limits through a “limited content” mode and adjust screen-time controls.

The move comes amid growing criticism and legal scrutiny over Meta’s handling of youth safety. The company faces hundreds of lawsuits from parents and school districts accusing it of enabling addictive behavior and exposing minors to harmful material.

An earlier Reuters investigation revealed that some of Meta’s existing safety measures were ineffective or inconsistently enforced, while advocacy groups accused Instagram of failing to protect teens from psychological harm.

“We hope this update reassures parents,” Meta said in a blog post. “We know teens may try to avoid these restrictions, which is why we’ll use age prediction technology to ensure appropriate protections even when users misreport their age.”

The new safeguards will roll out in the U.S., UK, Australia, and Canada by year-end and will later expand globally. Meta said similar protections will soon be added to Facebook as regulators tighten oversight of social media and AI systems interacting with minors.