Articles

Meta introduces PG-13-style filters on Instagram to protect teen users

Meta Platforms has unveiled new PG-13-style content filters on Instagram, limiting what users under 18 can see as part of a broader effort to strengthen teen safety online. The update, modeled after the Motion Picture Association’s movie ratings, will automatically restrict access to posts featuring strong language, risky stunts, drug references, or other mature content, Meta said on Tuesday.

The new rules also extend to Meta’s generative AI tools, which will now be subject to similar content guidelines. Teen accounts will be automatically placed under PG-13 settings, though parents can apply stricter limits and adjust screen-time controls using a “limited content” mode.

The move comes amid growing criticism and legal scrutiny over Meta’s handling of youth safety. The company faces hundreds of lawsuits from parents and school districts accusing it of enabling addictive behavior and exposing minors to harmful material.

An earlier Reuters investigation found that some of Meta’s existing safety measures were ineffective or inconsistently enforced, while advocacy groups accused Instagram of failing to protect teens from psychological harm.

“We hope this update reassures parents,” Meta said in a blog post. “We know teens may try to avoid these restrictions, which is why we’ll use age prediction technology to ensure appropriate protections even when users misreport their age.”

The new safeguards will roll out in the U.S., UK, Australia, and Canada by year-end and will later expand globally. Meta said similar protections will soon be added to Facebook as regulators tighten oversight of social media and AI systems interacting with minors.

New York City sues tech giants for allegedly fueling youth mental health crisis

New York City has filed a sweeping federal lawsuit against Meta, Google, Snap, TikTok, and ByteDance, accusing them of addicting children to social media and worsening a mental health crisis among young users. The 327-page complaint, lodged in Manhattan federal court, seeks damages for gross negligence and public nuisance, alleging that platforms like Instagram, YouTube, Snapchat, and TikTok were deliberately engineered to exploit the psychology of youth for profit.

The lawsuit claims the companies’ products have contributed to rising rates of depression, sleep deprivation, and chronic absenteeism among minors. According to the city’s data, more than 77% of New York City high school students spend over three hours a day on screens, a figure that rises to 82% among girls.

New York’s health commissioner declared social media a public health hazard earlier this year, citing growing taxpayer burdens to combat mental health challenges in schools. The city also linked compulsive platform use to dangerous behaviors such as “subway surfing,” which has caused at least 16 deaths since 2023.

The case joins over 2,000 similar lawsuits filed nationwide, now consolidated in federal court in Oakland, California. A spokesperson for Google rejected the allegations, saying YouTube is a streaming platform rather than a social network. Other defendants have not yet commented.

The city argues that the companies must be held accountable for the harm caused by their algorithms, which it says have created a costly and deadly youth mental health epidemic.

OpenAI to give content owners control over Sora AI videos, plans revenue-sharing model

OpenAI is rolling out new tools to give content owners greater control over how their intellectual property is used in Sora, its recently launched AI video-generation app, and plans to introduce a revenue-sharing system for creators who opt in.

In a blog post on Friday, CEO Sam Altman said OpenAI will soon provide “more granular control over the generation of characters” within Sora, enabling rights holders such as film and television studios to decide how their characters can appear—or to block them entirely.

The move comes amid intensifying scrutiny of AI-generated content and growing concern across Hollywood and the creative industries about copyright infringement and the unauthorized replication of proprietary characters and likenesses.

Sora, launched this week as a standalone app in the United States and Canada, allows users to generate and share AI-created videos up to 10 seconds long. Its social-media-style interface quickly gained traction, with users producing clips based on both original and copyrighted material.

Altman acknowledged that the app’s rapid popularity—and the sheer volume of video creation—has outpaced expectations, creating a need for clear rules and compensation mechanisms. “We’ll experiment with different approaches,” he wrote, adding that the revenue-sharing model would evolve through “trial and error” as OpenAI tests various systems within Sora before applying them to its broader suite of AI tools.

At least one major studio, Disney, has already opted out of allowing its characters to appear in Sora-generated videos, sources familiar with the matter told Reuters. Other studios are reportedly reviewing whether to participate under OpenAI’s forthcoming licensing framework.

The company’s initiative could mark a turning point in the relationship between AI firms and content owners, shifting from conflict to collaboration—if a viable monetization model can be found.

OpenAI, backed by Microsoft, is expanding into multimodal AI with Sora, placing it in direct competition with Meta’s Vibes and Google’s text-to-video tools as major tech firms race to define the future of synthetic media creation.

Still, the effort to give rights holders control over how their creations are used—and to share revenue from those uses—reflects a broader recognition that AI’s creative power must coexist with creator compensation and consent.