Articles

Motion Picture Association Demands Meta Drop “PG-13” Label from Instagram Teen Filters

The Motion Picture Association (MPA) has issued a cease-and-desist letter to Meta, accusing the social media giant of misleadingly using the film industry’s “PG-13” rating in its new content filters for teen users on Instagram. The group said Meta’s claim that its filters are modeled on the movie rating system is “literally false and highly misleading.”

Meta announced last month that it would restrict what users under 18 see on Instagram by applying filters “inspired by the PG-13 rating system.” The MPA, however, says the comparison is inappropriate, emphasizing that its rating process involves a curated, consensus-driven assessment by human reviewers — not automated algorithms.

In an October 28 letter to Meta Chief Legal Officer Jennifer Newstead, the MPA demanded that the company immediately stop using the “PG-13” mark and disassociate its Teen Accounts and AI moderation tools from the film rating system, warning that unauthorized use could undermine public trust in movie ratings. The association asked Meta to resolve the issue by November 3.

A Meta spokesperson said the company had no intention of implying a partnership with the MPA and hopes to “work constructively” with the association to address concerns. Meta said the filter initiative was designed to give parents greater control over what teenagers see on its platforms.

The dispute comes as Meta faces growing scrutiny from regulators and advocacy groups over the safety of its younger users. The company has also faced lawsuits alleging that its social platforms expose minors to harmful content.

Snapchat’s New AI Chatbot Sparks Concerns Over Privacy and Safety, Particularly Among Teens and Parents

Snapchat’s recent introduction of its My AI chatbot has raised alarms among parents and some users, particularly due to the feature’s interaction with younger audiences. Launched last week, My AI is powered by ChatGPT and offers personalized recommendations, answers to questions, and the ability to converse. However, Snapchat’s version differs significantly from ChatGPT by allowing users to customize the chatbot’s appearance and integrate it into their existing conversations with friends, making it feel more personal and potentially blurring the line between human interaction and AI.

Lyndsi Lee, a mother from East Prairie, Missouri, expressed concerns about how her 13-year-old daughter might interact with My AI. “It’s a temporary solution until I know more about it and can set some healthy boundaries,” Lee said, highlighting the difficulty of teaching children how to distinguish between real and artificial interactions, especially when the AI chatbot looks and feels like a human.

Beyond parental concerns, Snapchat users have voiced their displeasure with the chatbot. Many criticize privacy issues, “creepy” conversations, and the inability to remove the feature from their chat feed unless they pay for the premium Snapchat+ subscription. Some users have reported troubling interactions with the bot, such as misleading responses and its failure to acknowledge its own contributions in collaborative activities like songwriting.

In a letter to Snapchat’s executives, U.S. Senator Michael Bennet raised issues about the chatbot’s role in guiding younger users, particularly its potential to suggest deceptive behavior. This has raised fears about how easily vulnerable teens could be manipulated or misled by AI-powered tools on social media platforms.

While some users have found value in the chatbot, using it for homework help and personal advice, the mixed reactions point to the challenges and risks involved in integrating generative AI into widely used platforms like Snapchat, which is especially popular among teenagers.

Experts are also concerned about the psychological effects of AI on teenagers. Clinical psychologist Alexandra Hamlet warns that chatbots could reinforce negative emotional states, as teens might turn to AI for advice when in distress, further exacerbating their mental health challenges.

As AI tools like Snapchat’s My AI become increasingly integrated into apps popular with young people, experts advise parents to engage in open conversations with their children about how to responsibly use these technologies. Sinead Bovell, founder of WAYE, a startup focused on preparing youth for the future, emphasized that “chatbots are not your friend” and urged parents to educate their children about the risks of sharing personal information with AI.

The rapid advancement of AI technology calls for clearer regulations to ensure user safety and privacy, particularly when young users are involved.


Chroming: A Dangerous Trend Threatening Youth

The practice of “chroming,” a form of inhalant abuse, has emerged as a concerning trend among youth, akin to older practices like huffing. It involves inhaling fumes from everyday products like markers, aerosol sprays, and metallic paint to experience a high. Experts like Dr. Anthony Pizon from the University of Pittsburgh warn that this method is incredibly risky, sometimes leaving users with metallic residue on their face due to the paint.

While inhalant abuse declined in past decades, experimentation among teenagers is once again on the rise, according to data from the U.S. Substance Abuse and Mental Health Services Administration. In 2023, around 564,000 adolescents in the U.S. engaged in inhalant abuse, with experts noting that underlying mental health issues such as anxiety and depression often fuel the trend. The rise of social media also plays a role, with platforms like TikTok attempting to curb content that promotes inhalant misuse.

Chroming has immediate effects similar to alcohol intoxication, such as dizziness, slurred speech, and euphoria. However, the consequences of continued use can be severe, including kidney and liver damage, neurological issues, and even death. The unpredictability of these outcomes is alarming—some users suffer fatal consequences from a single session.

Prevention strategies can be challenging since most of the substances used are common household items. Experts recommend open, empathetic conversations with children, discussing the dangers of chroming without judgment. Parents should also consider securing potentially dangerous products and monitoring social media activity to reduce exposure to content that normalizes substance abuse.