Snapchat’s New AI Chatbot Sparks Concerns Over Privacy and Safety, Particularly Among Teens and Parents

Snapchat’s recent introduction of its My AI chatbot has raised alarms among parents and some users, particularly over the feature’s interaction with younger audiences. Launched last week, My AI is powered by ChatGPT and offers personalized recommendations, answers questions, and converses with users. However, Snapchat’s version differs significantly from ChatGPT by allowing users to customize the chatbot’s appearance and integrate it into their existing conversations with friends, making it feel more personal and potentially blurring the line between human interaction and AI.

Lyndsi Lee, a mother from East Prairie, Missouri, expressed concerns about how her 13-year-old daughter might interact with My AI. “It’s a temporary solution until I know more about it and can set some healthy boundaries,” Lee said, highlighting the difficulty of teaching children how to distinguish between real and artificial interactions, especially when the AI chatbot looks and feels like a human.

Beyond parental concerns, Snapchat users have voiced their displeasure with the chatbot. Many criticize privacy issues, “creepy” conversations, and the inability to remove the feature from their chat feed unless they pay for the premium Snapchat+ subscription. Some users have reported disturbing interactions with the bot, such as misleading responses and the bot denying its role in shared activities, like co-writing a song.

In a letter to Snapchat’s executives, U.S. Senator Michael Bennet raised issues about the chatbot’s role in guiding younger users, particularly its potential to suggest deceptive behavior. This has raised fears about how easily vulnerable teens could be manipulated or misled by AI-powered tools on social media platforms.

While some users have found value in the chatbot, using it for homework help and personal advice, the mixed reactions point to the challenges and risks involved in integrating generative AI into widely used platforms like Snapchat, which is especially popular among teenagers.

Experts are also concerned about the psychological effects of AI on teenagers. Clinical psychologist Alexandra Hamlet warns that chatbots could reinforce negative emotional states, as teens might turn to AI for advice when in distress, further exacerbating their mental health challenges.

As AI tools like Snapchat’s My AI become increasingly integrated into apps popular with young people, experts advise parents to engage in open conversations with their children about how to responsibly use these technologies. Sinead Bovell, founder of WAYE, a startup focused on preparing youth for the future, emphasized that “chatbots are not your friend” and urged parents to educate their children about the risks of sharing personal information with AI.

The rapid advancement of AI technology calls for clearer regulations to ensure user safety and privacy, particularly when young users are involved.

Hong Kong Plans Mass Surveillance Expansion, Raising Concerns Over Mainland-Style Control

The streets of Hong Kong are set to see a significant increase in surveillance as the city’s police plan to install thousands of new cameras. This move has sparked concern among critics who fear the expansion will push Hong Kong closer to the mainland Chinese model of surveillance, where facial recognition and artificial intelligence (AI) play a critical role in maintaining control.

Despite Hong Kong’s reputation as one of the safest major cities in the world, local authorities argue that the new cameras are essential for fighting crime. The plan includes potentially equipping these devices with facial recognition technology and AI tools in the future.

The Hong Kong Police Force has already set an ambitious goal of installing 2,000 new surveillance cameras this year, with the possibility of continuing at a similar rate in the years to come. Security Chief Chris Tang has suggested that AI might be used to help track down suspects, following the lead of other countries that have implemented such technologies. However, the exact number of cameras that will have facial recognition capabilities and the timeline for introducing this technology remain unclear.

Comparisons and Concerns

Tang has pointed to countries like Singapore and the United Kingdom, which have integrated extensive surveillance networks, as examples to justify Hong Kong’s expansion. Singapore, for instance, has 90,000 cameras, while the UK leads with over seven million. Though some Western nations have started using facial recognition technology, these cases have highlighted the need for strict regulations and privacy safeguards—areas where critics believe Hong Kong may fall short.

Hong Kong’s political environment, which has shifted significantly since the 2019 pro-democracy protests, adds a layer of complexity to the debate. The protests were followed by the introduction of a sweeping national security law that has since been used to suppress dissent and imprison activists. Critics argue that introducing facial recognition and other AI-powered surveillance tools in such an environment raises concerns about their use for political repression.

Samantha Hoffman, a nonresident fellow at the National Bureau of Asian Research, notes the stark difference between surveillance in Hong Kong and Western democracies. While countries like the U.S. or UK may have issues with implementing surveillance technology, the system in Hong Kong is fundamentally different due to its authoritarian political context. This makes the city’s potential shift toward mainland China’s surveillance practices particularly troubling.

Surveillance in Hong Kong vs. Mainland China

While Hong Kong currently has about 54,500 public CCTV cameras, or roughly seven cameras per 1,000 people, this is far fewer than the 440 cameras per 1,000 people seen in mainland Chinese cities. Mainland China’s surveillance network is notoriously comprehensive, with facial recognition a part of everyday life, from registering phone numbers to subway gates.

The fear that Hong Kong could follow this model has deepened in recent years. During the 2019 protests, demonstrators took steps to conceal their identities by covering their faces or destroying cameras, as they worried about the encroachment of mainland-style surveillance.

One notable incident saw activists tearing down a “smart” lamp post that authorities claimed was only gathering environmental data. Joshua Wong, a prominent activist now imprisoned, voiced concerns that these devices could be equipped with facial recognition, reflecting broader fears about Beijing’s growing influence over the city.

Looking Ahead: Surveillance Regulation and Public Trust

The Hong Kong Police Force has stated that the new cameras will only monitor public spaces and that footage will be deleted after 31 days. They have also promised to comply with privacy laws, but critics remain skeptical. Steve Tsang, director of the SOAS China Institute, warns that without clear assurances, the new surveillance system could be used to suppress political dissent under the national security law.

Other experts, like Normann Witzleb, an associate professor at the Chinese University of Hong Kong, emphasize the need for a thorough regulatory framework. It remains unclear how authorities will use facial recognition—whether it will scan environments in real time or be used retrospectively to analyze footage in specific cases. Questions also linger over who will control the technology, under what circumstances it will be deployed, and whether it will integrate with other government databases.

As Hong Kong moves forward with its surveillance expansion, experts like Samantha Hoffman argue that the very presence of these cameras could create a chilling effect on public behavior. The perception of constant monitoring may undermine the sense of freedom in a city once known for its semi-autonomy and political liberties.