Articles

Why You’re More Likely to Solve Your Problems on a Therapist’s Sofa Than on Social Media

In an era when mental health issues are increasingly acknowledged, many individuals are turning to platforms like TikTok for guidance rather than seeking professional help. A 2024 KFF Health Misinformation Tracking Poll found that 66% of adult TikTok users have encountered mental health content on the app.

Dr. Thomas Milam, a psychiatrist and chief medical officer at Iris Telehealth, noted that many TikTok users seek mental health advice through the platform due to the shortage of mental health providers and the difficulty in accessing affordable care. “The majority of people that are accessing TikTok are going to at some point seek some type of mental health guidance,” he explained.

While the rise of mental health discussions on social media can be seen as a positive development, it poses significant risks. Lindsay Liben, a psychotherapist based in New York City, cautioned against diagnosing problems based on social media content. Many posts are created by individuals without proper mental health training, leading to the spread of misleading or inaccurate information. For instance, a 2023 study published in the Journal of Autism and Developmental Disorders found that 41% of TikTok videos related to autism were inaccurate, and a 2022 study in The Canadian Journal of Psychiatry reported that 52% of ADHD-related videos contained misleading claims.

Despite TikTok’s efforts to combat misinformation by working with independent partners and providing a Safety Center for reliable health information, diagnosing mental health conditions through social media remains problematic. Symptoms such as low energy and fatigue can indicate various issues, from anxiety to sleep deprivation, complicating self-diagnosis efforts.

Moreover, parents seeking solutions for their children’s sleep issues might overlook deeper problems, like bullying, as highlighted by Liben. Misinterpreting normal feelings of worry or sadness as mental health disorders can also lead to confusion and unnecessary anxiety.

A further concern is that some creators on social media promote products like sleep aids and vitamins alongside their mental health content, often oversimplifying complex issues. Milam emphasized that quick fixes are rarely effective for serious conditions like anxiety or depression, which require nuanced approaches. When solutions fail, it can exacerbate feelings of inadequacy among individuals trying to improve their mental health.

For those looking for credible mental health resources online, experts recommend seeking content from licensed professionals, such as doctors or licensed therapists, who are transparent about their qualifications. It’s essential to verify the educational backgrounds and training of content creators and to rely on sources that reference high-quality research.

Milam suggests that individuals who suspect they may have mental health concerns should first reach out to their primary care physicians, who can offer guidance and referrals to mental health specialists. Resources from the American Psychiatric Association and the American Psychological Association can also provide reliable information.

Ultimately, while social media can facilitate discussions around mental health, experts agree that addressing these issues effectively requires more than a quick video. The most reliable answers are often found on the traditional therapist’s sofa, where professional support can lead to meaningful solutions.


TikTok Reduces Workforce Amid Transition to AI-Powered Content Moderation

TikTok, the popular social media platform owned by ByteDance, has begun a major reduction in its workforce, signaling a shift towards AI-driven content moderation. The layoffs, which number in the hundreds globally, come as the company seeks to leverage artificial intelligence to improve its content review processes, a move seen as more cost-effective and efficient than relying solely on human moderators. A significant portion of these layoffs reportedly impact employees in Malaysia, where TikTok has a large content moderation team.

Initial reports suggested that over 700 staff members in Malaysia were affected by the layoffs. However, ByteDance later clarified that the number was fewer than 500, downplaying the extent of the workforce reduction. This decision highlights a growing trend among social media companies, which are increasingly turning to AI to handle the complex and large-scale task of moderating user-generated content.

Employees impacted by the layoffs, primarily content moderators, were reportedly notified of their job termination via email. Most of these individuals were responsible for monitoring TikTok’s content for policy compliance, such as identifying and removing harmful or inappropriate videos. Sources close to the matter indicated that the email notifications were sent late on Wednesday, leaving many staff members uncertain about their next steps.

This transition to AI moderation reflects TikTok’s commitment to more efficient and potentially less biased content review. However, it also raises questions about the accuracy of AI in distinguishing between acceptable and inappropriate content, particularly in sensitive or nuanced cases. As TikTok continues to expand globally, the company’s reliance on AI could redefine content moderation standards across the industry.

ByteDance Fires Intern for Sabotaging AI Training Project

ByteDance, the parent company of TikTok, has terminated an intern for “maliciously interfering” with the training of one of its artificial intelligence (AI) models. The incident has garnered significant attention on social media over the weekend, prompting ByteDance to clarify the details surrounding the event.

The intern, who worked on the advertising technology team, reportedly had no experience with the AI Lab. In a statement, ByteDance emphasized that the intern’s actions did not significantly disrupt its commercial online operations, including the company’s large language AI models.

ByteDance refuted claims that the incident led to over $10 million (£7.7 million) in damages by disrupting an AI training system reliant on thousands of powerful graphics processing units (GPUs). The company characterized such reports as containing “exaggerations and inaccuracies.”

In addition to firing the intern in August, ByteDance has notified the individual’s university and relevant industry bodies about the situation. The Chinese technology giant is known for its popular social media applications, including TikTok and its Chinese counterpart Douyin, and is recognized as a leader in algorithm development.

With a significant investment in AI, ByteDance utilizes the technology for various applications, including its Doubao chatbot, which has emerged as the most popular AI chatbot in China, as well as a text-to-video tool named Jimeng.