Behind the Screens: The Dark Reality of Online Content Moderation

In recent months, the BBC has been exploring a hidden world: one where the worst of the internet’s content, including violent and illegal material, is filtered by human moderators. These individuals work tirelessly, reviewing and removing distressing content flagged by users or by AI systems. Their role is critical in today’s digital landscape, where tech giants face mounting pressure to keep their platforms safe.

While automated tools have improved, human moderators still handle the final screening of content on platforms like Instagram, Facebook, and TikTok. Many are hired by third-party companies and work around the world, often in East African countries such as Kenya. The BBC’s investigative series, The Moderators, interviewed former moderators who had quit because of the traumatic nature of the work. Their stories reveal a stark reality. “I personally was moderating…horrific and traumatizing videos,” recalls Mojez, a former TikTok moderator in Nairobi, who saw his job as absorbing that harm on behalf of others: “Let my mental health take the punch so that general users can continue going about their activities on the platform.”

Many ex-moderators describe the experience as deeply traumatizing, with lasting effects on their mental health and personal lives. Some are now pursuing legal claims against their former employers, arguing that the work caused long-lasting psychological harm. In one notable 2020 case, Facebook (now Meta) agreed to a $52 million settlement to compensate US-based moderators who suffered mental health issues from similar work. These “keepers of souls,” as some call themselves, often struggle with sleeplessness, panic, and difficulty interacting with loved ones after being exposed to disturbing material.

One might expect moderators to advocate for this work to be fully automated, but many expressed pride in their roles. David, a former moderator involved in training AI models for ChatGPT, compared himself to a first responder, believing he played a critical part in protecting the online community. However, the growing role of AI in content moderation could threaten those very jobs: AI-based tools like those developed by OpenAI have reportedly shown around 90% accuracy in identifying harmful content, and some believe AI will eventually take on a larger share of the work.
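To make that reported automation concrete, here is a minimal, hedged sketch of how a single piece of text might be pre-screened with OpenAI’s public moderation endpoint before a human ever sees it; the model name, the helper function, and the simple flag-then-escalate logic are illustrative assumptions, not details drawn from the reporting.

    # Minimal sketch: pre-screening text with OpenAI's public moderation endpoint.
    # Assumes the openai Python package (v1.x) and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    def screen_text(text: str) -> bool:
        """Return True if the moderation model flags the text as potentially harmful."""
        response = client.moderations.create(
            model="omni-moderation-latest",  # assumed model name, for illustration only
            input=text,
        )
        result = response.results[0]
        # result.categories holds per-category booleans (violence, harassment, ...);
        # result.category_scores holds the model's confidence for each category.
        return result.flagged

    if screen_text("example post awaiting review"):
        print("Escalate to a human moderator for final review.")

Even in this simple form, the division of labour described above is visible: the model makes the first pass, and flagged material is still escalated to a person for the final call.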

Yet, there are reservations. Experts like Dr. Paul Reilly from the University of Glasgow warn that AI is still too simplistic for nuanced moderation and may infringe on free speech by over-blocking content. Although technology firms like TikTok, OpenAI, and Meta provide resources for moderator welfare, including counseling services and customizable review tools, the emotional toll remains significant.

Ultimately, human moderators provide an irreplaceable perspective, but as AI tools advance, the question remains: how can we ensure the well-being of those protecting us from the internet’s darkest content?