Articles

EU Tech Companies Agree to Stronger Measures Against Online Hate Speech

Meta’s Facebook, Elon Musk’s X, Google’s YouTube, and other tech giants have agreed to enhance their efforts to combat online hate speech under a revised code of conduct, which will now be incorporated into the European Union’s Digital Services Act (DSA). The update aims to make these platforms more accountable in tackling harmful content.

Key Points:

  • Revised Code of Conduct: Facebook, X, YouTube, and others have committed to improving their approach to addressing illegal hate speech on their platforms under the updated voluntary code of conduct, initially launched in May 2016. This code will now align with the requirements of the EU’s Digital Services Act (DSA), which requires tech companies to take stronger action against harmful and illegal online content.
  • Tech Companies’ Pledge: In addition to enhancing detection mechanisms, companies like Instagram, LinkedIn, TikTok, and Twitch, alongside the bigger players, have agreed to measures such as using automatic detection tools for hate speech and ensuring that at least two-thirds of hate speech notices are reviewed within 24 hours. They will also provide data on how their recommendation systems contribute to the spread of harmful content.
  • Transparency and Oversight: The updated code will also allow public and non-profit entities with expertise in hate speech to monitor how platforms handle hate speech notices. This will increase the transparency and accountability of tech companies, with a focus on issues like race, ethnicity, religion, and gender identity.
  • EU’s Position on Hate Speech: EU tech commissioner Henna Virkkunen emphasized that the European Union has no tolerance for illegal hate speech, whether online or offline. The strengthened code aligns with the DSA, which is pushing for stricter regulations on tech companies to address online harms and ensure that harmful content is swiftly removed.

Brazil Challenges Meta’s Hate Speech Policy Changes as Non-Compliant with Local Law

Brazil’s government expressed “serious concern” on Tuesday over Meta Platforms’ recent changes to its hate speech policy, stating that the modifications do not align with the country’s legal framework. The announcement comes after Meta, which owns Facebook, Instagram, and Threads, reduced restrictions on discussions surrounding sensitive issues such as immigration and gender identity and ended its fact-checking program in the United States.

President Luiz Inácio Lula da Silva had previously criticized Meta’s policy adjustments, calling them “extremely serious.” The Brazilian government has now demanded clarification from the social media giant on its plans. Facebook remains highly influential in Brazil, with approximately 100 million active users, making it one of Meta’s largest markets.

The government did not specify which aspects of Meta’s new policy might violate Brazilian law but warned that the changes could “create fertile ground” for legal breaches, particularly those protecting fundamental rights. Brazil’s legislation prohibits hate speech, including racial slurs and attacks on religious beliefs.

In response, Meta clarified in a letter to the Brazilian government that the recent changes to its fact-checking program were currently limited to the U.S. The company also stated that updates to its community standards primarily affected hate speech policies and were intended to promote greater freedom of expression.

However, Brazil’s Solicitor General’s Office (AGU) criticized Meta’s response, saying that the changes did not adequately comply with Brazil’s legislation or ensure the protection of citizens’ rights. The AGU emphasized that aspects of Meta’s revised hate speech policy, applicable to Brazil, raised “serious concerns.”

Brazil plans to hold a public hearing this week to discuss the implications of Meta’s policy changes with experts. The case recalls a similar instance last year when the Brazilian Supreme Court suspended X’s (formerly Twitter) operations for over a month due to non-compliance with court orders related to hate speech moderation. X’s owner, Elon Musk, initially condemned the court’s actions as censorship but ultimately complied with demands to reinstate operations in the country.

Brazil’s move highlights its commitment to regulating social media platforms and enforcing local laws to protect citizens from harmful content.


US TikTok Users React as ByteDance Signals App Shutdown

Disappointment, confusion, and frustration swept through TikTok’s U.S. user base on Wednesday after reports emerged that ByteDance, the app’s Chinese owner, is planning to shut down the platform for 170 million U.S. users by Sunday. The announcement seemingly marks a concession to U.S. lawmakers who imposed a deadline for ByteDance to divest its U.S. assets or face a ban, leading many users to express resignation after months of uncertainty.

TikTok users, who have built careers and substantial followings on the platform, had hoped that the app could escape the U.S. divest-or-ban law enacted in 2024. However, as the January 19 deadline looms, some users are beginning to accept the impending shutdown. Joonsuk Shin, a 28-year-old research manager and content creator from New York, expressed his dismay, saying, “TikTok signaling that white flag is very discouraging and very sad.”

In response, some users have called for boycotts of rival platforms like Meta’s Instagram and Facebook, as well as X (formerly Twitter), predicting that advertisers who once relied heavily on TikTok will shift to those services. One user posted, “We all need to delete our Facebook, X, and Instagram accounts that same day.”

The shutdown follows U.S. lawmakers’ concerns about national security risks, with fears that China could potentially access or demand U.S. user data from TikTok. While the company has repeatedly denied sharing user data with the Chinese government, a ban has become imminent. TikTok and ByteDance have been fighting the law in court, arguing that the ban violates users’ First Amendment rights to free speech.

If the U.S. Supreme Court does not intervene, users trying to access TikTok on Sunday will be redirected to a shutdown website, confirming the app’s termination. Content creators like Amber Goode, a 28-year-old true crime influencer from Colorado Springs, expressed frustration over the prolonged uncertainty, remarking, “Why are they playing with us? I feel like the government is avoiding giving us the answer they already know.”

Although some users are now preparing for the worst, others remain hopeful. There were reports earlier this week that a 270-day extension of the shutdown deadline might be in the works, but this prospect was fading as the deadline approached. As TikTok’s fate hangs in the balance, many users have already begun migrating to alternatives, including China-based apps like RedNote, often using translation tools to navigate the platforms.

For others, the situation is deeply personal. One TikTok user shared a heartfelt post, saying, “My daughter passed away in 2023. I’ve been saving all her videos to my phone. I can’t lose those.” The impending shutdown is forcing many creators to scramble to preserve their content and maintain connections with their followers.

Ishpal Sidhu, a former attorney turned full-time creator with nearly 400,000 followers, voiced her concern about losing her platform and income, saying, “It’s pretty sad because I thought we were making progress.” Sidhu wondered if she would continue receiving payments for her content once the app shuts down.

Meanwhile, users outside the U.S. have reacted more bluntly, with some expressing relief that the algorithm-driven chaos of American social media might now recede from their feeds. New Zealand content creator Luke Hopewell jokingly declared, “Say goodbye to the Americans.”