Articles

Meta’s Oversight Board Criticizes Company for Policy Overhaul Decisions

Meta Platforms’ Oversight Board has issued a strong rebuke to the company over a policy overhaul implemented in January, which reduced fact-checking efforts and relaxed restrictions on discussions surrounding sensitive issues like immigration and gender identity. The board, which operates independently but is funded by Meta, expressed concerns that the changes were made too quickly and without adequate transparency or human rights due diligence. These modifications, announced just before the start of U.S. President Donald Trump’s second term, have raised alarms about their potential to worsen harmful content on Meta’s platforms.

The Oversight Board criticized Meta for making the policy changes “hastily” and without following the usual procedures. The board emphasized the need for the company to assess the “potential adverse effects” these changes could have, particularly in terms of their impact on social discourse and human rights. This public reprimand highlights a growing tension between Meta’s leadership, particularly CEO Mark Zuckerberg, and the Oversight Board, which has been increasingly scrutinizing the company’s decisions. Zuckerberg, who has been working to repair his relationship with Trump, is under pressure as he scales back measures aimed at limiting the spread of hate speech, misinformation, and violence on his platforms.

As part of its ongoing evaluations, the Oversight Board recently issued its first rulings on individual content cases since the January policy changes. In some instances, the board upheld Meta’s decisions to leave up controversial content, such as posts discussing transgender people’s access to bathrooms. In other cases, however, the board ruled that Meta must remove posts containing racist slurs, underscoring the complex balance the company must strike between protecting free expression and addressing harmful content.

Meta responded to the board’s rulings with a statement that highlighted its approval of decisions that supported free speech by leaving up or restoring certain content. However, the company did not directly address the board’s rulings that required content removal. This ongoing debate reflects the broader challenges that Meta faces in managing content moderation, especially as the company navigates the delicate intersection of freedom of expression and the need to protect users from harmful and discriminatory speech.

Meta to Launch “Community Notes” in the U.S. Using X’s Algorithm

Meta will begin testing its new Community Notes feature in the U.S. starting March 18, utilizing technology from Elon Musk’s X, the company announced on Thursday. This move comes two months after Meta scrapped its fact-checking program under pressure from conservatives, signaling a shift from traditional fact-checking to a crowd-sourced model.

The feature will allow users to write and rate notes flagging false or misleading content across Instagram, Facebook, and Threads, effectively replacing the third-party fact-checkers previously responsible for content moderation. So far, 200,000 U.S. users have signed up as potential contributors to the new system.

Meta’s switch to the Community Notes program represents a significant overhaul in its approach to content management. The company has been keen to improve its relationship with the Trump administration, which has criticized social media platforms for silencing conservative voices. President Donald Trump praised Meta’s decision in January, acknowledging the shift toward a more inclusive and less biased content moderation process.

To power Community Notes, Meta will adopt X’s open-source algorithm, originally developed for X’s Birdwatch feature (later renamed Community Notes). The system allows users to contribute notes and vote on their accuracy. Meta’s version will limit notes to 500 characters and initially support six languages: English, Spanish, Chinese, Vietnamese, French, and Portuguese. Notes will remain anonymous and will be published only if users with differing viewpoints agree that the note provides helpful context.
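The publication rule described above can be sketched in code. This is a minimal illustration of a bridging-style agreement check, loosely modeled on the open-source Community Notes approach; the viewpoint clusters, the 0.7 agreement threshold, and the function names are illustrative assumptions, not Meta's or X's actual parameters.

```python
from collections import defaultdict

MAX_NOTE_LENGTH = 500       # Meta's stated character limit for notes
AGREEMENT_THRESHOLD = 0.7   # hypothetical per-cluster helpfulness bar


def should_publish(note_text, ratings):
    """Decide whether a note is published.

    Mirrors the 'differing viewpoints must agree' rule: raters from
    every viewpoint cluster must find the note helpful.

    ratings: list of (viewpoint_cluster, is_helpful) tuples.
    """
    # Enforce the 500-character limit on note text.
    if len(note_text) > MAX_NOTE_LENGTH:
        return False

    # Group helpfulness votes by viewpoint cluster.
    by_cluster = defaultdict(list)
    for cluster, is_helpful in ratings:
        by_cluster[cluster].append(is_helpful)

    # Require at least two distinct clusters, so agreement is
    # genuinely cross-viewpoint rather than one-sided.
    if len(by_cluster) < 2:
        return False

    # Every cluster must clear the helpfulness threshold.
    return all(
        sum(votes) / len(votes) >= AGREEMENT_THRESHOLD
        for votes in by_cluster.values()
    )


# Example: raters from two differing clusters both find the note helpful.
ratings = [("left", True), ("left", True), ("right", True), ("right", True)]
print(should_publish("Adds missing context with a source link.", ratings))
```

The key design point is that raw vote counts are not enough: a note rated helpful only by one cluster, however many votes it gets, is never published.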

Contributors must be over 18 and include a supporting link when posting notes. Meta has emphasized that this system will be less biased than the previous third-party fact-checking method. Once the new system is in place, third-party fact-check labels will no longer appear on U.S. content.

Meta, which has more than 3 billion global users, continues to collaborate with nearly 100 certified fact-checking organizations covering more than 60 languages, according to the company.

Meta Scraps U.S. Fact-Checking Program Ahead of Trump Administration’s Return

Meta Platforms (META.O) has announced the discontinuation of its fact-checking program in the U.S. and a reduction in its restrictions on controversial topics such as immigration and gender identity. This move, which represents a significant shift in Meta’s approach to political content, comes as the company adjusts to the expected return of President-elect Donald Trump to office.

The decision is seen as a response to conservative criticism, and CEO Mark Zuckerberg has emphasized the importance of returning to the company’s roots in promoting free expression. Meta will instead adopt a “community notes” system, which allows users to contribute to content moderation, similar to the model used by Elon Musk’s X platform. In addition, Meta will scale back its proactive efforts to detect and remove rule-breaking content, focusing its automated systems on high-severity violations like terrorism, child exploitation, and fraud.

Meta’s overhaul of its content moderation approach includes the relocation of teams responsible for writing and reviewing content policies from California to Texas and other U.S. locations. These changes are a result of more than a year of discussions within the company, although the specific details of the relocation remain unclear.

The decision to end the fact-checking program, which was launched in 2016, has taken its partner organizations by surprise. Critics argue that the shift may facilitate the spread of disinformation, with some claiming it is politically motivated. Meta’s independent Oversight Board expressed support for the move, while fact-checkers and other journalistic organizations expressed concerns about the impact on credibility.

While these changes are initially limited to the U.S. market, Meta has not yet indicated whether similar adjustments will be made in other regions like the European Union, which has stricter tech regulations under its Digital Services Act.