Articles

French Prosecutors Investigate Musk’s X Over Alleged Algorithmic Bias

French prosecutors have opened an investigation into Elon Musk’s X (formerly Twitter) over allegations of algorithmic bias. The inquiry comes just days before the upcoming AI summit in Paris, which will be attended by prominent global leaders, including U.S. Vice President JD Vance and Indian Prime Minister Narendra Modi, as well as executives from Alphabet and Microsoft.

The investigation began after a lawmaker, Eric Bothorel, raised concerns that X’s algorithms were distorting the functioning of an automated data processing system. Bothorel wrote to the Paris prosecutor’s office on January 12, prompting the J3 cybercrime unit to launch technical checks. Bothorel also posted about the matter on X, urging further scrutiny of the platform.

X has not responded to requests for comment on the matter. The investigation underscores mounting global scrutiny of Musk’s platform, which has been criticized for potential foreign interference, particularly due to Musk’s personal support for right-wing causes in countries like Germany and the UK. X has previously been involved in legal battles over misinformation, notably being blocked in Brazil last year for not adhering to Supreme Court orders regarding the spread of false information.

DeepSeek’s Chatbot Scores Low in NewsGuard Audit, Trails Western Rivals

DeepSeek, a Chinese AI startup, saw its chatbot underperform in a recent NewsGuard audit, achieving just 17% accuracy in delivering news and information. The audit compared DeepSeek’s chatbot with Western AI models, including OpenAI’s ChatGPT and Google’s Gemini, ranking it tenth out of eleven. DeepSeek’s chatbot was found to repeat false claims 30% of the time and to provide vague or unhelpful answers 53% of the time in response to news-related queries, for an overall fail rate of 83%. In contrast, Western competitors had an average fail rate of 62%.

This performance raises questions about the quality of DeepSeek’s AI technology, which the company has touted as being on par with or superior to OpenAI’s models at a fraction of the cost. Despite its low accuracy score, DeepSeek’s chatbot quickly became the most downloaded app on Apple’s App Store, raising doubts about the United States’ dominance in AI and contributing to a market downturn that wiped roughly $1 trillion off U.S. tech stocks.

NewsGuard used 300 identical prompts to assess DeepSeek and its Western counterparts, including 30 based on false claims circulating online. The topics of these prompts included incidents like the killing of UnitedHealthcare executive Brian Thompson and the downing of Azerbaijan Airlines flight 8243. DeepSeek’s chatbot also reiterated the Chinese government’s stance on certain issues, even when those topics were unrelated to China, such as in the case of the Azerbaijan Airlines crash.

Despite its poor accuracy, some analysts suggest the significance of DeepSeek’s breakthrough lies in its affordability, with D.A. Davidson’s Gil Luria pointing out that it can answer questions at 1/30th the cost of comparable models. However, like other AI models, DeepSeek proved particularly susceptible to repeating false claims, especially when prompted in ways designed to create or spread misinformation.
Brazil Judge Demands Big Tech Compliance with Local Laws to Continue Operations

Brazilian Supreme Court judge Alexandre de Moraes stated on Wednesday that tech firms must comply with local laws to remain operational in the country, highlighting the government’s firm stance on regulating online platforms. While he did not name any specific companies, his remarks followed a recent announcement by Meta to scale back its U.S. fact-checking program and reduce restrictions on discussions about sensitive issues like immigration and gender identity.

Moraes, speaking at an event marking the second anniversary of the January 8, 2023 riots in Brasília, emphasized that the court would not allow companies to profit from hate speech. “In Brazil, (the companies) will only continue to operate if they respect Brazilian legislation, regardless of the rant of Big Tech managers,” he asserted.

This statement comes after Brazil’s Supreme Court had temporarily suspended the social media platform X (formerly Twitter) for over a month last year for failing to comply with court orders, including those related to moderating hate speech. Judge Moraes issued the initial suspension order, which was later unanimously upheld by a five-member panel. In response, X’s owner, Elon Musk, denounced the action as censorship but ultimately complied by blocking certain accounts to resume operations in Brazil.

In a separate development, Brazilian prosecutors have ordered Meta to clarify whether the changes to its U.S. fact-checking program will also apply in Brazil. Meta, which did not comment on the matter through its Brazil office, was given a 30-day deadline to respond. The order is part of an ongoing investigation into how social media platforms address misinformation and online violence in Brazil.