New Study Finds Major AI Assistants Frequently Misrepresent News Content

A new international study from the European Broadcasting Union (EBU) and the BBC has found that leading AI assistants, including ChatGPT, Copilot, Gemini, and Perplexity, misrepresented or mishandled news content in nearly half of their responses. The research, published Wednesday, examined 3,000 AI-generated answers to news-related questions in 14 languages, assessing factual accuracy, sourcing, and the ability to distinguish fact from opinion.

The findings were troubling: 45% of AI responses contained at least one significant factual or interpretive issue, while 81% showed some form of problem, ranging from poor attribution to incorrect information. Roughly one-third of all replies featured serious sourcing errors, such as missing or misleading references. Notably, 72% of Google’s Gemini outputs contained significant sourcing flaws, far higher than the sub-25% rates of the other assistants.

Accuracy issues appeared in 20% of total responses, including outdated or false claims. Examples cited include Gemini incorrectly describing changes to a law on disposable vapes and ChatGPT identifying Pope Francis as still alive months after his death.

The study, involving 22 public-service media organizations across 18 countries, warned that the growing use of AI assistants for news—especially among younger audiences—could threaten public trust. According to the Reuters Institute’s 2025 Digital News Report, 15% of people under 25 now rely on AI assistants for news updates.

“When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation,” said Jean Philip De Tender, EBU’s media director. The report calls for greater accountability and transparency from AI developers to ensure reliable and responsibly sourced information.