Articles

New Study Finds Major AI Assistants Frequently Misrepresent News Content

A new international study from the European Broadcasting Union (EBU) and the BBC has found that leading AI assistants—including ChatGPT, Copilot, Gemini, and Perplexity—misrepresented or mishandled news content in nearly half their responses. The research, published Wednesday, examined 3,000 AI-generated answers to news-related questions in 14 languages, assessing factual accuracy, sourcing, and the ability to distinguish fact from opinion.

The findings were troubling: 45% of AI responses contained at least one significant factual or interpretive issue, while 81% showed some form of problem, ranging from poor attribution to incorrect information. Roughly one-third of all replies featured serious sourcing errors, such as missing or misleading references. Notably, 72% of Google’s Gemini outputs contained significant sourcing flaws—far higher than the other assistants, all of which stayed below 25%.

Accuracy issues appeared in 20% of total responses, including outdated or false claims. Examples cited include Gemini incorrectly describing changes to the law on disposable vapes, and ChatGPT erroneously identifying Pope Francis as still alive months after his death.

The study, involving 22 public-service media organizations across 18 countries, warned that the growing use of AI assistants for news—especially among younger audiences—could threaten public trust. According to the Reuters Institute’s 2025 Digital News Report, 15% of people under 25 now rely on AI assistants for news updates.

“When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation,” said Jean Philip De Tender, EBU’s media director. The report calls for greater accountability and transparency from AI developers to ensure reliable and responsibly sourced information.

UN Report Calls for Stronger Measures to Detect and Combat AI-Driven Deepfakes

The United Nations’ International Telecommunication Union (ITU) has urged companies to adopt advanced tools to detect and eliminate misinformation and deepfake content, highlighting the growing threats these pose to elections and financial security. The call was made in a report released on Friday during the ITU’s “AI for Good Summit” in Geneva.

Deepfakes—AI-generated images, videos, and audio that convincingly mimic real people—are increasingly used to spread false information, the ITU warned. To tackle this, the report recommended robust standards for combating manipulated multimedia and urged platforms like social media sites to implement digital verification tools to authenticate content before sharing.

Bilel Jamoussi, head of the Study Groups Department in the ITU’s Standardization Bureau, noted that public trust in social media has dropped sharply because users struggle to distinguish genuine content from fakes. Generative AI’s ability to fabricate realistic multimedia makes combating deepfakes a particularly pressing challenge.

Leonard Rosenthol of Adobe, the digital editing software maker that has been working on deepfakes since 2019, emphasized the need for content provenance—information about the origin of digital media—to help users judge trustworthiness. “When scrolling feeds, users want to know: ‘Can I trust this image or video?’” he said.

Dr. Farzaneh Badiei, founder of Digital Medusa, a digital governance research firm, stressed the need for a coordinated global response, noting the lack of a single international body focused on detecting manipulated media. She warned that fragmented standards could make harmful deepfakes more effective.

The ITU is developing standards for watermarking videos—which constitute 80% of internet traffic—to embed provenance data such as creator identity and timestamps.

Tomaz Levak, founder of Swiss firm Umanitek, called on the private sector to proactively adopt safety measures and educate users. “AI will become more powerful and faster… We must upskill people to avoid them becoming victims,” he said.

Italy Probes Chinese AI Firm DeepSeek Over Misinformation Risks

Italy’s antitrust and consumer protection agency AGCM announced Monday it has launched a formal investigation into Chinese artificial intelligence startup DeepSeek, alleging the company failed to clearly warn users about the potential for its chatbot to generate false or misleading information.

The regulator stated that DeepSeek’s platform does not provide “sufficiently clear, immediate and intelligible” alerts about the risk of AI-generated “hallucinations” — a term used in the AI field to describe instances when models produce inaccurate or completely fabricated information in response to user prompts.

AGCM is focusing on the consumer-protection aspect, emphasizing the risk that users might unknowingly rely on erroneous AI outputs because of insufficient warnings or transparency.

DeepSeek did not immediately respond to requests for comment on the investigation.

This marks the second run-in DeepSeek has had with Italian authorities this year. In February, the country’s data protection regulator ordered the startup to suspend access to its chatbot within Italy after the company failed to resolve concerns related to its privacy policy.

The probe highlights increasing regulatory scrutiny over generative AI models in Europe, particularly regarding transparency, data protection, and consumer rights.