Articles

AI Browsers Like ChatGPT Atlas and Perplexity Comet Reportedly Able to Circumvent Paywalls

ChatGPT Atlas, Perplexity’s Comet, and several other AI-powered browsers are reportedly able to bypass paywalls and content blockers, raising concerns about the impact on digital publishing. According to a recent report, both Atlas and Comet accessed and reproduced content from multiple paywalled articles when prompted to display the information. If these claims hold up, such capabilities could seriously undermine the subscription-based revenue model that news outlets and premium blogs depend on.

The Columbia Journalism Review highlighted that Atlas and Comet were particularly effective at retrieving content hidden behind paywalls, while other AI browsers, including Edge’s Copilot mode and The Browser Company’s Dia, did not demonstrate the same level of success. Both Atlas and Comet are widely available to users, with Comet offering advanced “agentic actions,” which allow the AI to perform complex tasks autonomously, including interacting with websites to retrieve information.

However, follow-up tests indicate that results may vary. When attempting to replicate the experiment with Comet, the browser reportedly refused to provide content behind the same paywalls. This discrepancy suggests that AI providers might have implemented changes to their underlying models or that the results could depend on specific prompt techniques used in the original tests.

The situation underscores ongoing ethical and legal questions regarding AI and content access. Publishers may need to explore new ways to protect their premium material, while developers of AI browsers face scrutiny over whether their tools are enabling unauthorized access. The debate is likely to intensify as AI becomes increasingly capable of interacting with subscription-based and restricted content online.
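As background to the protection question, the baseline mechanism sites already use to signal what automated agents may fetch is the robots.txt exclusion protocol, which is purely advisory and is exactly what agentic browsers are accused of ignoring. A minimal sketch using Python's standard-library `urllib.robotparser`, with a hypothetical crawler name `ExampleAIBot` (not any real product's user agent):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: the site bars one named AI crawler
# while allowing all other user agents.
robots_txt = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The named AI crawler is disallowed site-wide.
print(parser.can_fetch("ExampleAIBot", "https://example.com/article"))  # False

# Any other agent is allowed.
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

The key limitation, and the crux of the legal disputes above, is that nothing technically enforces this file: a client that simply ignores it can still request every page, which is why publishers are looking beyond advisory signals toward licensing deals and litigation.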

Reddit Sues Perplexity for Allegedly Scraping Data to Train AI Search Engine

Reddit has filed a lawsuit in a New York federal court against artificial intelligence startup Perplexity, accusing it of unlawfully scraping Reddit data to train its AI-based “answer engine.” The complaint also names three other companies — Lithuania-based Oxylabs, Russia-based AWMProxy, and Texas-based SerpApi — alleging that they bypassed Reddit’s data protection systems to extract massive amounts of content.

According to Reddit, Perplexity “desperately needs” the stolen data to strengthen its search capabilities. The platform, home to thousands of user-driven “subreddit” communities, said its content is one of the most frequently cited sources for AI-generated responses. Reddit has legally licensed its data to OpenAI, Google, and other companies, but claims Perplexity acted without authorization.

The lawsuit follows similar cases across the tech industry involving unauthorized use of copyrighted materials to train AI models. Reddit had previously sued Anthropic in June for similar conduct. Perplexity rejected the accusations, calling its methods “principled and responsible.” Meanwhile, Reddit’s chief legal officer Ben Lee accused AI firms of engaging in “industrial-scale data laundering.”

Reddit is seeking financial damages and a court order preventing Perplexity from continuing to use its content.

New Study Finds Major AI Assistants Frequently Misrepresent News Content

A new international study from the European Broadcasting Union (EBU) and the BBC has found that leading AI assistants—including ChatGPT, Copilot, Gemini, and Perplexity—misrepresented or mishandled news content in nearly half their responses. The research, published Wednesday, examined 3,000 AI-generated answers to news-related questions in 14 languages, assessing factual accuracy, sourcing, and the ability to distinguish fact from opinion.

The findings were troubling: 45% of AI responses contained at least one significant factual or interpretive issue, while 81% showed some form of problem, ranging from poor attribution to incorrect information. Roughly one-third of all replies featured serious sourcing errors, such as missing or misleading references. Notably, 72% of Google's Gemini outputs contained significant sourcing flaws, far higher than the rates of under 25% recorded for the other assistants.

Accuracy issues appeared in 20% of total responses, including outdated or false claims. Examples cited include Gemini incorrectly describing legal changes on disposable vapes, and ChatGPT erroneously identifying Pope Francis as still alive months after his death had been widely reported.

The study, involving 22 public-service media organizations across 18 countries, warned that the growing use of AI assistants for news—especially among younger audiences—could threaten public trust. According to the Reuters Institute’s 2025 Digital News Report, 15% of people under 25 now rely on AI assistants for news updates.

“When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation,” said Jean Philip De Tender, EBU’s media director. The report calls for greater accountability and transparency from AI developers to ensure reliable and responsibly sourced information.