Anthropic CEO Dario Amodei Claims AI Models Experience Fewer Hallucinations Than Humans: Report

Anthropic CEO Dario Amodei recently stated that artificial intelligence (AI) models tend to hallucinate less frequently than humans do. This remark was made during the company’s first-ever Code With Claude event, held on Thursday. At this event, the San Francisco-based AI firm unveiled two new versions of its Claude 4 models, alongside several upgraded features such as enhanced memory and better tool integration. Amodei also commented on the skepticism surrounding AI development, suggesting that despite critics searching for obstacles, no significant barriers to AI progress have emerged so far.

During a press briefing reported by TechCrunch, Amodei elaborated on the nature of hallucinations in AI systems, arguing that these errors are not a barrier to achieving artificial general intelligence (AGI). When asked about hallucinations, he said, “It really depends how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways.” In other words, while AI does make mistakes, their frequency may be lower than commonly assumed, even if the mistakes themselves can be unexpected.

Amodei also pointed out that errors are a common part of human activity, with TV presenters, politicians, and professionals making mistakes regularly. The presence of errors in AI responses, therefore, does not necessarily undermine its overall intelligence. Nonetheless, he acknowledged that AI's tendency to present false information with confidence remains a challenge. A recent incident highlighted this when Anthropic’s lawyer had to apologise in court after the company’s Claude chatbot generated an incorrect citation in a legal filing. The mishap occurred during the ongoing lawsuit brought against Anthropic by music publishers over alleged copyright violations involving hundreds of song lyrics.

Looking ahead, Amodei remains optimistic about the future of AI. In an essay published in October 2024, he suggested that artificial general intelligence could arrive as soon as 2026. AGI refers to a breakthrough form of AI capable of understanding, learning, and performing a broad spectrum of tasks autonomously, without human assistance. If realised, it would mark a significant milestone in AI research and its practical applications.