Conservative Activist Robby Starbuck Sues Google Over Defamatory AI ‘Hallucinations’

Conservative activist Robby Starbuck has filed a lawsuit against Google, accusing the company’s artificial intelligence systems of generating and spreading false and defamatory claims about him, including labeling him a “child rapist,” “serial sexual abuser,” and “shooter.”

The complaint, filed in Delaware state court, alleges that Google’s Bard and Gemma chatbots produced fabricated statements that reached millions of users, citing non-existent sources and failing to correct errors after being notified. Starbuck is seeking at least $15 million in damages.

A Google spokesperson, Jose Castaneda, acknowledged that the allegations stem from AI “hallucinations” — a known issue with large language models (LLMs) where systems generate false or misleading information. “We disclose this issue and work hard to minimize it,” Castaneda said. “But as everyone knows, if you’re creative enough, you can prompt a chatbot to say something misleading.”

Starbuck, a vocal critic of diversity, equity, and inclusion (DEI) policies, said the false claims have caused reputational damage and personal safety risks. “No one — regardless of political beliefs — should ever experience this,” he said. “We must demand transparent, unbiased AI that cannot be weaponized to harm people.”

The lawsuit details how, in December 2023, Bard falsely linked Starbuck to white nationalist Richard Spencer using fabricated citations. Later, Google’s Gemma chatbot allegedly repeated similar falsehoods, accusing Starbuck of spousal abuse, participation in the January 6 riots, and even appearing in Jeffrey Epstein’s files.

Starbuck said these false claims have led to harassment and threats, citing the recent assassination of conservative activist Charlie Kirk as evidence of escalating risks for public figures.

This is not Starbuck’s first legal battle with Big Tech. He previously sued Meta Platforms over similar AI-generated falsehoods earlier this year; the two parties settled in August, and Starbuck has since advised Meta on AI ethics and accuracy.

The case highlights growing concerns over AI defamation risks and the legal responsibilities of tech companies deploying generative models capable of producing false, reputationally damaging statements.

Anthropic CEO Dario Amodei Claims AI Models Experience Fewer Hallucinations Than Humans: Report

Anthropic CEO Dario Amodei recently stated that artificial intelligence (AI) models tend to hallucinate less frequently than humans do. He made the remark during the company’s first-ever Code With Claude event, held on Thursday, where the San Francisco-based AI firm unveiled two new versions of its Claude 4 models alongside several upgraded features, such as enhanced memory and better tool integration. Amodei also addressed skepticism surrounding AI development, suggesting that despite critics searching for obstacles, no significant barriers to AI progress have emerged so far.

During a press briefing reported by TechCrunch, Amodei elaborated on the nature of hallucinations in AI systems, explaining that these errors do not prevent AI from achieving artificial general intelligence (AGI). When asked about hallucinations, he said, “It really depends how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways.” This perspective highlights that while AI does make mistakes, the frequency might be lower than commonly assumed, though the mistakes can sometimes be unexpected.

Amodei also pointed out that errors are a common part of human activity, with TV presenters, politicians, and professionals making mistakes regularly. The presence of errors in AI responses, therefore, does not necessarily undermine its overall intelligence. Nonetheless, he acknowledged that AI’s tendency to present false information with confidence remains a challenge. A recent incident highlighted this when Anthropic’s lawyer had to apologize in court after the company’s Claude chatbot generated an incorrect citation in a legal filing. The mishap occurred in the ongoing lawsuit brought against Anthropic by music publishers over alleged copyright violations involving hundreds of song lyrics.

Looking ahead, Amodei remains optimistic about the future of AI. In an essay published in October 2024, he suggested that Anthropic could achieve artificial general intelligence as soon as next year. AGI represents a breakthrough form of AI capable of understanding, learning, and performing a broad spectrum of tasks autonomously, without human assistance. If realized, this development would mark a significant milestone in AI research and its practical applications.