Conservative Activist Robby Starbuck Sues Google Over Defamatory AI ‘Hallucinations’

Conservative activist Robby Starbuck has filed a lawsuit against Google, accusing the company’s artificial intelligence systems of generating and spreading false and defamatory claims about him, including labeling him a “child rapist,” “serial sexual abuser,” and “shooter.”

The complaint, filed in Delaware state court, alleges that Google’s Bard and Gemma chatbots produced fabricated statements that reached millions of users, citing non-existent sources, and that Google failed to correct the errors after being notified. Starbuck is seeking at least $15 million in damages.

A Google spokesperson, Jose Castaneda, acknowledged that the allegations stem from AI “hallucinations” — a known issue with large language models (LLMs) where systems generate false or misleading information. “We disclose this issue and work hard to minimize it,” Castaneda said. “But as everyone knows, if you’re creative enough, you can prompt a chatbot to say something misleading.”

Starbuck, a vocal critic of diversity, equity, and inclusion (DEI) policies, said the false claims have caused reputational damage and personal safety risks. “No one — regardless of political beliefs — should ever experience this,” he said. “We must demand transparent, unbiased AI that cannot be weaponized to harm people.”

The lawsuit details how, in December 2023, Bard falsely linked Starbuck to white nationalist Richard Spencer using fabricated citations. Later, Google’s Gemma chatbot allegedly repeated similar falsehoods, accusing Starbuck of spousal abuse, participation in the January 6 riots, and even appearing in Jeffrey Epstein’s files.

Starbuck said these false claims have led to harassment and threats, citing the recent assassination of conservative activist Charlie Kirk as evidence of escalating risks for public figures.

This is not Starbuck’s first legal battle with Big Tech. Earlier this year, he sued Meta Platforms over similar AI-generated falsehoods; the two parties settled in August, and Starbuck has since advised Meta on AI ethics and accuracy.

The case highlights growing concern over AI defamation risk and the legal responsibility of tech companies that deploy generative models capable of producing false, reputationally damaging statements.