Conservative Activist Robby Starbuck Sues Google Over Defamatory AI ‘Hallucinations’

Conservative activist Robby Starbuck has filed a lawsuit against Google, accusing the company’s artificial intelligence systems of generating and spreading false and defamatory claims about him, including labeling him a “child rapist,” “serial sexual abuser,” and “shooter.”

The complaint, filed in Delaware state court, alleges that Google’s Bard and Gemma chatbots produced fabricated statements that reached millions of users, citing non-existent sources and failing to correct errors after being notified. Starbuck is seeking at least $15 million in damages.

A Google spokesperson, Jose Castaneda, acknowledged that the allegations stem from AI “hallucinations” — a known issue with large language models (LLMs) where systems generate false or misleading information. “We disclose this issue and work hard to minimize it,” Castaneda said. “But as everyone knows, if you’re creative enough, you can prompt a chatbot to say something misleading.”

Starbuck, a vocal critic of diversity, equity, and inclusion (DEI) policies, said the false claims have caused reputational damage and personal safety risks. “No one — regardless of political beliefs — should ever experience this,” he said. “We must demand transparent, unbiased AI that cannot be weaponized to harm people.”

The lawsuit details how, in December 2023, Bard falsely linked Starbuck to white nationalist Richard Spencer using fabricated citations. Later, Google’s Gemma chatbot allegedly repeated similar falsehoods, accusing Starbuck of spousal abuse, of participating in the January 6 Capitol riot, and even of appearing in Jeffrey Epstein’s files.

Starbuck said these false claims have led to harassment and threats, citing the recent assassination of conservative activist Charlie Kirk as evidence of escalating risks for public figures.

This is not Starbuck’s first legal battle with Big Tech. Earlier this year he sued Meta Platforms over similar AI-generated falsehoods; the two parties settled in August, and Starbuck has since advised Meta on AI ethics and accuracy.

The case highlights growing concerns over AI defamation risks and the legal responsibilities of tech companies deploying generative models capable of producing false, reputationally damaging statements.

Belgium Considers Power Limits for Data Centres Amid AI-Driven Energy Surge

Belgium’s electricity grid operator Elia is weighing plans to introduce energy allocation limits for data centres, as a wave of AI-fuelled demand threatens to strain the country’s power network and crowd out other industries.

Under the proposal, Elia would place data centres in a separate consumption category, giving them a fixed share of grid capacity. The move aims to prevent high-energy facilities from monopolising the grid while still allowing flexible connections that could be curtailed during peak demand or congestion.

The proposal comes as the global race to build AI data infrastructure drives electricity demand to unprecedented levels. In Belgium alone, requests from data centre operators have surged ninefold since 2022, Elia told Reuters. Capacity already reserved for 2034 tops 16 terawatt-hours, more than double the 8 TWh projected in national grid development plans.

“These volumes were not anticipated when Belgium’s grid scenarios were designed,” Elia said, warning that speculative projects risk blocking capacity for other sectors if left unchecked.

The issue will be addressed in Belgium’s next federal grid development plan (2028–2038), Energy Minister Mathieu Bihet told parliament this week. “I will pay particular attention to this during the plan’s approval,” he said.

Belgium’s debate reflects a broader European challenge: balancing energy-intensive AI operations with industrial and environmental goals. Data centres—essential for AI model training and cloud computing—are rapidly becoming one of Europe’s largest sources of new electricity demand.

Tech giants such as Google are already ramping up investment. The U.S. company plans to spend €5 billion ($5.8 billion) expanding its Belgian data centre campuses as part of its global AI strategy.

If approved, Elia’s proposal could make Belgium one of the first European nations to formally cap grid access for AI infrastructure—signalling a shift toward tighter energy governance in the digital age.

Australia Orders AI Chatbot Firms to Reveal Child Protection Measures

Australia’s internet regulator has ordered four AI chatbot companies to disclose what steps they are taking to protect children from harmful and sexual content, in the country’s latest move to tighten oversight of artificial intelligence.

The eSafety Commissioner said it sent legal notices to Character Technologies — the creator of the celebrity chatbot platform Character.ai — along with Glimpse.AI, Chai Research, and Chub AI, demanding detailed reports on how they prevent child sexual exploitation, exposure to pornography, and content promoting suicide or eating disorders.

“There can be a darker side to some of these services,” said Commissioner Julie Inman Grant, warning that many chatbots can engage in sexually explicit conversations with minors and even encourage self-harm or disordered eating.

Under Australia’s Online Safety Act, the regulator can compel companies to disclose their internal safety protocols or face fines of up to A$825,000 ($536,000) per day.

The crackdown follows growing concern about AI companions forming emotional or sexual bonds with teenagers. Some Australian schools have reported students as young as 13 spending more than five hours daily interacting with chatbots, sometimes in explicit exchanges.

The most prominent firm targeted, Character Technologies, faces a lawsuit in the U.S. after a mother alleged her 14-year-old son died by suicide following interactions with one of Character.ai’s companions. The company has denied wrongdoing, saying it has added pop-up safety warnings and links to suicide prevention hotlines for users who express thoughts of self-harm.

The eSafety office said it did not include OpenAI in this round of inquiries, as ChatGPT is covered under a separate industry code that takes effect in March 2026.

Australia, already known for its strict digital regulation, will introduce new rules in December requiring social media firms to block or deactivate accounts of users under 16 or risk penalties of up to A$49.5 million.

The move positions Australia at the forefront of AI child safety regulation, as governments worldwide race to address the unintended dangers of increasingly lifelike AI companions.