Google and Character.AI Must Face Lawsuit Over Teen Suicide, U.S. Judge Rules
Google and AI startup Character.AI must face a lawsuit brought by a Florida mother who alleges that a chatbot interaction led to her 14-year-old son’s suicide, a U.S. federal judge ruled on Wednesday.
U.S. District Judge Anne Conway rejected the companies’ efforts to dismiss the case, stating they had failed to prove at this early stage that free speech protections shield them from liability. The decision allows one of the first U.S. lawsuits targeting an AI company for alleged psychological harm to move forward.
“This historic decision sets a new precedent for legal accountability across the AI and tech ecosystem,” said Meetali Jain, attorney for plaintiff Megan Garcia.
Background: The Case
-
Garcia’s son, Sewell Setzer, died by suicide in February 2024.
-
The lawsuit alleges that he had become deeply obsessed with an AI chatbot created by Character.AI, which represented itself as a real person, a licensed therapist, and an adult romantic partner.
-
The complaint cites one chilling interaction where Setzer told a chatbot imitating “Daenerys Targaryen” from Game of Thrones that he would “come home right now,” shortly before taking his own life.
Legal and Corporate Response
-
Character.AI argued its chatbots were protected by the First Amendment, and that it had built-in safety features to block conversations around self-harm.
-
Google, which was also named in the suit, argued it should not be held liable, saying it “did not create, design, or manage” the Character.AI app. A spokesperson emphasized that Google and Character.AI are entirely separate entities.
-
However, the court noted that Google had licensed Character.AI’s technology and re-hired the startup’s founders, a fact the plaintiffs cite in arguing Google’s involvement as a co-creator.
Judge Conway dismissed the free speech argument, saying the companies failed to explain “why words strung together by an LLM (large language model) are speech” under constitutional protections. She also denied Google’s request to be cleared of aiding in any alleged misconduct by Character.AI.
What This Means
This ruling opens the door for a landmark case examining:
-
The legal accountability of AI firms for harm caused by chatbot interactions
-
The limits of free speech when applied to AI-generated content
-
Tech platform liability for emerging technologies not fully governed by existing law
With rapidly expanding deployment of LLM-powered chatbots, particularly among youth, this lawsuit is likely to set important legal precedents for AI safety, responsibility, and regulatory oversight in the U.S. and beyond.

