Articles

India Rebukes X Over “Tom, Dick, and Harry” Remark in Ongoing Court Battle on Content Takedowns

A legal clash between Elon Musk’s X (formerly Twitter) and the Indian government intensified on Tuesday after X’s lawyer made a controversial remark suggesting that “every Tom, Dick, and Harry” government official could issue takedown orders on online content. The statement drew a sharp and immediate rebuke from India’s Solicitor General Tushar Mehta, escalating a long-standing standoff over digital content regulation.

The remark came during a hearing at the Karnataka High Court, where X is challenging a government-run website that it alleges serves as a “censorship portal.” The Indian government, however, defends the portal as a tool for swiftly notifying social media platforms of legal obligations under content moderation laws.

X’s lawyer, K.G. Raghavan, cited a recent example where the Indian Railways ordered the takedown of a video showing a car being driven on a railway track—content X considered newsworthy. “This is the danger… if every Tom, Dick, and Harry officer is authorised,” he argued.

Solicitor General Mehta strongly objected, stating, “Officers are not Tom, Dick, or Harry… they are statutory functionaries.” He further defended India’s regulatory approach, saying, “No social media intermediary can expect completely unregulated functioning.”

The Indian Information Technology Ministry and X did not respond to Reuters' requests for comment following the courtroom exchange.

India has become a strategically important market for Musk’s expanding empire, particularly with upcoming plans to launch Starlink and Tesla in the country. However, X’s friction with Prime Minister Narendra Modi’s administration over content moderation continues to cast a shadow over those ambitions.

The roots of the conflict trace back to 2021, when X refused to comply with Indian orders to block specific tweets. Although it eventually yielded to the demands, the platform has continued to contest the legality of those directives in Indian courts.

Tuesday’s court exchange underscores the ongoing tension between tech giants and sovereign governments over who has the final say in regulating online content—and how far that power should extend.

Elon Musk’s X Sues New York Over Social Media Hate Speech Disclosure Law

Elon Musk’s social media company, X Corp, filed a lawsuit on Tuesday challenging the constitutionality of New York’s Stop Hiding Hate Act, which requires social media platforms to publicly disclose how they monitor and manage hate speech, extremism, disinformation, harassment, and foreign political interference.

X argues the law violates the First Amendment and state constitutional rights by forcing the company to reveal “highly sensitive and controversial speech” that New York officials might find objectionable, potentially exposing the company to lawsuits and heavy fines. The law imposes civil penalties of up to $15,000 per violation per day.

The lawsuit, filed in Manhattan federal court, states that deciding what speech is acceptable is a complex issue that “engenders considerable debate among reasonable people,” and that regulating this is not a role for government authorities.

X cited a letter from the law’s sponsors, state Senator Brad Hoylman-Sigal and Assemblymember Grace Lee, accusing Musk and X of having a “disturbing record” on content moderation that allegedly threatens democratic foundations.

New York Attorney General Letitia James, who enforces the law, is the named defendant. Her office did not immediately comment.

Since acquiring Twitter in October 2022 for $44 billion, Musk has promoted himself as a free speech absolutist, significantly reducing content moderation on the platform, which was rebranded as X.

New York’s law, signed in December by Democratic Governor Kathy Hochul and drafted with input from the Anti-Defamation League, requires platforms to disclose their moderation efforts and report progress in combating harmful content.

The law mirrors a similar 2023 California law, whose enforcement was partially blocked by a federal appeals court last September over free speech concerns. Notably, California agreed in February to suspend enforcement of disclosure requirements after reaching a settlement with X.

Legislators Hoylman-Sigal and Lee expressed confidence that the court will uphold New York’s law, emphasizing the necessity of transparency given Musk’s resistance.

Case Reference: X Corp v. James, U.S. District Court, Southern District of New York, No. 25-05068.

Google and Character.AI Must Face Lawsuit Over Teen Suicide, U.S. Judge Rules

Google and AI startup Character.AI must face a lawsuit brought by a Florida mother who alleges that a chatbot interaction led to her 14-year-old son’s suicide, a U.S. federal judge ruled on Wednesday.

U.S. District Judge Anne Conway rejected the companies’ efforts to dismiss the case, stating they had failed to prove at this early stage that free speech protections shield them from liability. The decision allows one of the first U.S. lawsuits targeting an AI company for alleged psychological harm to move forward.

“This historic decision sets a new precedent for legal accountability across the AI and tech ecosystem,” said Meetali Jain, attorney for plaintiff Megan Garcia.

Background: The Case

  • Garcia’s son, Sewell Setzer, died by suicide in February 2024.

  • The lawsuit alleges that he had become deeply obsessed with an AI chatbot created by Character.AI, which presented itself as a real person, a licensed therapist, and an adult romantic partner.

  • The complaint cites one chilling interaction where Setzer told a chatbot imitating “Daenerys Targaryen” from Game of Thrones that he would “come home right now,” shortly before taking his own life.

Legal and Corporate Response

  • Character.AI argued its chatbots were protected by the First Amendment, and that it had built-in safety features to block conversations around self-harm.

  • Google, which was also named in the suit, argued it should not be held liable, saying it “did not create, design, or manage” the Character.AI app. A spokesperson emphasized that Google and Character.AI are entirely separate entities.

  • However, the court noted that Google had licensed Character.AI’s technology and rehired the startup’s founders, facts the plaintiffs cite to argue that Google was effectively a co-creator of the technology.

Judge Conway dismissed the free speech argument, saying the companies failed to explain “why words strung together by an LLM (large language model) are speech” under constitutional protections. She also denied Google’s request to be cleared of aiding in any alleged misconduct by Character.AI.

What This Means

This ruling opens the door for a landmark case examining:

  • The legal accountability of AI firms for harm caused by chatbot interactions

  • The limits of free speech when applied to AI-generated content

  • Tech platform liability for emerging technologies not fully governed by existing law

With rapidly expanding deployment of LLM-powered chatbots, particularly among youth, this lawsuit is likely to set important legal precedents for AI safety, responsibility, and regulatory oversight in the U.S. and beyond.