Articles

Block Wins Dismissal of Shareholder Lawsuit Over 2021 Cash App Breach

Block (XYZ.N), the fintech company led by Jack Dorsey, has defeated a shareholder lawsuit tied to a 2021 Cash App data breach that exposed information from about 8.2 million users.

The Case

  • Shareholders accused Block of:

    • Inflating its stock price by failing to disclose weak data security before the breach.

    • Delaying disclosure until April 2022, nearly four months after the incident.

    • Misleading Afterpay shareholders ahead of its $29 billion acquisition of the BNPL firm in January 2022.

Court’s Ruling

  • U.S. District Judge Margaret Garnett in Manhattan dismissed the case.

  • She ruled there was no evidence Block intended to defraud investors.

  • General statements about data security risks were not guarantees of system safety.

  • Shareholders also failed to prove:

    • A unique link between alleged misstatements and the Afterpay deal.

    • That Block executives had a specific motive or benefit from the alleged omissions.

Context

  • Block has faced regulatory pressure over Cash App:

  • $80M settlement with 48 U.S. state regulators (Jan 2025).

  • $40M settlement with New York (Apr 2025).

  • Despite these issues, Cash App processed $283B in inflows in 2024 and had 57M monthly active users by year-end.

What’s Next

  • The case (In re Block Inc Securities Litigation, No. 22-08636) is now dismissed, though investors could still pursue an appeal.

  • For Block, the ruling removes a major legal overhang as it continues to scale Cash App and integrate Afterpay.

OpenAI Appeals Court Order on Data Preservation in NYT Copyright Lawsuit

OpenAI has appealed a recent court order requiring it to indefinitely preserve ChatGPT output data in an ongoing copyright lawsuit filed by The New York Times (NYT). The company argues that the order conflicts with its obligations to protect user privacy.

Last month, the court ordered OpenAI to preserve and segregate all output log data, after the NYT requested this as part of the discovery process. In response, OpenAI filed a motion on June 3 to vacate the data preservation order, according to a court filing.

OpenAI CEO Sam Altman publicly criticized the order on X, stating, “We will fight any demand that compromises our users’ privacy; this is a core principle.” He added that the NYT’s request was “inappropriate” and “sets a bad precedent.”

The lawsuit, originally filed in 2023, accuses OpenAI and its partner Microsoft of using millions of NYT articles without permission to train their language models, including the one powering ChatGPT. The Times alleges that this constitutes copyright infringement.

U.S. District Judge Sidney Stein previously ruled that the Times had made a plausible case that OpenAI and Microsoft may have induced users to infringe on its copyrights. In an earlier opinion, the judge allowed the case to proceed, citing numerous and widely publicized instances where ChatGPT reproduced substantial portions of Times content.

While the NYT declined to comment on OpenAI’s appeal, the case remains one of the highest-profile legal challenges facing generative AI companies over training data use and copyright infringement claims.

Google and Character.AI Must Face Lawsuit Over Teen Suicide, U.S. Judge Rules

Google and AI startup Character.AI must face a lawsuit brought by a Florida mother who alleges that a chatbot interaction led to her 14-year-old son’s suicide, a U.S. federal judge ruled on Wednesday.

U.S. District Judge Anne Conway rejected the companies’ efforts to dismiss the case, stating they had failed to prove at this early stage that free speech protections shield them from liability. The decision allows one of the first U.S. lawsuits targeting an AI company for alleged psychological harm to move forward.

“This historic decision sets a new precedent for legal accountability across the AI and tech ecosystem,” said Meetali Jain, attorney for plaintiff Megan Garcia.

Background: The Case

  • Garcia’s son, Sewell Setzer, died by suicide in February 2024.

  • The lawsuit alleges that he had become deeply obsessed with an AI chatbot created by Character.AI, which represented itself as a real person, a licensed therapist, and an adult romantic partner.

  • The complaint cites one chilling interaction where Setzer told a chatbot imitating “Daenerys Targaryen” from Game of Thrones that he would “come home right now,” shortly before taking his own life.

Legal and Corporate Response

  • Character.AI argued its chatbots were protected by the First Amendment, and that it had built-in safety features to block conversations around self-harm.

  • Google, which was also named in the suit, argued it should not be held liable, saying it “did not create, design, or manage” the Character.AI app. A spokesperson emphasized that Google and Character.AI are entirely separate entities.

  • However, the court noted that Google had licensed Character.AI’s technology and re-hired the startup’s founders, facts the plaintiffs cite in arguing that Google was effectively a co-creator of the technology.

Judge Conway dismissed the free speech argument, saying the companies failed to explain “why words strung together by an LLM (large language model) are speech” under constitutional protections. She also denied Google’s request to be cleared of aiding in any alleged misconduct by Character.AI.

What This Means

This ruling opens the door for a landmark case examining:

  • The legal accountability of AI firms for harm caused by chatbot interactions

  • The limits of free speech when applied to AI-generated content

  • Tech platform liability for emerging technologies not fully governed by existing law

With the rapid deployment of LLM-powered chatbots, particularly among young users, this lawsuit is likely to set important legal precedents for AI safety, responsibility, and regulatory oversight in the U.S. and beyond.