Articles

Apple Hit With Lawsuit Over Use of Books in AI Training

Apple was sued Friday in federal court in Northern California by authors who accuse the company of illegally using copyrighted books to train its “OpenELM” large language models. The proposed class action, filed by writers Grady Hendrix and Jennifer Roberson, claims Apple copied protected works without consent, credit, or compensation.

“Apple has not attempted to pay these authors for their contributions to this potentially lucrative venture,” the lawsuit alleges. Neither Apple nor the plaintiffs’ lawyers immediately commented.

The case adds Apple to the growing list of tech giants—Microsoft, Meta, and OpenAI among them—facing litigation over whether training AI on copyrighted material constitutes infringement or fair use. On the same day, Anthropic agreed to a $1.5 billion settlement with authors who accused it of training its Claude chatbot on pirated books, a deal hailed as the largest copyright recovery in history.

According to the lawsuit, Apple’s models were trained on a known dataset of pirated books, allegedly including works by Hendrix and Roberson. The case seeks damages and legal recognition that Apple must compensate authors when their intellectual property is used to build AI systems.

The dispute underscores the escalating clash between AI developers and creators, as courts weigh how copyright law applies to massive datasets powering generative AI. With multiple cases now moving forward in U.S. courts, the outcome could reshape both the AI industry and protections for authors in the digital era.

Anthropic Reaches $1.5B Settlement With Authors Over AI Training

Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit from authors who accused the company of using pirated books to train its AI chatbot Claude, according to a filing in San Francisco federal court on Friday. The settlement, which still requires judicial approval, is being described by plaintiffs as the largest copyright recovery in history and the first major resolution of its kind in the AI era.

Under the deal, Anthropic will destroy downloaded copies of more than 7 million pirated books stored in a central library and establish a $1.5 billion fund, equivalent to roughly $3,000 for each of about 500,000 covered works, though the amount could rise if more books are identified. While the settlement ends claims over the copying of works for training, it leaves open potential future lawsuits regarding AI-generated outputs.

The lawsuit, filed last year by authors including Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, alleged that Anthropic—backed by Amazon and Alphabet—unlawfully scraped millions of books from pirate sites to build Claude’s training dataset.

Judge William Alsup previously ruled that Anthropic’s use of the works for model training qualified as fair use, but storing the pirated material in a central database violated copyright law. A trial scheduled for December could have exposed Anthropic to damages in the hundreds of billions of dollars.

Author advocates hailed the agreement. The Authors Guild’s CEO, Mary Rasenberger, called it “a vital step in acknowledging that AI companies cannot simply steal authors’ creative work to build their AI.”

The case is a watershed moment in the ongoing legal battles between AI developers and copyright holders, with other high-profile cases against OpenAI, Microsoft, and Meta still pending. Courts remain divided on whether training AI on copyrighted content constitutes fair use, ensuring the debate is far from over.

OpenAI’s Cash Burn Projected to Hit $115B by 2029 Amid Chip, Data Center Push

OpenAI has revised its financial outlook sharply upward, projecting it will burn through $115 billion by 2029, according to The Information. The new figure is about $80 billion higher than its earlier estimate, reflecting the surging costs of powering ChatGPT and other AI models.

The report says OpenAI expects to lose over $8 billion in 2025 alone, roughly $1.5 billion more than forecast earlier this year. The company anticipates that annual burn will balloon to $17 billion in 2026, rising to $35 billion in 2027 and $45 billion in 2028.

To rein in costs, OpenAI is pursuing vertical integration, developing its own AI server chips and data center infrastructure. Its first in-house chip, being developed in partnership with Broadcom, is expected in 2026 and will be used internally. On the infrastructure side, OpenAI has struck major agreements, including:

  • A 4.5 GW data center expansion with Oracle announced in July.

  • The Stargate project, a planned $500 billion, 10 GW buildout backed by SoftBank.

  • Expanded computing capacity through Google Cloud.

The staggering burn rate underscores the immense capital intensity of generative AI, where costs for cloud computing, GPUs, and electricity are skyrocketing. At the same time, it highlights OpenAI’s strategy to reduce reliance on external providers like Nvidia and Amazon Web Services by building a proprietary AI stack—from chips to data centers.