Anthropic Reaches $1.5B Settlement With Authors Over AI Training

Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit from authors who accused the company of using pirated books to train its AI chatbot Claude, according to a filing in San Francisco federal court on Friday. The settlement, which still requires judicial approval, is being described by plaintiffs as the largest copyright recovery in history and the first major resolution of its kind in the AI era.

Under the deal, Anthropic will destroy downloaded copies of more than 7 million pirated books stored in a central library and establish a $1.5 billion fund—roughly $3,000 per work across an estimated 500,000 covered titles, though the total could rise if more qualifying books are identified. While the settlement resolves claims over the copying of works for training, it leaves open potential future lawsuits over AI-generated outputs.

The lawsuit, filed last year by authors including Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, alleged that Anthropic—backed by Amazon and Alphabet—unlawfully scraped millions of books from pirate sites to build Claude’s training dataset.

Judge William Alsup previously ruled that Anthropic's use of the works for model training qualified as fair use, but that storing the pirated material in a central library violated copyright law. A trial scheduled for December could have exposed Anthropic to damages in the hundreds of billions of dollars.

Author advocates hailed the agreement. The Authors Guild’s CEO, Mary Rasenberger, called it “a vital step in acknowledging that AI companies cannot simply steal authors’ creative work to build their AI.”

The case marks a watershed moment in the ongoing legal battles between AI developers and copyright holders, with other high-profile cases against OpenAI, Microsoft, and Meta still pending. Courts remain divided on whether training AI on copyrighted content constitutes fair use, ensuring the debate is far from over.