Articles

Anthropic Reaches $1.5B Settlement With Authors Over AI Training

Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit from authors who accused the company of using pirated books to train its AI chatbot Claude, according to a filing in San Francisco federal court on Friday. The settlement, which still requires judicial approval, is being described by plaintiffs as the largest copyright recovery in history and the first major resolution of its kind in the AI era.

Under the deal, Anthropic will destroy downloaded copies of more than 7 million pirated books stored in a central library and establish a $1.5 billion fund, equivalent to about $3,000 per work for an estimated 500,000 downloaded books, though the amount could rise if more works are identified. While the settlement ends claims over the copying of works for training, it leaves open potential future lawsuits regarding AI-generated outputs.

The lawsuit, filed last year by authors including Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, alleged that Anthropic—backed by Amazon and Alphabet—unlawfully scraped millions of books from pirate sites to build Claude’s training dataset.

Judge William Alsup previously ruled that Anthropic’s use of the works for model training qualified as fair use, but that storing the pirated material in a central database violated copyright law. A trial scheduled for December could have exposed Anthropic to damages in the hundreds of billions of dollars.

Author advocates hailed the agreement. The Authors Guild’s CEO, Mary Rasenberger, called it “a vital step in acknowledging that AI companies cannot simply steal authors’ creative work to build their AI.”

The case is a watershed moment in the ongoing legal battles between AI developers and copyright holders, with other high-profile cases against OpenAI, Microsoft, and Meta still pending. Courts remain divided on whether training AI on copyrighted content constitutes fair use, ensuring the debate is far from over.

Anthropic Offers Claude AI Chatbot to U.S. Government for $1

Anthropic, the AI startup backed by Amazon.com, announced it will offer its Claude AI model to the U.S. government for just $1. The move makes Claude one of several AI tools available to federal agencies at minimal cost, as startups compete for lucrative government contracts.

This announcement follows the inclusion of Claude, OpenAI’s ChatGPT, and Google’s Gemini on the government’s list of approved AI vendors. CEO Dario Amodei stated, “America’s AI leadership requires that our government institutions have access to the most capable, secure AI tools available.”

OpenAI made a similar offer last week, providing ChatGPT Enterprise to participating federal agencies for $1 per agency for the next year. Anthropic’s initiative highlights the growing competition among AI companies to provide secure and advanced technologies to the U.S. government.

U.S. Agency Approves OpenAI, Google, and Anthropic for Federal AI Vendor List

The U.S. General Services Administration (GSA) has approved OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude as official AI vendors for federal agencies, the agency announced Tuesday. This move supports the Trump administration’s push to expand AI adoption across government sectors.

The approvals come as part of a broader AI blueprint released on July 23, which seeks to ease environmental regulations and increase AI exports to allied countries to help the U.S. maintain its technological edge over China.

With the GSA’s approval, these AI tools will be accessible to federal agencies through a platform that streamlines contracts and usage terms. The agency emphasized that it prioritizes AI models that ensure truthfulness, accuracy, transparency, and freedom from ideological bias.

President Donald Trump has described the AI race as a defining challenge of the 21st century. His administration’s AI plan includes around 90 recommendations focused on promoting U.S. AI software and hardware exports, while rolling back state laws seen as restrictive to AI innovation.

This approach contrasts sharply with the Biden administration’s “high fence” policies, which placed more stringent safeguards on AI use within federal agencies, including monitoring and assessing AI’s impact on the public. Biden also signed an executive order aimed at fostering competition, protecting consumers, and combating misinformation—measures that were later rescinded by Trump.