Articles

Apple Faces Lawsuit Over Alleged Use of Copyrighted Books to Train Apple Intelligence

Apple has been sued in a California federal court by two neuroscientists who claim the company used pirated versions of their books to train its new artificial intelligence system, Apple Intelligence.

Professors Susana Martinez-Conde and Stephen Macknik of SUNY Downstate Health Sciences University in Brooklyn filed a proposed class-action lawsuit on Thursday, accusing Apple of relying on “shadow libraries” — online repositories of illegally copied books — to build its AI training datasets.

The lawsuit alleges that Apple used thousands of copyrighted works, including the professors’ books Champions of Illusion and Sleights of Mind, without permission. According to the complaint, Apple’s use of such data contributed to a massive surge in its market value the day after Apple Intelligence was unveiled, which the filing calls “the single most lucrative day in the history of the company.”

The case adds Apple to a growing list of major tech firms — including OpenAI, Microsoft, and Meta — facing lawsuits from authors, musicians, and media organizations over the unauthorized use of copyrighted content in AI training. In August, AI firm Anthropic settled a similar case for $1.5 billion.

Apple has not yet commented on the lawsuit. Apple Intelligence, introduced earlier this year, is the company’s suite of AI-powered features for iPhone, iPad, and Mac devices.

OpenAI and Anthropic may use investor funds to settle AI copyright lawsuits – FT

OpenAI and Anthropic are reportedly considering using investor funds to help cover potential multibillion-dollar settlements linked to ongoing AI copyright lawsuits, according to the Financial Times. Several lawsuits filed by authors, publishers, and media companies accuse major tech firms — including OpenAI, Microsoft, and Meta — of using copyrighted materials without permission to train their AI models.

The report said OpenAI has arranged insurance coverage of up to $300 million through Aon for emerging AI-related risks, though some sources claimed the actual figure is “significantly lower.” Experts said such coverage is still far short of the amount needed to offset massive legal liabilities.

Aon’s head of cyber risk, Kevin Kalinich, told the FT that the insurance industry currently lacks “enough capacity” to adequately protect AI model providers. As a result, OpenAI has discussed self-insurance options, including setting up a captive fund to ringfence investor capital against potential future claims.

Anthropic, meanwhile, is also using internal funds to prepare for possible settlements. Last month, a U.S. federal judge preliminarily approved a $1.5 billion class-action settlement involving authors’ copyright claims against Anthropic.

Neither company has publicly commented on the report, and Reuters could not independently verify the details. The cases highlight the growing legal and financial challenges facing leading AI developers as governments and creators push back on data use practices.

Getty Images Defends Copyright Lawsuit Against Stability AI, Says It Won’t Harm AI Industry

Getty Images’ landmark UK copyright lawsuit against Stability AI kicked off at London’s High Court on Monday, with Getty firmly rejecting Stability AI’s claim that the case threatens the broader generative AI sector.

The Seattle-based visual content company alleges that Stability AI unlawfully scraped millions of Getty’s images to train its Stable Diffusion system, which generates images from text prompts. Getty has also filed a parallel lawsuit against Stability AI in the United States.

Stability AI, backed by hundreds of millions in funding and a recent investment from advertising giant WPP, denies infringing Getty’s rights. A spokesperson emphasized that the case concerns “technological innovation and freedom of ideas,” arguing that the company’s tools enable artists to build on collective human knowledge — a core aspect of fair use and freedom of expression.

Going further, Stability AI’s lawyer described Getty’s lawsuit as “an overt threat” to both Stability AI’s business and the wider AI industry.

Getty’s legal team countered that their case centers on protecting intellectual property, not hindering AI development. Lawyer Lindsay Lane told the court, “It is not a battle between creatives and technology… copyright and database rights are critical to AI’s advancement. The issue arises when AI companies use protected works without payment.”

This case is among several global lawsuits addressing the use of copyrighted material to train AI models since the rise of generative AI tools like ChatGPT. The creative sector is actively debating the legal and ethical implications, with notable artists calling for stronger protections.

Legal experts say the outcome will be pivotal in defining copyright’s role in AI, potentially influencing future government policy. Rebecca Newman, a UK lawyer not involved in the case, said, “We’re in uncharted legal territory… this case will set important boundaries on copyright monopolies in the AI era.” Similarly, lawyer Cerys Wyn Davies noted the ruling could significantly impact market practices and the UK’s appeal as a hub for AI development.