Articles

Getty Images Defends Copyright Lawsuit Against Stability AI, Says It Won’t Harm AI Industry

Getty Images’ landmark UK copyright lawsuit against Stability AI kicked off at London’s High Court on Monday, with Getty firmly rejecting Stability AI’s claim that the case threatens the broader generative AI sector.

The Seattle-based visual content company alleges that Stability AI unlawfully scraped millions of Getty’s images to train its Stable Diffusion system, which generates images from text prompts. Getty has also filed a parallel lawsuit against Stability AI in the United States.

Stability AI, backed by hundreds of millions in funding and a recent investment from advertising giant WPP, denies infringing Getty’s rights. A spokesperson emphasized that the case concerns “technological innovation and freedom of ideas,” arguing that its tools enable artists to build on collective human knowledge, a core aspect of fair use and freedom of expression.

However, Stability AI’s lawyer described Getty’s lawsuit as “an overt threat” to both Stability AI’s business and the wider AI industry.

Getty’s legal team countered that their case centers on protecting intellectual property, not hindering AI development. Lawyer Lindsay Lane told the court, “It is not a battle between creatives and technology… copyright and database rights are critical to AI’s advancement. The issue arises when AI companies use protected works without payment.”

This case is among several global lawsuits addressing the use of copyrighted material to train AI models since the rise of generative AI tools like ChatGPT. The creative sector is actively debating the legal and ethical implications, with notable artists calling for stronger protections.

Legal experts say the outcome will be pivotal in defining copyright’s role in AI, potentially influencing future government policy. Rebecca Newman, a UK lawyer not involved in the case, said, “We’re in uncharted legal territory… this case will set important boundaries on copyright monopolies in the AI era.” Similarly, Cerys Wyn Davies noted the ruling could significantly impact market practices and the UK’s appeal for AI development.

UK Judge Warns Lawyers Against Using AI to Cite Fake Cases, Threatens Sanctions

London’s High Court issued a stern warning on Friday that lawyers who rely on artificial intelligence to cite fabricated or non-existent legal cases risk being held in contempt of court or facing criminal charges. The caution comes amid growing concerns about generative AI tools, such as ChatGPT, leading legal professionals astray.

Judge Victoria Sharp condemned lawyers in two recent cases who used AI-generated arguments containing fake case law. She urged legal regulators and industry leaders to take stronger actions to ensure lawyers understand their ethical duties regarding AI use.

“There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused,” Judge Sharp said in her written ruling. She stressed the need for practical, effective measures from those responsible for legal regulation and leadership within the profession.

Since generative AI tools became widely accessible over the past two years, lawyers globally have faced scrutiny for referencing false authorities in court. Sharp emphasized that lawyers who cite non-existent cases breach their duty not to mislead courts, which can amount to contempt of court.

In the most severe instances, deliberately submitting false information with intent to disrupt justice could constitute the criminal offence of perverting the course of justice, she warned.

While legal regulators and the judiciary have issued guidance on AI use by lawyers, Judge Sharp said guidance alone is insufficient to curb misuse and called for stronger enforcement and leadership.

Anthropic CEO Criticizes Proposed 10-Year Ban on State AI Regulation as ‘Too Blunt’

Dario Amodei, CEO of Anthropic, argued in a New York Times opinion piece that a Republican proposal to block states from regulating artificial intelligence for 10 years is an overly blunt approach. Instead, he called for a coordinated federal effort by the White House and Congress to establish transparency standards for AI companies.

Amodei warned that a decade-long moratorium on state regulations would leave a regulatory gap with “no ability for states to act, and no national policy as a backstop,” especially given how rapidly AI technology is advancing.

The proposed ban, included in President Donald Trump’s tax cut bill, seeks to preempt recent AI laws passed in several states. However, it has faced pushback from a bipartisan coalition of attorneys general who support state-level oversight of high-risk AI applications.

Amodei recommended a federal transparency standard requiring AI developers to implement rigorous testing and evaluation policies, disclose risk mitigation plans, and publicly share how they ensure the safety of their models before release.

He noted that Anthropic, backed by Amazon, already publishes such transparency reports, and that competitors like OpenAI and Google DeepMind have adopted similar practices. Amodei suggested legislation might be necessary to preserve transparency as AI models grow more powerful and corporate incentives to disclose risks wane.