Articles

Anthropic CEO Criticizes Proposed 10-Year Ban on State AI Regulation as ‘Too Blunt’

Dario Amodei, CEO of Anthropic, argued in a New York Times opinion piece that a Republican proposal to block states from regulating artificial intelligence for 10 years is an overly blunt approach. Instead, he called for a coordinated federal effort by the White House and Congress to establish transparency standards for AI companies.

Amodei warned that a decade-long moratorium on state regulations would leave a regulatory gap with “no ability for states to act, and no national policy as a backstop,” especially given how rapidly AI technology is advancing.

The proposed ban, included in President Donald Trump’s tax cut bill, seeks to preempt recent AI laws passed in several states. However, it has faced pushback from a bipartisan coalition of attorneys general who support state-level oversight of high-risk AI applications.

Amodei recommended a federal transparency standard requiring AI developers to implement rigorous testing and evaluation policies, disclose risk mitigation plans, and publicly share how they ensure the safety of their models before release.

He noted that Anthropic, supported by Amazon, already publishes such transparency reports, and competitors like OpenAI and Google DeepMind have adopted similar practices. Amodei suggested that legislation might be necessary to maintain transparency as AI models grow more powerful and corporate incentives to disclose risks may wane.

Reddit Sues AI Firm Anthropic for Alleged Unauthorized Use of Data

Reddit has filed a lawsuit against artificial intelligence startup Anthropic, accusing it of illegally using Reddit’s content to train its AI models without permission or a licensing agreement. The suit was filed Wednesday in San Francisco Superior Court, marking the latest legal clash over AI companies’ use of third-party online content.

In the complaint, Reddit alleges that Anthropic has scraped and exploited data from the platform over 100,000 times, despite publicly claiming last year that it had blocked its bots from accessing Reddit. According to Reddit, Anthropic’s Claude chatbot even acknowledged it was trained on at least some Reddit data, but could not confirm whether deleted content had been included.

“Anthropic refuses to respect Reddit’s guardrails and enter into a license agreement,” the complaint says, contrasting the company’s stance with that of Google and OpenAI, both of which have entered licensing arrangements with Reddit.

Reddit claims Anthropic’s actions violate its user policies and have allowed the startup to enrich itself by “tens of billions of dollars.” The lawsuit seeks unspecified restitution, punitive damages, and an injunction to stop Anthropic from further using Reddit content for commercial purposes.

Anthropic Responds

An Anthropic spokesperson said the company disagrees with Reddit’s claims and intends to defend itself vigorously. The lawsuit adds further scrutiny to Anthropic, whose backers include tech giants Amazon and Alphabet (Google).

Anthropic recently launched its latest Claude models, Opus 4 and Sonnet 4, on May 22, and has reportedly reached $3 billion in annualized revenue, according to sources familiar with the matter.

Growing Legal Tensions Over AI Training Data

This legal dispute highlights a broader industry-wide debate over how AI companies source and utilize data to train large language models. Many websites and publishers argue that AI firms are profiting from content without compensating the creators, while AI companies contend that publicly available internet data falls under fair use.

In a statement, Reddit Chief Legal Officer Ben Lee emphasized the platform’s support for an open internet but said AI companies need “clear limitations” when it comes to scraping and monetizing content.

Both companies are headquartered in San Francisco, located just a few blocks apart.

The case is Reddit Inc v Anthropic PBC, California Superior Court, San Francisco County, No. CGC-25-524892.

Anthropic CEO Dario Amodei Claims AI Models Experience Fewer Hallucinations Than Humans: Report

Anthropic CEO Dario Amodei recently stated that artificial intelligence (AI) models tend to hallucinate less frequently than humans do. This remark was made during the company’s first-ever Code With Claude event, held on Thursday. At this event, the San Francisco-based AI firm unveiled two new versions of its Claude 4 models, alongside several upgraded features such as enhanced memory and better tool integration. Amodei also commented on the skepticism surrounding AI development, suggesting that despite critics searching for obstacles, no significant barriers to AI progress have emerged so far.

During a press briefing reported by TechCrunch, Amodei elaborated on the nature of hallucinations in AI systems, explaining that these errors do not prevent AI from achieving artificial general intelligence (AGI). When asked about hallucinations, he said, “It really depends how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways.” This perspective highlights that while AI does make mistakes, the frequency might be lower than commonly assumed, though the mistakes can sometimes be unexpected.

Amodei also pointed out that errors are a common part of human activity, with TV presenters, politicians, and professionals making mistakes regularly. The presence of errors in AI responses, therefore, does not necessarily undermine their overall intelligence. Nonetheless, he acknowledged that AI confidently presenting false information remains a challenge. A recent incident highlighted this when Anthropic’s lawyer had to apologize in court after the company’s Claude chatbot generated an incorrect citation in a legal filing. The mishap occurred in the ongoing lawsuit brought against Anthropic by music publishers over alleged copyright violations involving hundreds of song lyrics.

Looking ahead, Amodei remains optimistic about the future of AI. In an essay published in October 2024, he suggested that artificial general intelligence could arrive as soon as 2026. AGI represents a breakthrough form of AI capable of understanding, learning, and performing a broad spectrum of tasks autonomously, without human assistance. If realized, this development would mark a significant milestone in AI research and its practical applications.