Nvidia’s New AI Chips Slash Training Times for Massive AI Models

Nvidia’s latest generation of AI chips is making significant advances in training some of the world’s largest artificial intelligence systems, according to new benchmark data released on Wednesday by MLCommons, a nonprofit organization that tracks AI system performance.

The results show a dramatic drop in the number of chips required to train large language models (LLMs), highlighting Nvidia’s growing technological lead in this critical area of AI development. While much of the financial market’s current focus is on the booming sector of AI inference—where AI models answer user queries—training remains a core competitive battleground, especially for developing next-generation models with trillions of parameters.

Blackwell Chips Outperform Previous Generations

Nvidia’s new Blackwell chips demonstrated superior performance over the previous Hopper generation. In tests involving Meta Platforms’ open-source Llama 3.1 405B model, which is large and complex enough to represent some of the most demanding AI training workloads, Blackwell chips completed training tasks at more than twice the per-chip speed of Hopper.

In one benchmark, a system of 2,496 Blackwell chips completed the training run in just 27 minutes. Previous tests achieved a faster time only by using more than three times as many Hopper chips, a result of sheer scale rather than per-chip efficiency.

Nvidia and its partners were the only participants to submit results for a model of this size, underscoring the company’s lead in training capabilities for multi-trillion-parameter models.

Changing Industry Trends in AI Training

Chetan Kapoor, chief product officer of CoreWeave, which collaborated with Nvidia on the results, noted that AI companies are moving away from building vast, homogenous data centers with 100,000 or more identical chips. Instead, they are increasingly assembling smaller, specialized subsystems that handle different aspects of the training process. This modular approach allows companies to speed up training times and manage extremely large model sizes more efficiently.

“Using a methodology like that, they’re able to continue to accelerate or reduce the time to train some of these crazy, multi-trillion parameter model sizes,” Kapoor explained at a press briefing.

Global Competition Also Heating Up

While Nvidia maintains a dominant position, competitors around the world are also pushing for breakthroughs. For example, China’s DeepSeek has recently claimed it can create competitive chatbots while using far fewer chips than many U.S. rivals, adding to the growing international race for AI supremacy.

MLCommons’ report also included results from Advanced Micro Devices (AMD) and others, though Nvidia’s Blackwell system stood out in the training category.

Reddit Sues AI Firm Anthropic for Alleged Unauthorized Use of Data

Reddit has filed a lawsuit against artificial intelligence startup Anthropic, accusing it of illegally using Reddit’s content to train its AI models without permission or a licensing agreement. The suit was filed Wednesday in San Francisco Superior Court, marking the latest legal clash over AI companies’ use of third-party online content.

In the complaint, Reddit alleges that Anthropic has scraped and exploited data from the platform over 100,000 times, despite publicly claiming last year that it had blocked its bots from accessing Reddit. According to Reddit, Anthropic’s Claude chatbot even acknowledged it was trained on at least some Reddit data, but could not confirm whether deleted content had been included.

“Anthropic refuses to respect Reddit’s guardrails and enter into a license agreement,” the complaint says, contrasting the company’s stance with that of Google and OpenAI, both of which have entered licensing arrangements with Reddit.

Reddit claims Anthropic’s actions violate its user policies and have allowed the startup to enrich itself by “tens of billions of dollars.” The lawsuit seeks unspecified restitution, punitive damages, and an injunction to stop Anthropic from further using Reddit content for commercial purposes.

Anthropic Responds

An Anthropic spokesperson said the company disagrees with Reddit’s claims and intends to defend itself vigorously. The lawsuit adds further scrutiny to Anthropic, whose backers include tech giants Amazon and Alphabet (Google).

Anthropic recently launched its latest Claude models, Opus 4 and Sonnet 4, on May 22, and has reportedly reached $3 billion in annualized revenue, according to sources familiar with the matter.

Growing Legal Tensions Over AI Training Data

This legal dispute highlights a broader industry-wide debate over how AI companies source and utilize data to train large language models. Many websites and publishers argue that AI firms are profiting from content without compensating the creators, while AI companies contend that publicly available internet data falls under fair use.

In a statement, Reddit Chief Legal Officer Ben Lee emphasized the platform’s support for an open internet but said AI companies need “clear limitations” when it comes to scraping and monetizing content.

Both companies are headquartered in San Francisco, just a few blocks apart.

The case is Reddit Inc v. Anthropic PBC, California Superior Court, San Francisco County, No. CGC-25-524892.

Google and Chile Ink Deal for Trans-Pacific Submarine Cable to Boost Digital Connectivity

Alphabet’s Google has signed a landmark agreement with the Chilean government to deploy a 14,800-kilometer (9,196-mile) submarine data cable linking Chile with Australia and Asia. The cable is expected to be operational by 2027 and marks the first submarine cable project in the South Pacific, reinforcing Chile’s ambitions to become a regional digital hub for Latin America.

“This is an important commitment with an extraordinary strategic partner,” said Chile’s Transport Minister Juan Carlos Muñoz, emphasizing the cable’s role in improving connectivity with Asian nations, particularly China, which is Chile’s largest trading partner.

Open Access and Broader Goals

Cristian Ramos, head of telecommunications infrastructure for Alphabet’s Latin America operations, confirmed that the cable will be open for use by other entities, allowing technology firms operating in Chile to benefit from the improved infrastructure.

The cable’s deployment comes amid escalating technological competition between the U.S. and China in Latin America, with submarine cables becoming increasingly significant in their geopolitical rivalry.

Though exact costs have not been disclosed, Chilean authorities had previously estimated the project’s cost to range between $300 million and $550 million, with Chile contributing $25 million through its state-owned partner Desarrollo País.

Applications in Mining, Science, and Industry

The cable is expected to deliver a range of benefits, including better performance for Asian tech platforms like TikTok, enhanced astronomical data transmission, and improved coordination for mining operations shared between Chile and Australia.

“Mining companies with operations in both countries can consider shared command centers where teams can support each other in real-time,” noted Deputy Secretary of Telecommunications Claudio Araya.

Deployment will begin next year from the Chilean port city of Valparaiso. Chile is also evaluating an additional link connecting the cable to Argentina, further expanding the project’s regional impact.

Future Expansion and Antarctic Ambitions

The agreement could encourage similar projects connecting South America with Asia, further strengthening Chile’s digital infrastructure. Separately, Chile is planning another submarine cable project to link the southern tip of South America with Antarctica, mainly for scientific research purposes.

The partnership between Google and Chile is not only a technological milestone but also a reflection of broader strategic interests as digital infrastructure becomes central to global economic and political influence.