Articles

OpenAI Denies Plans to Use Google’s In-House AI Chips Despite Cloud Collaboration

OpenAI has clarified that it has no current plans to adopt Google’s in-house AI chips (TPUs) to power its products, pushing back against recent reports that suggested the ChatGPT maker was turning to its rival’s hardware to meet increasing computing demands.

An OpenAI spokesperson said on Sunday that while the company is running early-stage tests with some of Google’s TPUs, it has no plans to deploy them at scale for production use. Google, for its part, declined to comment on the matter.

Testing multiple AI chip platforms is standard industry practice, but shifting large-scale workloads to a new hardware platform would require significant architectural and software adjustments. Currently, OpenAI continues to rely heavily on Nvidia’s GPUs and is also utilizing AMD’s AI chips to fuel its operations. Additionally, OpenAI is actively developing its own custom AI chip, expected to reach the “tape-out” milestone later this year — marking the point where chip design is finalized for manufacturing.

Earlier this month, Reuters reported that OpenAI had signed on to use Google Cloud services, a move seen as a notable collaboration between two competitors in the generative AI space. However, the bulk of OpenAI’s computing needs are still being handled by CoreWeave, a cloud provider specializing in GPU-based infrastructure.

Google has recently begun expanding external access to its TPUs, previously used mostly for internal projects. This shift has attracted a number of high-profile customers, including Apple, as well as AI startups Anthropic and Safe Superintelligence (SSI) — both of which were founded by former OpenAI executives and are direct rivals in the AI field.

CoreWeave Gains Role in Google-OpenAI Cloud Deal to Supply AI Computing Power

CoreWeave, a specialized cloud computing company built on Nvidia GPUs, has become a key provider in Google’s new partnership with OpenAI, sources told Reuters. Under the deal, CoreWeave will supply computing capacity to Google Cloud, which will then sell these resources to OpenAI to support growing demand for AI services such as ChatGPT. Google will also contribute some of its own computing infrastructure directly to OpenAI.

This arrangement underscores the evolving relationship between major cloud hyperscalers like Google, Microsoft, and Amazon and emerging “neocloud” providers like CoreWeave, which focus heavily on AI workloads. CoreWeave went public in March and already has a significant presence in OpenAI’s infrastructure, with a five-year, $11.9 billion contract and a $350 million equity investment from OpenAI.

The partnership was expanded last month with an additional agreement worth up to $4 billion through 2029. Bringing Google Cloud onboard as a customer helps CoreWeave diversify its revenue while leveraging Google’s deep pockets to secure better financing for data center expansions. For Google, it enhances its cloud business by tapping into the surging AI market and positions it as a neutral provider of compute resources amid competition with Amazon and Microsoft.

CoreWeave’s stock has surged over 270% since its IPO, reflecting strong investor confidence despite concerns over leverage and GPU demand shifts. Meanwhile, Microsoft, CoreWeave’s former largest customer, is reconsidering its data center strategy and renegotiating investment terms with OpenAI.

None of CoreWeave, Google, or OpenAI commented on the details of the deal.

Nvidia’s New AI Chips Slash Training Times for Massive AI Models

Nvidia’s latest generation of AI chips is making significant advances in training some of the world’s largest artificial intelligence systems, according to new benchmark data released on Wednesday by MLCommons, a nonprofit organization that tracks AI system performance.

The results show a dramatic drop in the number of chips required to train large language models (LLMs), highlighting Nvidia’s growing technological lead in this critical area of AI development. While much of the financial market’s current focus is on the booming sector of AI inference—where AI models answer user queries—training remains a core competitive battleground, especially for developing next-generation models with trillions of parameters.

Blackwell Chips Outperform Previous Generations

Nvidia’s new Blackwell chips demonstrated superior performance over the previous Hopper generation. In tests involving Meta Platforms’ open-source Llama 3.1 405B model, which is complex enough to represent some of the most demanding AI training workloads, Blackwell completed the training task at more than double the per-chip speed of Hopper.

In one benchmark, a system of 2,496 Blackwell chips completed the training run in just 27 minutes. By comparison, earlier tests needed more than three times as many Hopper chips to achieve a faster time, an advantage of sheer scale rather than per-chip efficiency.
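To make the per-chip comparison concrete, here is a back-of-the-envelope calculation in Python. The Blackwell figures are the ones reported above; the Hopper chip count and run time are hypothetical placeholders, since the benchmark summary only says that more than three times as many chips achieved a faster time:

```python
# Rough per-chip efficiency comparison based on the benchmark above.
# Blackwell numbers are as reported; the Hopper numbers are HYPOTHETICAL,
# chosen only to be consistent with "more than three times as many chips"
# finishing "faster".

blackwell_chips = 2496            # reported
blackwell_minutes = 27            # reported

hopper_chips = 3 * blackwell_chips + 1   # assumed: "more than three times as many"
hopper_minutes = 25                      # assumed: a slightly faster wall-clock time

# Total chip-minutes is a crude proxy for the compute consumed by each run.
blackwell_chip_minutes = blackwell_chips * blackwell_minutes   # 67,392
hopper_chip_minutes = hopper_chips * hopper_minutes            # 187,225

# A ratio above 2 is consistent with "more than double the speed per chip".
print(f"Implied per-chip speedup: {hopper_chip_minutes / blackwell_chip_minutes:.2f}x")
```

Under these assumed Hopper figures the ratio comes out to roughly 2.8x, in line with the “more than double” claim; the exact value depends on the real Hopper run, which the summary does not give.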

Nvidia and its partners were the only ones to submit results for a model of this size, giving the company an uncontested demonstration of its leadership in training capabilities for multi-trillion-parameter models.

Changing Industry Trends in AI Training

Chetan Kapoor, chief product officer of CoreWeave, which collaborated with Nvidia on the results, noted that AI companies are moving away from building vast, homogeneous data centers with 100,000 or more identical chips. Instead, they are increasingly assembling smaller, specialized subsystems that handle different aspects of the training process. This modular approach allows companies to speed up training and manage extremely large models more efficiently.

“Using a methodology like that, they’re able to continue to accelerate or reduce the time to train some of these crazy, multi-trillion parameter model sizes,” Kapoor explained at a press briefing.
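Kapoor does not spell out how those subsystems are divided, but the basic idea of carving one large chip pool into smaller groups with distinct roles can be sketched in a few lines. The role names and shares below are hypothetical, chosen purely for illustration; they do not describe CoreWeave’s or Nvidia’s actual scheme:

```python
# A minimal, illustrative sketch of partitioning a chip pool into
# specialized subsystems. The roles and shares are HYPOTHETICAL; the
# article does not describe the actual division of labor.
from dataclasses import dataclass

@dataclass
class Subsystem:
    role: str            # the part of the training workload this group handles
    chip_ids: list[int]  # global chip indices assigned to the group

def partition_cluster(total_chips: int, shares: dict[str, float]) -> list[Subsystem]:
    """Carve one flat pool of chips into smaller, specialized groups.

    `shares` maps a role name to the fraction of the cluster it receives;
    the fractions must sum to 1.0.
    """
    assert abs(sum(shares.values()) - 1.0) < 1e-9
    subsystems, start = [], 0
    for role, share in shares.items():
        count = round(total_chips * share)
        subsystems.append(Subsystem(role, list(range(start, start + count))))
        start += count
    return subsystems

# Hypothetical split of the 2,496-chip system from the benchmark above:
for s in partition_cluster(2496, {
    "data-parallel replicas": 0.75,
    "pipeline stages": 0.1875,
    "checkpoint/IO offload": 0.0625,
}):
    print(f"{s.role}: {len(s.chip_ids)} chips")
```

The point of the sketch is simply that once the cluster is treated as a set of smaller groups rather than one homogeneous block, each group can be sized and tuned for its own stage of the workload.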

Global Competition Also Heating Up

While Nvidia maintains a dominant position, competitors around the world are also pushing for breakthroughs. For example, China’s DeepSeek has recently claimed it can create competitive chatbots while using far fewer chips than many U.S. rivals, adding to the growing international race for AI supremacy.

MLCommons’ report also included results from Advanced Micro Devices (AMD) and others, though Nvidia’s Blackwell system stood out in the training category.