OpenAI Denies Plans to Use Google’s In-House AI Chips Despite Cloud Collaboration
OpenAI has clarified that it has no current plans to adopt Google’s in-house AI chips, known as tensor processing units (TPUs), to power its products, pushing back against recent reports suggesting the ChatGPT maker was turning to its rival’s hardware to meet growing computing demands.
A spokesperson for OpenAI said on Sunday that while the company is running early-stage tests of Google’s TPUs, it has no plans to deploy them at scale for production use. Google, for its part, declined to comment.
Testing multiple AI chip platforms is standard industry practice, but shifting large-scale workloads to new hardware would require significant architectural and software changes. For now, OpenAI continues to rely heavily on Nvidia’s GPUs and also uses AMD’s AI chips to power its operations. In addition, OpenAI is developing its own custom AI chip, which is expected to reach the “tape-out” milestone later this year, the point at which a chip’s design is finalized and handed off for manufacturing.
Earlier this month, Reuters reported that OpenAI had signed on to use Google Cloud services, a move seen as a notable collaboration between two competitors in the generative AI space. However, the bulk of OpenAI’s computing needs are still being handled by CoreWeave, a cloud provider specializing in GPU-based infrastructure.
Google has recently begun expanding external access to its TPUs, which were previously used mostly for internal projects. The shift has attracted several high-profile customers, including Apple, as well as the AI startups Anthropic and Safe Superintelligence (SSI), both founded by former OpenAI executives and both direct competitors of OpenAI.