Articles

OpenAI Denies Plans to Use Google’s In-House AI Chips Despite Cloud Collaboration

OpenAI has clarified that it has no current plans to adopt Google’s in-house AI chips (TPUs) to power its products, pushing back against recent reports that suggested the ChatGPT maker was turning to its rival’s hardware to meet increasing computing demands.

A spokesperson for OpenAI said on Sunday that while the company is running early-stage tests of Google's TPUs, it has no plans to deploy them at scale for production use. Google declined to comment on the matter.

Testing multiple AI chip platforms is standard industry practice, but shifting large-scale workloads to new hardware would require significant architectural and software changes. For now, OpenAI continues to rely heavily on Nvidia's GPUs and also uses AMD's AI chips to power its operations. In addition, OpenAI is developing its own custom AI chip, which is expected to reach the "tape-out" milestone later this year, the point at which a chip design is finalized for manufacturing.

Earlier this month, Reuters reported that OpenAI had signed on to use Google Cloud services, a move seen as a notable collaboration between two competitors in the generative AI space. However, the bulk of OpenAI’s computing needs are still being handled by CoreWeave, a cloud provider specializing in GPU-based infrastructure.

Google has recently begun expanding external access to its TPUs, previously used mostly for internal projects. This shift has attracted a number of high-profile customers, including Apple, as well as AI startups Anthropic and Safe Superintelligence (SSI) — both of which were founded by former OpenAI executives and are direct rivals in the AI field.

Dutch Chipmaker AxeleraAI Receives $66 Million EU Grant for AI Chip Development

AxeleraAI, a prominent Dutch chipmaker focused on artificial intelligence (AI), has secured a grant of up to 61.6 million euros ($66 million) to develop a new chip designed for data centres, in line with European Union efforts to strengthen its AI capabilities.

The EU's initiative aims to close Europe's AI competitiveness gap with the United States and China by funding domestic chipmakers and establishing publicly funded "AI factories": data centres that will be accessible to European scientists, companies, and startups.

Fabrizio Del Maffeo, AxeleraAI’s CEO, expressed his pride in the grant and the opportunity to expand the company’s business. AxeleraAI, based in Eindhoven, Netherlands, won the funding from EuroHPC, the agency responsible for the EU’s supercomputer and AI factory network. The company plans to use the funds to develop a chip tailored for “inference” AI computing, a process crucial for running AI models once they have been trained.

While AxeleraAI is not aiming to challenge Nvidia's dominance in the data centre, particularly in training large AI models, Del Maffeo emphasized that its chip will provide high-performance inference computing once trained networks are ready for deployment.

In addition, the rise of cost-effective AI models, such as China's DeepSeek, may drive increased demand for inference computing, offering AxeleraAI a valuable market opportunity. The company's upcoming Titania chip will be built on the open-source RISC-V architecture, an alternative to proprietary designs from Intel and Arm that is gaining traction in the automotive industry and in China.

AxeleraAI’s current chip, Metis, is used in “edge AI” applications, such as analyzing CCTV footage in factories to identify safety issues. Founded in 2021, AxeleraAI has previously raised $200 million in investments, including from Samsung.