Articles

CoreWeave Gains Role in Google-OpenAI Cloud Deal to Supply AI Computing Power

CoreWeave, a specialized cloud computing company built on Nvidia GPUs, has become a key provider in Google’s new partnership with OpenAI, sources told Reuters. Under the deal, CoreWeave will supply computing capacity to Google Cloud, which will then sell these resources to OpenAI to support growing demand for AI services such as ChatGPT. Google will also contribute some of its own computing infrastructure directly to OpenAI.

This arrangement underscores the evolving relationship between major cloud hyperscalers like Google, Microsoft, and Amazon and emerging “neocloud” providers like CoreWeave, which focus heavily on AI workloads. CoreWeave went public in March and already has a significant presence in OpenAI’s infrastructure, holding a five-year $11.9 billion contract and an equity investment of $350 million from OpenAI.

That contract was expanded last month with an additional agreement worth up to $4 billion through 2029. Bringing Google Cloud onboard as a customer helps CoreWeave diversify its revenue while leveraging Google’s deep pockets to secure better financing for data center expansion. For Google, the arrangement grows its cloud business in the surging AI market and positions it as a neutral provider of compute resources amid competition with Amazon and Microsoft.

CoreWeave’s stock has surged over 270% since its IPO, reflecting strong investor confidence despite concerns over leverage and GPU demand shifts. Meanwhile, Microsoft, CoreWeave’s former largest customer, is reconsidering its data center strategy and renegotiating investment terms with OpenAI.

CoreWeave, Google, and OpenAI all declined to comment on the details of the deal.

OpenAI Partners with Google Cloud in Surprising AI Rivalry Deal

OpenAI has struck a significant cloud computing deal with Alphabet’s Google Cloud to support its growing AI infrastructure needs, sources told Reuters. This collaboration, finalized in May, marks an unprecedented partnership between two major competitors in artificial intelligence.

The move signals OpenAI’s efforts to diversify beyond its longtime partner Microsoft, which had exclusively provided data center services until January. Google Cloud will now supply additional computing power to OpenAI for training and running its large language models, including ChatGPT.

The deal highlights the immense compute demands required for AI development and how competitive dynamics are evolving. Despite the fierce rivalry—OpenAI’s ChatGPT poses a strong challenge to Google’s dominant search business—both companies have chosen to cooperate in meeting infrastructure needs.

Alphabet’s stock rose 2.1% following the news, while Microsoft shares slipped 0.6%. Analysts at Scotiabank called the partnership “somewhat surprising” but a strategic win for Google Cloud, which has been aggressively expanding its AI hardware offerings, including tensor processing units (TPUs) used internally and for other customers like Apple.

OpenAI’s recent moves to reduce dependency on Microsoft include partnerships with SoftBank, Oracle, and CoreWeave, as well as plans to develop its own AI chips to cut reliance on external hardware providers. Meanwhile, Microsoft and OpenAI continue to renegotiate their multibillion-dollar investment terms.

Google’s Cloud business, generating $43 billion in sales in 2024, aims to capture market share against rivals Amazon and Microsoft by positioning itself as a neutral cloud provider favored by AI startups with costly infrastructure needs.

This deal presents a complex balancing act for Alphabet CEO Sundar Pichai, who must allocate limited chip capacity between competing demands from Google’s own AI projects and cloud customers. Despite ChatGPT’s threat to Google’s search dominance, Pichai remains confident in the company’s position.

Oppo Unveils Agentic AI Initiative, Introduces New System-Wide AI Search Feature

Oppo has revealed its plans for the future of artificial intelligence (AI) with the launch of its Agentic AI initiative at the Google Cloud Next 2025 event. The company is focused on advancing AI capabilities to create deeply personalized and intelligent experiences for its users. The initiative combines in-house AI development with a strategic collaboration with Google to introduce next-generation AI features that deepen hardware and software integration, setting the stage for a future where AI agents take a central role in how users interact with their devices.

The core idea behind agentic AI is to create a system where a centralized AI model autonomously manages hardware and software components to perform tasks based on user commands. This innovative approach promises to make interactions with Oppo devices smarter, more intuitive, and highly personalized. Oppo’s goal is to ensure that its AI experiences are continuously refined, and the company is leveraging partnerships with industry leaders like Google Cloud to achieve this. Jason Liao, President of Oppo Research Institute, highlighted the company’s commitment to enhancing AI capabilities, signaling a new era for Oppo users in terms of seamless and intelligent device usage.

At the Google Cloud Next event, Oppo also unveiled a new feature called AI Search. This system-wide AI tool lets users run multimodal, natural-language searches across documents stored on their devices, surfacing specific information within their files directly from the home screen. The feature represents a step toward making AI a practical, everyday tool integrated into Oppo’s ecosystem. Oppo also highlighted its existing AI-driven features in productivity, imaging, and creativity, showcasing the breadth of its AI applications.
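Oppo has not published technical details of how AI Search ranks results. As a rough illustration of the kind of retrieval step such a feature might involve, here is a minimal TF-IDF keyword-relevance sketch in pure Python; the corpus, filenames, and ranking approach are all hypothetical stand-ins, not Oppo’s actual implementation, and a real product would add multimodal parsing and semantic embeddings on top.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase word tokens; a stand-in for real document parsing.
    return re.findall(r"[a-z0-9]+", text.lower())

def build_vectors(docs):
    # Simple TF-IDF vectors for a small on-device corpus.
    tokenized = {name: tokenize(text) for name, text in docs.items()}
    df = Counter()                      # document frequency per term
    for toks in tokenized.values():
        df.update(set(toks))
    n = len(docs)
    vectors = {}
    for name, toks in tokenized.items():
        tf = Counter(toks)
        vectors[name] = {t: (c / len(toks)) * math.log((1 + n) / (1 + df[t]))
                         for t, c in tf.items()}
    return vectors, df, n

def cosine(a, b):
    # Cosine similarity between two sparse term-weight dicts.
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, docs):
    # Rank documents by similarity to a natural-language query.
    vectors, df, n = build_vectors(docs)
    q_tf = Counter(tokenize(query))
    total = sum(q_tf.values())
    q_vec = {t: (c / total) * math.log((1 + n) / (1 + df[t]))
             for t, c in q_tf.items()}
    return sorted(docs, key=lambda name: cosine(q_vec, vectors[name]),
                  reverse=True)

# Toy on-device "files" for demonstration only.
docs = {
    "trip_notes.txt": "flight to tokyo departs friday, hotel booking confirmed",
    "meeting.txt": "quarterly budget review moved to thursday afternoon",
    "recipes.txt": "pasta recipe with garlic, olive oil and fresh basil",
}
print(search("when is my flight to tokyo", docs)[0])  # → trip_notes.txt
```

The point of the sketch is the shape of the pipeline (index local files once, score a free-form query against them), not the scoring formula itself, which any production system would replace with learned embeddings.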

As part of this ambitious initiative, Oppo is developing a user knowledge system, which will serve as a central hub for storing and managing user data. This system is designed to tackle the issue of information fragmentation, a common problem with mobile devices, by creating a unified data repository. By leveraging this system, Oppo aims to further enhance the personalization of its AI features, ensuring that users’ experiences with their devices are not only smarter but also more tailored to their individual needs and preferences. With Agentic AI at its core, Oppo is positioning itself at the forefront of AI innovation, offering users more powerful and intuitive tech experiences.