Articles

Oracle to Purchase $40 Billion Worth of Nvidia Chips for OpenAI’s US Data Center

Oracle is set to spend approximately $40 billion on Nvidia’s high-performance chips to support OpenAI’s new data center in the United States, according to a report by the Financial Times. This significant investment highlights the growing collaboration between cloud service providers and AI companies as they race to build advanced infrastructure to power next-generation artificial intelligence applications.

The new data center will be located in Abilene, Texas, and forms a critical part of the U.S. Stargate Project, a government-backed initiative involving leading AI companies aimed at strengthening America’s position in the global AI race. This project reflects the increasing emphasis on domestic AI capabilities amid intensifying competition with other countries developing their own AI technologies.

Oracle plans to acquire roughly 400,000 of Nvidia’s most advanced GB200 chips, which will be leased to OpenAI to provide the massive computing power required for AI workloads. While Oracle and OpenAI have not publicly commented on the deal, sources familiar with the arrangement confirmed the details to the Financial Times. Nvidia also declined to comment on the specifics.

The data center is expected to become fully operational by mid-2026, with Oracle securing a 15-year lease on the site. The project is financed primarily by JPMorgan, which has extended two loans totaling $9.6 billion, while the facility’s owners, Crusoe and Blue Owl Capital, have contributed approximately $5 billion in cash. This large-scale investment underscores the commitment of both private and public sectors to accelerate AI development on U.S. soil.

Microsoft to Integrate Elon Musk’s xAI Models Into Azure Cloud Platform

Microsoft is bringing Elon Musk’s xAI models to its Azure cloud platform, expanding its artificial intelligence (AI) model marketplace with new capabilities. The company confirmed that Grok 3, the latest AI model developed by xAI and launched earlier this year, will be available to Azure customers. This addition is part of Microsoft’s broader effort to cement Azure as a leading platform for deploying and managing cutting-edge AI applications.

The tech giant is competing with other major cloud service providers like Amazon Web Services (AWS) and Google Cloud to become the primary destination for AI development. As the demand for diverse and powerful AI models grows, so does the competition among cloud providers to host them. With Grok 3 joining Azure’s marketplace, Microsoft now offers access to over 1,900 models, including those from OpenAI, Meta, and DeepSeek. However, notable absences remain, such as models from Google and the fast-rising startup Anthropic.

Microsoft also made a series of AI-focused announcements during the opening of Build 2025, its annual developer conference. Many of the updates highlighted the company’s efforts to enhance agent-based AI systems—intelligent tools designed to act on a user’s behalf. A key focus was on integrating industry standards like Anthropic’s Model Context Protocol (MCP), which helps AI agents interface more effectively with digital content and applications. Microsoft confirmed that Windows and other products will adopt MCP, ensuring broader compatibility with future AI tools.

In support of this initiative, Microsoft and its GitHub subsidiary have also joined the MCP steering committee, further reinforcing their commitment to open AI collaboration and interoperability. “In order for agents to be as useful as they could be, they need to be able to talk to everything in the world,” said Kevin Scott, Microsoft’s Chief Technology Officer, emphasizing the importance of cross-platform communication and AI accessibility in the future of computing.

AI Takes Center Stage at Microsoft’s Developer Conference Amid Profit Push

Microsoft kicked off its annual developer conference in Seattle on Monday, gathering thousands of software developers as the company works to turn its extensive artificial intelligence investments into tangible, revenue-generating tools. The focus of this year’s event is squarely on monetizing AI—turning years of research and infrastructure development into products and services for both consumers and enterprises.

The tech giant, based in Redmond, Washington, has already invested a staggering $64 billion this year—much of it funneled into data centers that support AI-powered features like Copilot, which is integrated into Microsoft 365 applications. Its deep partnership with OpenAI, the creator of ChatGPT, remains central to its strategy, even as the dynamics of that relationship begin to shift.

There are growing indications that Microsoft is recalibrating its role in the AI ecosystem. Despite its close ties with OpenAI, Microsoft recently allowed the AI firm to collaborate with Oracle on the ambitious “Stargate” data center project in Texas. This move suggests Microsoft is positioning itself more as a platform provider—a “neutral arms dealer” in the intensifying AI race—rather than maintaining exclusive strategic control.

CEO Satya Nadella has also emphasized efficiency, stating that once an AI algorithm is refined, performance improvements can drive down computing costs significantly—up to tenfold. This efficiency is key as demand for AI services hosted on Microsoft’s Azure cloud continues to rise. According to Cantor Fitzgerald analyst Thomas Blakey, Microsoft is increasingly retaining AI services within its own data centers, giving it tighter control over cost, performance, and profitability.