Articles

OpenAI, Oracle and SoftBank to Build Five New AI Data Centers for $500 Billion Stargate Project

OpenAI, Oracle and SoftBank announced plans to construct five new artificial intelligence data centers in the United States as part of their massive Stargate project, an initiative expected to reshape AI infrastructure.

President Donald Trump hosted leading tech CEOs in January to launch Stargate, a private-sector effort aiming to spend up to $500 billion on the compute power needed to support the next generation of AI.

OpenAI and Oracle will build three new facilities in Shackelford County, Texas, Doña Ana County, New Mexico, and an undisclosed Midwestern site. Together with SoftBank and its affiliate, OpenAI will also develop two additional centers in Lordstown, Ohio, and Milam County, Texas.

These new facilities, combined with the Oracle–OpenAI expansion in Abilene, Texas, and ongoing projects with CoreWeave, will bring Stargate’s total planned data center capacity to nearly 7 gigawatts. According to OpenAI, this represents over $400 billion in investments over the next three years. The ultimate goal remains 10 gigawatts of total capacity.

“AI can only fulfill its promise if we build the compute to power it,” OpenAI CEO Sam Altman said in a statement.

The new projects are expected to create 25,000 on-site jobs. The announcement follows Nvidia’s pledge on Monday to invest up to $100 billion in OpenAI and supply data center chips.

To finance Stargate, OpenAI and its partners plan to use debt financing and lease chips, according to sources familiar with the matter.

With backing from Microsoft, OpenAI joins other tech giants pouring billions into AI infrastructure to support services such as ChatGPT and Copilot.

Given AI’s growing importance in sensitive fields like defense—and with China racing to catch up—both the private sector and the Trump administration have made AI infrastructure a strategic priority.

Nvidia’s $100B OpenAI deal sparks funding, valuation, and competition questions

Nvidia’s plan to invest up to $100 billion in OpenAI — while also supplying millions of its GPUs to the ChatGPT maker — is unprecedented in the tech sector and raises major uncertainties about finance, competition, and market impact.

Key open questions:

1. Where does the rest of the money come from?

  • Nvidia has pledged $10B per gigawatt for 10 GW of compute, but CEO Jensen Huang estimates $50B is needed per gigawatt (with $35B of that spent on Nvidia hardware).

  • That leaves a massive $40B funding gap per GW. OpenAI has not disclosed how it will raise the remainder.
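The arithmetic behind that gap can be sketched quickly. This is a back-of-the-envelope calculation using only the figures stated above; the assumption is that the per-gigawatt cost and pledge scale linearly across the full 10 GW buildout.

```python
# Back-of-the-envelope Stargate funding math, using the figures cited above.
# Assumes costs and pledges scale linearly per gigawatt (a simplification).

NVIDIA_PLEDGE_PER_GW = 10    # $B Nvidia has pledged per gigawatt
COST_PER_GW = 50             # $B total estimated cost per gigawatt (Huang's estimate)
NVIDIA_HARDWARE_PER_GW = 35  # $B of that cost going to Nvidia hardware
TARGET_GW = 10               # planned compute buildout

gap_per_gw = COST_PER_GW - NVIDIA_PLEDGE_PER_GW          # $40B shortfall per GW
total_gap = gap_per_gw * TARGET_GW                       # $400B across the buildout
total_pledge = NVIDIA_PLEDGE_PER_GW * TARGET_GW          # Nvidia's full $100B commitment

print(f"Funding gap per GW: ${gap_per_gw}B")             # Funding gap per GW: $40B
print(f"Total gap over {TARGET_GW} GW: ${total_gap}B")   # Total gap over 10 GW: $400B
print(f"Nvidia's total pledge: ${total_pledge}B")        # Nvidia's total pledge: $100B
```

In other words, even if Nvidia deploys its entire $100 billion, roughly $400 billion would still need to come from elsewhere under these assumptions.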

2. How does this fit OpenAI’s shift to for-profit?

  • OpenAI is transitioning from a nonprofit into a public benefit corporation overseen by its nonprofit parent.

  • Nvidia’s investment may hinge on this structure, but it’s unclear if funding flows to the nonprofit entity or the restructured PBC.

  • Regulatory approval in Delaware and California is still pending.

3. What does it mean for OpenAI’s valuation?

  • Nvidia’s initial $10B tranche is pegged to OpenAI’s current $500B valuation.

  • But there’s no timeline for deploying the full 10 GW or committing the entire $100B. Future investments may depend on OpenAI’s valuation at the time, raising uncertainty about dilution and pricing.

4. How will competition be affected?

  • Nvidia’s chips remain the most coveted resource in AI. If Nvidia ties up vast capacity with OpenAI, rivals like Anthropic, Google, or even Microsoft could face constrained access.

  • Competitors like AMD may find it harder to gain traction if Nvidia prioritizes OpenAI, despite Nvidia’s public pledge to “make every customer a top priority.”

5. What does it mean for Oracle?

  • Oracle has signed hundreds of billions in cloud contracts with OpenAI, but analysts question whether OpenAI has the liquidity to pay.

  • Nvidia’s cash infusion could strengthen Oracle’s revenue outlook, reassuring investors and credit agencies like Moody’s, which flagged funding risks.

Big picture:

The deal deepens the interdependence of AI’s leading players — Nvidia for chips, OpenAI for models, Microsoft for software integration, and Oracle for cloud. But it also amplifies antitrust concerns, as U.S. regulators eye whether such alliances foreclose competition in the AI stack.

Meta expands Llama AI access to U.S. allies in Europe and Asia

Meta Platforms said Tuesday it will make its Llama artificial intelligence system available to U.S. allies including France, Germany, Italy, Japan, and South Korea, as well as to NATO and European Union institutions. The announcement follows U.S. approval for federal agencies to use Llama earlier this week.

Llama, a large language model capable of processing text, video, images, and audio, will now be deployed more broadly as part of Washington’s effort to strengthen digital cooperation with democratic allies.

Meta said it will work with partners such as Microsoft, Amazon Web Services, Oracle, and Palantir to deliver Llama-based solutions abroad. The company emphasized that its models are released largely free for developers, a strategy CEO Mark Zuckerberg argues will drive innovation, reduce reliance on rivals, and keep engagement strong across Meta’s platforms.

The U.S. General Services Administration confirmed Monday that Llama would be added to its list of approved AI tools for federal use, meeting security and legal standards. By extending access to allies, Meta and Washington aim to align AI infrastructure across friendly nations at a time of intensifying global competition in artificial intelligence.