Articles

OpenAI Expands Stargate Scope, Eyes Debt Financing to Secure Chips

OpenAI is broadening the scope of its massive Stargate infrastructure project, originally unveiled at the White House earlier this year as a $500 billion initiative with partners including SoftBank and Oracle. Executives now say Stargate encompasses nearly all of OpenAI’s work involving data centers and AI chips, stretching beyond the original plan.

Initially conceived as a new entity for mega-scale AI infrastructure, Stargate has since expanded to cover projects predating its January announcement. OpenAI argues that only massive computing systems like Stargate can power the next phase of the AI revolution.

To finance its chip needs, the company plans to adopt creative strategies including debt financing and chip leasing; it estimates that leasing GPUs rather than buying them outright could save 10–15%. A newly announced partnership with Nvidia, worth up to $100 billion, will provide $10 billion in upfront cash and long-term backing for data center expansion.

CEO Sam Altman, who has long argued that data centers are the lifeblood of AI, said his goal is to reach the point of building “a gigawatt of new AI infrastructure every week.” Speaking at a briefing in Abilene, Texas—home to Stargate’s flagship site—he acknowledged investor concerns about a potential bubble but insisted long-term growth justifies the scale.

The Abilene facility, under construction by Oracle and Crusoe, spans more than 1,100 acres and employs thousands. The site is said to contain fiber optic cable long enough to stretch from Earth to the Moon and back.

Stargate’s rollout has faced delays due to partner negotiations and site selection challenges, according to SoftBank executives. Still, OpenAI, Oracle, and SoftBank this week announced five new U.S. data centers, bringing Stargate’s active projects to nearly 7 gigawatts of the 10 gigawatts originally targeted.

Executives said Microsoft, OpenAI’s longtime backer, will not be involved in certain Stargate projects, following negotiations that freed OpenAI to partner more broadly.

The company stressed the urgency: demand for ChatGPT and related tools has already forced OpenAI to delay international product launches due to insufficient compute.

Industry experts note that financing remains a major hurdle. Of the roughly $50 billion cost of a new hyperscale data center, about $15 billion covers land and buildings, while the remaining roughly $35 billion goes toward GPUs, which are both costly and in short supply. Following Meta’s example, which secured $29 billion from outside financiers for a Louisiana data center, OpenAI is expected to rely heavily on debt markets to fund its future sites, with Nvidia’s equity stake boosting lender confidence.
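The cost split reported above can be made explicit with a back-of-the-envelope calculation. The $50 billion total and $15 billion land-and-buildings figures come from the reporting; applying the 10–15% leasing-savings estimate to the GPU portion is an assumption made here purely for illustration:

```python
# Back-of-the-envelope split of hyperscale data center costs,
# using the rough figures cited in the reporting.

TOTAL_COST_B = 50.0      # ~$50B total for a new hyperscale data center
LAND_BUILDINGS_B = 15.0  # ~$15B of that covers land and buildings

# The remainder goes toward GPUs.
gpu_spend_b = TOTAL_COST_B - LAND_BUILDINGS_B

# OpenAI's cited estimate: leasing rather than buying GPUs could save
# roughly 10-15%. Applying it to the GPU portion is an assumption.
savings_low_b = 0.10 * gpu_spend_b
savings_high_b = 0.15 * gpu_spend_b

print(f"GPU spend per site: ~${gpu_spend_b:.0f}B")
print(f"Illustrative leasing savings: ~${savings_low_b:.1f}B to ~${savings_high_b:.1f}B")
```

On these rough numbers, GPUs account for about 70% of a site's cost, which is why leasing terms and debt financing dominate the financing discussion.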

Despite bottlenecks in GPU supply chains, Altman maintains that rapid infrastructure buildouts are essential: “We cannot fall behind in the need to put the infrastructure together to make this revolution happen.”

Alibaba Shares Surge on Nvidia Partnership and Global AI Expansion

Alibaba announced on Wednesday a sweeping set of initiatives, including a partnership with Nvidia, new global data centers, and its largest AI models to date, underscoring its pivot to make artificial intelligence a central business priority alongside e-commerce.

The news sent Alibaba’s Hong Kong-listed shares up nearly 10% to a four-year high, while its U.S.-listed shares also rose by a similar margin in premarket trading.

“The speed of AI industry development has far exceeded our expectations, and the industry’s demand for AI infrastructure has also far exceeded our expectations,” Alibaba CEO Eddie Wu said at the company’s annual Apsara Conference. He added that spending on AI will be increased, though without specifying figures. Earlier this year, Alibaba pledged 380 billion yuan ($53 billion) for AI infrastructure investments over three years.

As part of its strategy, Alibaba will collaborate with Nvidia to enhance physical AI capabilities including data synthesis, model training, environmental simulation, and validation testing.

The company also unveiled an ambitious global data center expansion plan, announcing facilities in Brazil, France, and the Netherlands, with more to follow in Mexico, Japan, South Korea, Malaysia, and Dubai within the next year. This will add to Alibaba’s current network of 91 data centers across 29 regions. The company did not specify whether Nvidia chips would power these new facilities.

At the same event, Alibaba launched its most advanced AI language model to date, Qwen3-Max, boasting over 1 trillion parameters. According to CTO Zhou Jingren, the model demonstrates strong performance in code generation and autonomous agent capabilities, pursuing user-defined goals more independently than traditional chatbots such as ChatGPT.

Benchmark tests such as Tau2-Bench reportedly show Qwen3-Max outperforming competitors including Anthropic’s Claude and DeepSeek-V3.1 in specific categories.

Additional AI products showcased included Qwen3-Omni, a multimodal system designed for immersive applications in virtual and augmented reality, with potential use cases in smart glasses and intelligent vehicle cockpits.

The announcements come shortly after Nvidia revealed a $100 billion investment deal with OpenAI, highlighting the intensifying race in AI infrastructure.

Alibaba’s cloud division, which reported 26% revenue growth last quarter, is emerging as a key growth driver as the company monetizes its AI services more aggressively.

Rick Perry’s Data Center REIT Fermi Targets $13 Billion Valuation in U.S. IPO

Fermi, a real estate investment trust co-founded by former U.S. Energy Secretary Rick Perry, is seeking a valuation of up to $13.16 billion in its planned U.S. initial public offering, the company announced on Wednesday. The move comes as the surge in artificial intelligence drives demand for massive data center infrastructure.

The Amarillo, Texas-based firm aims to raise as much as $550 million by offering 25 million shares priced between $18 and $22 each.

Data centers have become prime assets as technology companies rush to build the computing power needed for advanced AI models. Fermi joins a growing list of AI-focused firms, such as CoreWeave and WhiteFiber, that have tapped public markets this year.

Founded in January 2025, Fermi has set its sights on developing the world’s largest energy and data complex, fueled by a combination of nuclear, natural gas, and solar power. Despite its ambitions, the company remains in an early development stage and has yet to generate revenue, reporting a $6.4 million loss from inception through June 30.

Fermi’s flagship initiative, known as Project Matador, plans to deliver up to 11 gigawatts of power for data centers by 2038, including one gigawatt ready by the end of 2026. The complex will span more than 5,200 acres in Texas and is expected to attract hyperscaler tenants.

“AI is arguably the investment story of a lifetime, but at this stage Fermi is still a story, and it’ll be interesting to see how much investors will pay for it,” said Matt Kennedy, senior strategist at Renaissance Capital. He described the valuation target as “very ambitious” for a development-stage company, highlighting the importance of securing contracts.

UBS, Evercore, Cantor and Mizuho are leading the IPO, with Fermi planning to list on both Nasdaq and the London Stock Exchange under the ticker “FRMI.” Proceeds from the offering will be used to purchase equipment and powered shells for the Texas complex.