Articles

Applied Digital Strikes $5 Billion AI Infrastructure Deal with U.S. Hyperscaler

Applied Digital (APLD.O) announced on Wednesday that it has signed a $5 billion, 15-year lease agreement with a U.S.-based hyperscaler for 200 megawatts (MW) of capacity at its Polaris Forge 2 data center campus in North Dakota, solidifying its position as a major player in AI infrastructure development. The deal sent Applied Digital’s shares up 4% in premarket trading.

The agreement is expected to generate about $5 billion in contracted revenue over its term and reflects the surging demand for high-performance compute capacity driven by the rapid adoption of artificial intelligence applications. Tech giants and AI developers are racing to secure energy-intensive infrastructure capable of training and deploying advanced language and vision models.

With this latest contract, Applied Digital’s total leased capacity across its Polaris Forge 1 and 2 campuses now reaches 600 MW, marking a significant milestone in its expansion strategy. The company also finalized a separate 150 MW lease with CoreWeave (CRWV.O) earlier this year, underscoring its growing role as a key infrastructure provider for the AI ecosystem.

Applied Digital’s stock has soared more than 325% in 2025, buoyed by investor enthusiasm for companies building AI-ready data centers capable of handling the computational load required by large language models and generative AI systems.

Industry analysts say the deal highlights how AI infrastructure has become the new frontier of big tech investment, with hyperscalers — massive cloud computing companies such as Google, Amazon, and Microsoft — locking in long-term capacity agreements to meet explosive AI demand.

The company’s Polaris Forge complex in North Dakota is one of several U.S. projects focused on delivering high-density compute environments optimized for AI workloads. Applied Digital said the partnership will also support future energy efficiency improvements and renewable power integration, aligning with broader sustainability goals across the data center industry.

CoreWeave Gains Role in Google-OpenAI Cloud Deal to Supply AI Computing Power

CoreWeave, a specialized cloud computing company built on Nvidia GPUs, has become a key provider in Google’s new partnership with OpenAI, sources told Reuters. Under the deal, CoreWeave will supply computing capacity to Google Cloud, which will then sell these resources to OpenAI to support growing demand for AI services such as ChatGPT. Google will also contribute some of its own computing infrastructure directly to OpenAI.

This arrangement underscores the evolving relationship between major cloud hyperscalers like Google, Microsoft, and Amazon and emerging “neocloud” providers like CoreWeave, which focus heavily on AI workloads. CoreWeave went public in March and already has a significant presence in OpenAI’s infrastructure, holding a five-year $11.9 billion contract and an equity investment of $350 million from OpenAI.

The partnership was expanded last month with an additional agreement worth up to $4 billion through 2029. Bringing Google Cloud onboard as a customer helps CoreWeave diversify its revenue while leveraging Google’s deep pockets to secure better financing for data center expansions. For Google, it enhances its cloud business by tapping into the surging AI market and positions it as a neutral provider of compute resources amid competition with Amazon and Microsoft.

CoreWeave’s stock has surged over 270% since its IPO, reflecting strong investor confidence despite concerns over leverage and GPU demand shifts. Meanwhile, Microsoft, CoreWeave’s former largest customer, is reconsidering its data center strategy and renegotiating investment terms with OpenAI.

CoreWeave, Google, and OpenAI all declined to comment on the details of the deal.

TensorWave Raises $100 Million to Expand AMD-Powered AI Infrastructure

TensorWave, a Las Vegas-based AI infrastructure startup, has raised $100 million in a Series A funding round to scale operations and meet rising demand for high-performance AI computing. The company did not disclose its current valuation.

The round was led by Magnetar and AMD Ventures, with participation from existing backers Maverick Silicon and Nexus Venture Partners, along with new investor Prosperity7.

As AI model development becomes increasingly compute-intensive, firms like TensorWave are positioning themselves as essential enablers by building GPU-based infrastructure designed for efficient model training and workload optimization.

“This $100M funding propels TensorWave’s mission to democratize access to cutting-edge AI compute,” said CEO Darrick Horton.

Strategic Focus and Market Context

TensorWave plans to use the fresh capital to:

  • Scale operations and expand its team

  • Deploy AMD-powered GPU clusters

  • Accelerate delivery of infrastructure tailored to AI workloads

The announcement comes amid projections that the global AI infrastructure market will exceed $400 billion by 2027, driven by the rapid adoption of generative AI, machine learning, and data-intensive applications.

Unlike many competitors reliant on Nvidia hardware, TensorWave’s focus on AMD GPUs could offer cost advantages and diversification for AI developers seeking alternatives in a supply-constrained market.

Industry Momentum

The funding reflects growing investor confidence in companies that support the underlying layers of AI innovation, particularly those offering scalable, affordable compute infrastructure for startups, research institutions, and enterprises alike.

TensorWave joins a wave of AI infrastructure startups benefiting from explosive interest in model training platforms, data center hardware, and cloud-based acceleration solutions amid ongoing AI commercialization.