Articles

AMD Unveils AI Server and Chips as OpenAI Joins Development Effort

AMD CEO Lisa Su introduced a major new line of AI hardware on Thursday, unveiling both the MI350 and MI400 series of AI chips and announcing plans to release the company’s first AI server, called “Helios,” in 2026. The launch signals AMD’s most direct challenge yet to Nvidia’s dominance in the AI server and chip market.

The announcement was made at AMD’s “Advancing AI” developer conference in San Jose, California. The Helios servers will house 72 MI400 chips, designed to compete directly with Nvidia’s current NVL72 servers powered by its Blackwell processors. In a notable difference from Nvidia’s closed ecosystem, Su emphasized that many aspects of AMD’s server and networking standards would be openly available to the broader industry, including rivals like Intel.

“The future of AI is not going to be built by any one company or in a closed ecosystem. It’s going to be shaped by open collaboration across the industry,” Su stated.

Su was joined on stage by OpenAI CEO Sam Altman, who confirmed that OpenAI is already working with AMD on its MI450 chips to help optimize them for AI workloads. Altman remarked on OpenAI’s rapid infrastructure growth, calling the pace “crazy” and noting continued expansion with AMD’s hardware.

Executives from Meta Platforms, Elon Musk’s xAI, and Oracle also appeared during the event to showcase how their companies are adopting AMD’s processors. Additionally, Crusoe, a cloud provider specializing in AI, disclosed plans to purchase $400 million worth of AMD’s new chips.

Despite the announcement, AMD shares slipped 2.2%, with analysts suggesting the company still faces significant headwinds in dislodging Nvidia’s dominant market position. Summit Insights analyst Kinngai Chan noted that the newly announced products are unlikely to shift the competitive balance immediately.

AMD has aggressively expanded its AI capabilities over the past year, completing its acquisition of server manufacturer ZT Systems in March and making 25 strategic investments in AI-related startups. The company recently hired engineers from Untether AI and Lamini, further strengthening its chip design and software development teams.

However, AMD’s ROCm software stack continues to lag behind Nvidia’s highly entrenched CUDA platform, which many in the industry see as a major factor behind Nvidia’s dominance.

Nevertheless, AMD remains optimistic about its growth prospects, even as U.S. export controls tighten on AI chip sales to China. When reporting earnings in May, Su reiterated expectations for strong double-digit growth in AI chip sales despite these headwinds.

Exclusive: Crusoe’s ‘Neocloud’ to Buy $400 Million in AMD AI Chips for Data Centers

Crusoe, an artificial intelligence-focused cloud computing startup, revealed plans to purchase approximately $400 million worth of AI chips from Advanced Micro Devices (AMD) to power its AI data centers. CEO Chase Lochmiller told Reuters that Crusoe intends to acquire around 13,000 AMD MI355X chips for a new data center cluster in the U.S., which is expected to become operational this fall.

The data center will employ liquid cooling technology and be designed specifically to house AI chips, offering higher performance than older infrastructure. Crusoe will rent access to the facility, which can be partitioned among multiple clients or used entirely by a single customer.

Lochmiller emphasized Crusoe’s agility as a smaller startup, competing with larger hyperscalers by leveraging speed, nimbleness, and concentrated engineering talent.

AMD’s MI355X chips, featuring high-bandwidth memory, are optimized for AI inference tasks and offer an alternative to Nvidia’s dominant hardware in the AI chip market. While most AI cloud services still rely on Nvidia chips, AMD is positioning itself to capture market share by partnering with companies like Crusoe.

Lochmiller described this approach as a validation of the “neocloud” strategy — specialized cloud infrastructure platforms tailored for AI workloads that add significant value to the AI ecosystem by supporting large-scale users.

Crusoe Secures $11.6 Billion to Expand Texas AI Data Center, Supporting OpenAI Infrastructure

AI infrastructure startup Crusoe has raised an additional $11.6 billion to significantly expand its upcoming data center in Abilene, Texas, marking one of the largest funding rounds in the emerging “neocloud” space. The new capital brings the total raised for the project to $15 billion and will allow Crusoe to expand the facility from two to eight buildings, the company confirmed on Wednesday.

Founded in 2018 as a crypto-focused firm, Crusoe has since pivoted to become a specialized cloud provider for AI workloads, part of a new wave of “neoclouds” that offer tailored infrastructure beyond the traditional giants like AWS, Azure, and Google Cloud.

Crusoe has been contracted by Oracle to construct the first data center for Stargate — a major AI infrastructure initiative backed by OpenAI, SoftBank, and Oracle, with a planned $500 billion investment in global AI infrastructure. According to The Wall Street Journal, the Abilene facility is set to become OpenAI’s largest data center.

“Our customer is Oracle. OpenAI is Oracle’s customer,” Crusoe clarified in a statement, emphasizing its indirect yet vital role in supporting the ChatGPT creator’s infrastructure needs.

The project is seen as part of OpenAI’s long-term goal to reduce reliance on Microsoft, its current primary cloud provider.

Key Details:

  • Location: Abilene, Texas

  • Total Buildings: 8 (up from 2)

  • AI Chips: Each building will house up to 50,000 Nvidia Blackwell systems

  • Sponsors: Crusoe, Blue Owl’s Real Assets platform, and Primary Digital Infrastructure

The facility will support intensive generative AI workloads, crucial for OpenAI’s future model development and deployment.

The explosive growth in demand for AI compute capacity has fueled an investment boom in data centers powered by specialized chips like Nvidia’s Blackwell series — a market Crusoe is aggressively entering.

Neither OpenAI nor Nvidia responded to requests for comment at the time of publication.