Articles

Samsung and SK Hynix to Supply Chips for OpenAI’s $500 Billion Stargate Project

Samsung Electronics and SK Hynix, South Korea’s top semiconductor manufacturers, have signed letters of intent to supply memory chips for OpenAI’s massive Stargate project, marking a major step in Seoul’s growing role in global artificial intelligence infrastructure.

As part of the deal, OpenAI will collaborate with both companies to build two new AI data centers in South Korea, branded as “Korean-style Stargate,” aligning with President Lee Jae Myung’s goal of turning the country into an AI innovation hub in Asia. The decision leverages South Korea’s strong industrial base and its status as the world’s second-largest ChatGPT subscription market after the United States.

The agreements were announced on Wednesday following a high-profile meeting in Seoul between OpenAI CEO Sam Altman, President Lee Jae Myung, and the chairmen of Samsung Electronics and SK Hynix.

The Stargate project, unveiled by U.S. President Donald Trump in January, aims to invest $500 billion into developing next-generation AI infrastructure with global partners such as SoftBank, Oracle, and now the South Korean chip giants. The initiative seeks to secure the computing capacity needed to sustain AI’s rapid growth and maintain U.S. leadership in the field.

South Korea’s presidential adviser Kim Yong-beom revealed that OpenAI plans to order 900,000 semiconductor wafers by 2029 and establish joint ventures with Samsung and SK Hynix to operate two 20-megawatt-capacity data centers domestically.

“A significant part of the Stargate project would be impossible without memory chips from the two companies,” said Kim.

He added that South Korea may also participate in financing the project.

Altman, in his remarks, emphasized the strategic importance of Korea:

“Korea has an industrial base like nowhere else in the world that is critical for the development of AI. We’re very excited to build Stargate Korea with Samsung and Hynix to support the sovereign AI needs of the country.”

Together, Samsung and SK Hynix control about 70% of the global DRAM market and nearly 80% of the high-bandwidth memory (HBM) market. HBM technology, introduced in 2013, stacks chips vertically to save space, boost performance, and reduce power consumption, making it vital for AI data processing.

Analysts estimate that 900,000 wafers of advanced DRAM could be worth more than 100 trillion won ($70 billion), though prices may fluctuate depending on market conditions.
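As a rough illustration, the analysts' estimate implies a per-wafer value that can be checked with the article's own round numbers (the per-wafer figure below is a derived illustration, not a reported price):

```python
# Back-of-envelope check of the analysts' estimate, using the
# article's figures: 900,000 wafers worth "more than 100 trillion won
# ($70 billion)". The implied exchange rate and per-wafer value are
# derived for illustration only.
WAFERS = 900_000                   # OpenAI's planned order by 2029
TOTAL_WON = 100_000_000_000_000    # ~100 trillion won
TOTAL_USD = 70_000_000_000         # ~$70 billion

won_per_usd = TOTAL_WON / TOTAL_USD      # implied ~1,429 won per dollar
per_wafer_won = TOTAL_WON / WAFERS
per_wafer_usd = per_wafer_won / won_per_usd

print(f"Implied value per wafer: {per_wafer_won:,.0f} won (~${per_wafer_usd:,.0f})")
```

Dividing the totals puts each advanced DRAM wafer at roughly 111 million won, on the order of $78,000, before any market-driven price swings.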

In addition to the memory supply deals:

  • Samsung SDS, an IT services affiliate, signed a partnership with OpenAI to develop and operate AI data centers under the Stargate framework.

  • Samsung Heavy Industries and Samsung C&T will collaborate on floating offshore data centers, designed to reduce cooling costs and carbon emissions.

Meanwhile, Google has also been in talks with several South Korean companies to explore potential AI collaborations. In June, SK Group announced a 7 trillion won investment, including $4 billion from Amazon Web Services, to build another major data center in the country.

Despite optimism about AI’s transformative potential, some investors remain cautious, citing the risk of a tech infrastructure bubble as companies rush to build large-scale data facilities.

The Stargate project, delayed earlier by prolonged negotiations and site selection, is now poised to gain new momentum through this South Korea partnership, reinforcing the nation’s position at the heart of the global AI supply chain.

OpenAI Expands Stargate Scope, Eyes Debt Financing to Secure Chips

OpenAI is broadening the scope of its massive Stargate infrastructure project, originally unveiled at the White House earlier this year as a $500 billion initiative with partners including SoftBank and Oracle. Executives now say Stargate encompasses nearly all of OpenAI’s work involving data centers and AI chips, stretching beyond the original plan.

Initially conceived as a new entity for mega-scale AI infrastructure, Stargate has since expanded to cover projects predating its January announcement. OpenAI argues that only massive computing systems like Stargate can power the next phase of the AI revolution.

To finance its chip needs, the company plans to adopt creative strategies including debt financing and chip leasing, estimating savings of 10–15% by renting instead of buying GPUs outright. A newly announced partnership with Nvidia—worth up to $100 billion—will provide $10 billion in upfront cash and long-term backing for data center expansion.

CEO Sam Altman, who has long argued that data centers are the lifeblood of AI, said his goal is to reach the point of building “a gigawatt of new AI infrastructure every week.” Speaking at a briefing in Abilene, Texas—home to Stargate’s flagship site—he acknowledged investor concerns about a potential bubble but insisted long-term growth justifies the scale.

The Abilene facility, under construction by Oracle and Crusoe, spans more than 1,100 acres and employs thousands. The site is said to contain fiber optic cable long enough to stretch from Earth to the Moon and back.

Stargate’s rollout has faced delays due to partner negotiations and site selection challenges, according to SoftBank executives. Still, OpenAI, Oracle, and SoftBank this week announced five new U.S. data centers, bringing Stargate’s active projects to nearly 7 gigawatts of the 10 gigawatts originally targeted.

Executives said Microsoft, OpenAI’s longtime backer, will not be included in certain Stargate projects, following negotiations to allow OpenAI to partner more broadly.

The company stressed the urgency: demand for ChatGPT and related tools has already forced OpenAI to delay international product launches due to insufficient compute.

Industry experts note that financing remains a major hurdle. Of the roughly $50 billion cost for a new hyperscale data center, about $15 billion covers land and buildings—while the rest goes toward GPUs, which are both costly and in short supply. Following Meta’s example, which secured $29 billion from outside financiers for a Louisiana data center, OpenAI is expected to rely heavily on debt markets to fund its future sites, with Nvidia’s equity stake boosting lender confidence.
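The cost split described above can be sketched with the article's round numbers (these are illustrative figures, not a detailed budget):

```python
# Rough breakdown of a new hyperscale data center's cost, using the
# article's round figures: ~$50B total, ~$15B for land and buildings,
# with the remainder going toward GPUs and related hardware.
total_cost = 50e9          # ~$50 billion total
land_buildings = 15e9      # ~$15 billion for land and buildings
gpus_and_gear = total_cost - land_buildings

gpu_share = gpus_and_gear / total_cost
print(f"GPUs and related hardware: ${gpus_and_gear / 1e9:.0f}B "
      f"({gpu_share:.0%} of total)")
```

In other words, roughly 70% of the outlay goes to the scarce, expensive compute hardware rather than the physical site, which is why debt financing against GPU purchases has become central to these projects.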

Despite bottlenecks in GPU supply chains, Altman maintains that rapid infrastructure buildouts are essential: “We cannot fall behind in the need to put the infrastructure together to make this revolution happen.”

AI Startup Modular Raises $250 Million to Take On Nvidia’s Software Dominance

AI startup Modular announced Wednesday it has raised $250 million in fresh funding, giving the company a valuation of $1.6 billion as it looks to loosen Nvidia’s grip on the AI computing ecosystem.

The round, which nearly tripled Modular’s valuation from two years ago, was led by the U.S. Innovative Technology Fund with participation from DFJ Growth and existing backers GV, General Catalyst, and Greylock.

Founded in 2022 by former Apple and Google engineers, Modular has built a platform that lets developers run AI applications across multiple types of chips without rewriting code for each one. Its clients include cloud providers such as Oracle and Amazon, as well as chipmakers Nvidia and AMD.

Nvidia’s dominance—holding more than 80% of the high-end AI chip market—is reinforced by its proprietary CUDA software, which locks in over 4 million developers worldwide. Modular positions itself as a neutral alternative, branding its approach the “Switzerland strategy.”

Co-founder and CEO Chris Lattner emphasized that Modular isn’t aiming to topple Nvidia directly. “What we’re focused on is not like pushing down Nvidia or crushing them. It’s more about enabling a level playing field so that other people can compete,” he said.

The company plans to sell its software directly to enterprises on a usage-based model and through revenue-sharing deals with cloud providers. Investors are betting that a multi-vendor AI hardware future is inevitable. DFJ Growth partner Sam Fort described Modular as “VMware for the AI era,” enabling workloads to move seamlessly across different chip vendors.

With around 130 employees, Modular plans to use the new capital to grow its engineering and sales teams and to expand beyond AI inference into the more demanding AI training market.