Articles

Samsung Begins Shipping HBM4 Chips to Boost AI Position

Samsung Electronics said it has started shipping its most advanced high-bandwidth memory chips, HBM4, as it seeks to close the gap with rivals in supplying critical components for artificial intelligence accelerators.

Demand for high-performance memory has surged amid the global buildout of AI data centers. HBM chips are essential for feeding large volumes of data into AI accelerators, including those developed by Nvidia. Samsung has previously trailed competitors such as SK Hynix in delivering earlier-generation HBM products.

Samsung said its HBM4 chips deliver a sustained data transfer speed of 11.7 gigabits per second per pin, a 22% improvement over its HBM3E predecessor, with peak speeds reaching 13 Gbps to address growing data bottlenecks. The company added that it plans to provide samples of next-generation HBM4E chips in the second half of the year.
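For context, here is a rough back-of-the-envelope sketch of what those per-pin figures imply for whole-stack bandwidth, assuming the 2048-bit per-stack interface defined in the JEDEC HBM4 standard (an assumption not stated in the article); the implied HBM3E baseline is simply reverse-engineered from the quoted 22% gain:

```python
# Back-of-the-envelope check of the figures quoted above.
# Assumption (not from the article): each HBM4 stack exposes a
# 2048-bit interface, per the JEDEC HBM4 standard.

HBM4_PIN_SPEED_GBPS = 11.7   # sustained per-pin speed quoted by Samsung
HBM4_PEAK_GBPS = 13.0        # peak per-pin speed quoted by Samsung
IMPROVEMENT = 0.22           # stated gain over HBM3E
BUS_WIDTH_BITS = 2048        # assumed HBM4 per-stack I/O width

# Implied HBM3E baseline from the 22% figure: 11.7 / 1.22 ≈ 9.6 Gbps per pin.
implied_hbm3e_gbps = HBM4_PIN_SPEED_GBPS / (1 + IMPROVEMENT)

# Per-stack bandwidth = per-pin speed (Gbit/s) x bus width (bits) / 8 bits per byte.
sustained_gbytes_per_s = HBM4_PIN_SPEED_GBPS * BUS_WIDTH_BITS / 8
peak_gbytes_per_s = HBM4_PEAK_GBPS * BUS_WIDTH_BITS / 8

print(f"Implied HBM3E pin speed:       {implied_hbm3e_gbps:.1f} Gbps")
print(f"Sustained per-stack bandwidth: {sustained_gbytes_per_s / 1000:.2f} TB/s")
print(f"Peak per-stack bandwidth:      {peak_gbytes_per_s / 1000:.2f} TB/s")
```

On those assumptions, a single HBM4 stack would move roughly 3 TB/s sustained, which is the kind of headroom the "data bottleneck" framing above refers to.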

Shares of Samsung rose following the announcement, reflecting investor optimism about its efforts to regain momentum in the competitive AI memory market. SK Hynix, which has maintained a leading position in HBM production, has said it aims to preserve its strong market share as competition intensifies. Meanwhile, U.S.-based Micron Technology has also begun high-volume production and customer shipments of HBM4.

The rollout underscores intensifying competition among memory manufacturers as AI infrastructure expansion continues to drive demand for faster, more efficient chip technologies.

Microsoft rolls out next generation of its AI chips, takes aim at Nvidia’s software

Microsoft has unveiled the second generation of its in-house artificial intelligence chip, Maia 200, alongside new software tools designed to challenge Nvidia’s dominance among AI developers. The chip is going live this week at a Microsoft data center in Iowa, with a second deployment planned in Arizona, marking a key step in the company’s effort to reduce reliance on external chip suppliers.

The Maia 200 follows Microsoft’s first Maia chip, the Maia 100, introduced in 2023, and arrives as major cloud providers increasingly develop their own AI hardware. Companies such as Google and Amazon Web Services, traditionally large Nvidia customers, are now rolling out custom chips that compete directly with Nvidia’s offerings. The shift reflects growing demand for tailored AI infrastructure optimized for large-scale cloud workloads.

Alongside the new chip, Microsoft announced a suite of software tools to support developers, including Triton, an open-source programming framework that performs similar functions to Nvidia’s widely used CUDA software. By strengthening its software ecosystem, Microsoft is targeting what many analysts view as Nvidia’s most significant competitive advantage.
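To make the comparison concrete, below is a minimal Triton kernel of the textbook vector-addition variety (a generic illustration, not code from Microsoft's announcement). The kernel is written in Python and compiled for the accelerator at launch time, filling the role that hand-written CUDA C++ kernels usually play in Nvidia's stack:

```python
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the ragged last block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)  # enough program instances to cover every element
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

Because the kernel targets Triton rather than CUDA directly, it can in principle be compiled for non-Nvidia accelerators such as Maia, which is the substance of the software challenge the article describes.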

The Maia 200 is manufactured by Taiwan Semiconductor Manufacturing Company using its advanced 3-nanometer process technology and incorporates high-bandwidth memory. Microsoft has also emphasized the use of SRAM, a fast on-chip memory type that can improve performance for AI systems handling large volumes of user requests, a design choice increasingly favored by Nvidia’s emerging competitors.

Samsung to Start HBM4 Production for Nvidia Supply

Samsung Electronics plans to begin production of its next-generation high-bandwidth memory chips, known as HBM4, next month and supply them to Nvidia, a person familiar with the matter told Reuters.

The move marks a key step in Samsung’s efforts to close the gap with local rival SK Hynix, which has emerged as the primary supplier of advanced memory used in Nvidia’s AI accelerators. Earlier supply delays had weighed on Samsung’s earnings and share price last year.

Samsung shares rose 2.2% in morning trade, while SK Hynix shares fell 2.9%. The source declined to disclose shipment volumes. Samsung declined to comment, and Nvidia was not immediately available for comment.

Citing industry sources, South Korean newspaper Korea Economic Daily reported that Samsung recently passed HBM4 qualification tests for Nvidia and AMD and is set to begin shipments to Nvidia next month.

SK Hynix said in October it had completed supply talks with major customers for next year and plans to begin putting silicon wafers into its new M15X fab in Cheongju next month. It has not confirmed whether HBM4 will be part of the initial output.

Both Samsung and SK Hynix are due to report fourth-quarter earnings later this week, when further details on HBM4 orders are expected. Nvidia CEO Jensen Huang has said the company’s next-generation Vera Rubin AI platform is already in full production and will be paired with HBM4 chips later this year.