Articles

Besi Raises Long-Term Financial Targets on Growing AI Chip Demand

BE Semiconductor Industries (Besi) has raised its long-term revenue and operating margin targets ahead of its investor day, citing strong demand from AI chipmakers adopting its advanced hybrid bonding technology. The Dutch company specializes in the world’s most precise hybrid bonding tools, a key technology for stacking multiple chips directly on top of each other to boost performance.

At the event, Besi’s Senior Vice President Technology Chris Scanlan highlighted that major AI chip designers Nvidia and Broadcom are looking to utilize Taiwan Semiconductor Manufacturing Co’s (TSMC) hybrid bonding process, which could increase demand for Besi’s equipment. Additionally, Intel and AMD are expanding their use of hybrid bonding technology.

Besi now projects long-term revenues between €1.5 billion and €1.9 billion ($1.73 billion to $2.19 billion), up from a previous forecast of €1 billion, and expects operating margins of 40% to 55%, up from a prior range of 35% to 50%. Shares rose 8.4% during the trading day, outperforming the Netherlands’ AEX index.

As traditional performance gains from shrinking chip features approach physical limits, the industry is shifting towards advanced packaging methods like hybrid bonding to create faster, more powerful chips. The reticle size limit of ASML’s lithography machines has also pushed chipmakers to combine multiple chips by stitching or stacking them. For example, TSMC recently demonstrated a large package containing over 16 chips stitched together.

While Besi and its investors are optimistic about the company’s position as a key supplier to cutting-edge chipmakers, some analysts expressed caution. Degroof Petercam noted that Besi’s raised targets come despite the company not yet reaching its earlier goals. So far this year, Besi shares have declined by 3.2%.

Nvidia’s New AI Chips Slash Training Times for Massive AI Models

Nvidia’s latest generation of AI chips is making significant advances in training some of the world’s largest artificial intelligence systems, according to new benchmark data released on Wednesday by MLCommons, a nonprofit organization that tracks AI system performance.

The results show a dramatic drop in the number of chips required to train large language models (LLMs), highlighting Nvidia’s growing technological lead in this critical area of AI development. While much of the financial market’s current focus is on the booming sector of AI inference—where AI models answer user queries—training remains a core competitive battleground, especially for developing next-generation models with trillions of parameters.

Blackwell Chips Outperform Previous Generations

Nvidia’s new Blackwell chips demonstrated superior performance over its previous Hopper generation. In tests involving Meta Platforms’ open-source Llama 3.1 405B model, which is complex enough to simulate some of the most demanding AI training workloads, Nvidia’s Blackwell chips completed training tasks with more than double the speed per chip compared to Hopper.

In one benchmark, a system using 2,496 Blackwell chips completed the training run in just 27 minutes. By comparison, previous tests needed more than three times as many Hopper chips to achieve a faster time, meaning earlier results came from sheer scale rather than per-chip efficiency.

Nvidia and its partners were the only ones to submit data for models of this size, clearly demonstrating the company’s leadership in training capabilities for multi-trillion parameter models.

Changing Industry Trends in AI Training

Chetan Kapoor, chief product officer of CoreWeave, which collaborated with Nvidia on the results, noted that AI companies are moving away from building vast, homogenous data centers with 100,000 or more identical chips. Instead, they are increasingly assembling smaller, specialized subsystems that handle different aspects of the training process. This modular approach allows companies to speed up training times and manage extremely large model sizes more efficiently.

“Using a methodology like that, they’re able to continue to accelerate or reduce the time to train some of these crazy, multi-trillion parameter model sizes,” Kapoor explained at a press briefing.

Global Competition Also Heating Up

While Nvidia maintains a dominant position, competitors around the world are also pushing for breakthroughs. For example, China’s DeepSeek has recently claimed it can create competitive chatbots while using far fewer chips than many U.S. rivals, adding to the growing international race for AI supremacy.

MLCommons’ report also included results from Advanced Micro Devices (AMD) and others, though Nvidia’s Blackwell system stood out in the training category.

Microsoft Surface Pro and Surface Laptop Featuring AMD’s Arm-Based Chips Expected to Arrive in 2026

Microsoft is reportedly planning to introduce Arm-based processors from AMD in its next-generation Surface devices, according to recent leaks. These new models are expected to succeed the Surface Laptop 7 and Surface Pro 11, potentially launching in 2026. This move would mark a shift for Microsoft, which currently offers Surface products powered by Qualcomm’s Snapdragon X series chips as well as Intel’s x86 processors. The upcoming Arm-powered devices could come at a more affordable price point, aiming to attract users looking for budget-friendly options with efficient performance.

Details shared on the NeoGAF forums suggest that AMD is developing a new Arm-based APU, codenamed Sound Wave, specifically for Microsoft’s Surface lineup. This chip is said to be manufactured using TSMC’s advanced 3nm fabrication process, promising improvements in power efficiency and performance. The Sound Wave processor is expected to feature two high-performance cores alongside four efficiency cores, along with a 128-bit LPDDR5X-9600 RAM controller. Reports indicate that Surface laptops with this APU might come equipped with 16GB of RAM, targeting users who require a balance of power and efficiency.

If these rumors hold true, the addition of AMD’s Arm-based chips could significantly expand the range of Windows on Arm (WoA) devices available next year. While Microsoft has offered Snapdragon-powered Surface devices for several years, this would mark AMD’s return to the Surface line — its first appearance since the x86 Ryzen processors used in earlier Surface Laptop models — and its first Arm-based chip in a Surface PC. The new processors could help Microsoft diversify its portfolio by offering a mix of affordable, low-power laptops alongside its premium Intel-based lineup.

This development also reflects a broader industry trend towards Arm architectures, which provide improved battery life and energy efficiency compared to traditional x86 chips. By adopting AMD’s Sound Wave processors, Microsoft could better compete in the growing market for lightweight, portable devices designed for everyday productivity and casual use. For customers, this means more choice and potentially lower prices without sacrificing performance or battery life.