Articles

Nvidia Supplier Wistron Says AI Boom Is Not a Bubble

Artificial intelligence is not a speculative bubble and demand linked to the technology will continue to accelerate, according to Wistron chairman Simon Lin. Speaking in Taipei, Lin said AI-related order growth in 2026 is expected to exceed last year’s levels, reflecting what he described as a structural shift rather than a temporary surge.

Wistron, a key supplier to Nvidia, sees strong demand extending well into 2027. Lin said the company expects “significant” growth this year compared with the previous one, adding that AI is already transforming a wide range of industries and marking the beginning of a new technological era.

The company is expanding its manufacturing footprint in the United States to support Nvidia’s long-term AI ambitions. Wistron said new U.S. facilities are on track to be ready in 2026, with volume production starting in the first half of that year. Part of the capacity will support Nvidia’s plan to build up to $500 billion worth of AI servers in the U.S. over the next four years.

Nvidia previously said it would build supercomputer manufacturing plants in Texas, working with partners including Foxconn and Wistron. The comments from Wistron’s leadership underline growing confidence among AI supply-chain firms that current demand reflects long-term structural growth rather than a short-lived boom.

Nvidia unveils AI models for faster, cheaper weather forecasts

Nvidia has released three open-source artificial intelligence models designed to improve the speed and cost efficiency of weather forecasting. The announcement was made at the American Meteorological Society’s annual meeting, highlighting the chipmaker’s broader push to apply AI software beyond traditional computing workloads.

The new models aim to replace conventional weather simulations, which are often expensive and time-consuming to run. Nvidia said its AI-driven approach can match or exceed the accuracy of traditional methods while delivering results significantly faster and at a lower operational cost once the models are trained.

One of the key commercial use cases is expected to be in the insurance sector, where companies rely on large-scale weather simulations to assess rare but damaging events such as floods and hurricanes. Traditional forecasting requires running large ensembles of simulations, a process that can be slow and costly. Nvidia said AI removes this bottleneck by enabling massive ensembles to be processed at unprecedented speed.
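The ensemble bottleneck described above comes down to simple arithmetic: wall-clock time scales with the number of ensemble members divided by how many can run in parallel, multiplied by the cost of a single run. A toy calculation makes the point (all numbers and the `ensemble_wall_time` helper are hypothetical illustrations, not Nvidia's published figures):

```python
def ensemble_wall_time(members: int, seconds_per_member: float, parallel_slots: int) -> float:
    """Wall-clock time to run an ensemble when only `parallel_slots`
    members can execute at once (simple ceiling-division batching)."""
    batches = -(-members // parallel_slots)  # ceiling division
    return batches * seconds_per_member

# Hypothetical physics-based ensemble: 50 members, 1 hour each, 10 in parallel.
numerical = ensemble_wall_time(members=50, seconds_per_member=3600, parallel_slots=10)

# Hypothetical trained AI surrogate: same 50 members, seconds per forecast.
surrogate = ensemble_wall_time(members=50, seconds_per_member=2, parallel_slots=10)

print(numerical, surrogate, numerical / surrogate)
```

Under these illustrative assumptions the surrogate cuts a five-hour ensemble to ten seconds; the same arithmetic explains why insurers, who may want thousands of members to resolve rare flood or hurricane scenarios, benefit disproportionately once the model is trained.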

The models are part of Nvidia’s Earth-2 initiative and include tools for 15-day global forecasts, short-term severe storm prediction over the United States, and systems that combine data from multiple weather sensors to improve forecasting accuracy.

Microsoft rolls out next generation of its AI chips, takes aim at Nvidia’s software

Microsoft has unveiled the second generation of its in-house artificial intelligence chip, Maia 200, alongside new software tools designed to challenge Nvidia’s dominance among AI developers. The chip is going live this week at a Microsoft data center in Iowa, with a second deployment planned in Arizona, marking a key step in the company’s effort to reduce reliance on external chip suppliers.

The Maia 200 follows Microsoft’s first Maia chip introduced in 2023 and arrives as major cloud providers increasingly develop their own AI hardware. Companies such as Google and Amazon Web Services, traditionally large Nvidia customers, are now rolling out custom chips that compete directly with Nvidia’s offerings. The shift reflects growing demand for tailored AI infrastructure optimized for large-scale cloud workloads.

Alongside the new chip, Microsoft announced a suite of software tools to support developers, including Triton, an open-source programming framework that performs similar functions to Nvidia’s widely used CUDA software. By strengthening its software ecosystem, Microsoft is targeting what many analysts view as Nvidia’s most significant competitive advantage.

The Maia 200 is manufactured by Taiwan Semiconductor Manufacturing Company using advanced 3-nanometer technology and incorporates high-bandwidth memory. Microsoft has also emphasized the use of SRAM, a fast memory type that can improve performance for AI systems handling large volumes of user requests, a design choice increasingly favored by Nvidia’s emerging competitors.