Articles

Taiwan’s Wistron Targets Up to $923 Million in Luxembourg Share Sale

Taiwanese electronics manufacturer Wistron Corp is aiming to raise up to $923 million through the sale of global depositary shares (GDS), according to a term sheet reviewed by Reuters. The GDS will be listed in Luxembourg, and trading is scheduled to begin on June 16.

Wistron, a key supplier to Nvidia, plans to issue up to 25 million depositary shares priced between $36.20 and $36.93 each. This pricing represents a discount of roughly 4% to 6% to Wistron’s closing stock price of NT$115 ($3.85) on Thursday.
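The headline figures can be cross-checked with simple arithmetic. Below is a sketch in Python; the assumption that each GDS represents 10 common shares is not stated in the excerpt, but it is the ratio that makes the reported 4%–6% discount range consistent with the per-GDS prices and the NT$115 close:

```python
# Cross-check of the reported GDS pricing figures.
gds_low, gds_high = 36.20, 36.93   # USD per GDS, per the term sheet
close_usd = 3.85                   # NT$115 closing price in USD

# Assumption for illustration: one GDS represents 10 common shares,
# which is what makes the stated 4%-6% discount range work out.
shares_per_gds = 10
per_share_low = gds_low / shares_per_gds    # $3.62 per common share
per_share_high = gds_high / shares_per_gds  # $3.693 per common share

disc_max = 1 - per_share_low / close_usd    # deepest discount, ~6.0%
disc_min = 1 - per_share_high / close_usd   # shallowest discount, ~4.1%
print(f"discount range: {disc_min:.1%} to {disc_max:.1%}")

# Implied GDS count at the top of the price range for $923 million in proceeds
implied_gds = 923e6 / gds_high
print(f"implied GDS count: {implied_gds / 1e6:.1f} million")
```

At the top of the range, roughly 25 million GDS would be needed to reach the $923 million maximum, which ties the per-unit pricing back to the headline figure.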

The company has not yet issued a public statement regarding the offering. According to the term sheet, proceeds from the share sale will primarily be used to purchase raw materials denominated in foreign currencies, reflecting Wistron’s strategy to manage the currency risks tied to its international supply chain operations.

Expanding U.S. Presence for AI and High-Performance Computing

Wistron’s fundraising comes as it expands its operations to meet surging demand in the high-performance computing and AI sectors. Last month, the company announced that its new U.S. manufacturing facilities—being prepared for customer Nvidia—are expected to be operational next year. The facilities will focus on producing AI-related hardware and high-performance computing products.

The move aligns with Nvidia’s rapid growth in AI-driven technologies, as well as a broader industry shift toward more diversified and localized manufacturing capabilities, particularly in response to global supply chain disruptions.

Additionally, Wistron disclosed that it is actively engaged in discussions with other potential customers to expand its client base in these rapidly growing technology sectors.

Strategic Capital Raising Amid Currency Volatility

By raising funds through the GDS offering in Luxembourg, Wistron is diversifying its capital sources while also mitigating currency fluctuation risks. The global nature of its customer and supplier relationships makes access to foreign currency-denominated funds increasingly critical.

The GDS structure also allows Wistron to tap into a broader pool of international investors, while enhancing its financial flexibility to support ongoing expansion efforts in both manufacturing capacity and technological innovation.

Nvidia’s New AI Chips Slash Training Times for Massive AI Models

Nvidia’s latest generation of AI chips is making significant advances in training some of the world’s largest artificial intelligence systems, according to new benchmark data released on Wednesday by MLCommons, a nonprofit organization that tracks AI system performance.

The results show a dramatic drop in the number of chips required to train large language models (LLMs), highlighting Nvidia’s growing technological lead in this critical area of AI development. While much of the financial market’s current focus is on the booming sector of AI inference—where AI models answer user queries—training remains a core competitive battleground, especially for developing next-generation models with trillions of parameters.

Blackwell Chips Outperform Previous Generations

Nvidia’s new Blackwell chips demonstrated superior performance over its previous Hopper generation. In tests involving Meta Platforms’ open-source Llama 3.1 405B model, which is complex enough to simulate some of the most demanding AI training workloads, Nvidia’s Blackwell chips completed training tasks with more than double the speed per chip compared to Hopper.

In one benchmark, a system using 2,496 Blackwell chips completed the training run in just 27 minutes. By comparison, earlier tests needed more than three times as many Hopper chips to achieve a faster time—meaning Hopper’s edge came from sheer scale rather than per-chip efficiency.
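The per-chip comparison implied above can be illustrated with a rough chip-minutes calculation. The Hopper figures below are hypothetical placeholders consistent with the article’s “more than three times as many chips” and “faster” description; the exact submission numbers are published by MLCommons:

```python
# Illustrative chip-time comparison; the Hopper figures are assumed, not reported.
blackwell_chips, blackwell_minutes = 2496, 27       # reported Blackwell run

# Hypothetical Hopper run: just over 3x the chips, slightly faster wall clock.
hopper_chips, hopper_minutes = 3 * 2496, 25         # assumption for illustration

# Total chip-minutes is a crude proxy for the work performed per chip:
# if Hopper burned far more chip-minutes for a similar result, each
# Blackwell chip is doing correspondingly more work per minute.
blackwell_work = blackwell_chips * blackwell_minutes
hopper_work = hopper_chips * hopper_minutes

per_chip_speedup = hopper_work / blackwell_work
print(f"approximate per-chip speedup: {per_chip_speedup:.1f}x")
```

Under these illustrative numbers the per-chip speedup comes out well above 2x, matching the “more than double the speed per chip” claim.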

Nvidia and its partners were the only participants to submit results for a model of this size, a clear demonstration of Nvidia’s lead in training capabilities at the largest model scales.

Changing Industry Trends in AI Training

Chetan Kapoor, chief product officer of CoreWeave, which collaborated with Nvidia on the results, noted that AI companies are moving away from building vast, homogenous data centers with 100,000 or more identical chips. Instead, they are increasingly assembling smaller, specialized subsystems that handle different aspects of the training process. This modular approach allows companies to speed up training times and manage extremely large model sizes more efficiently.

“Using a methodology like that, they’re able to continue to accelerate or reduce the time to train some of these crazy, multi-trillion parameter model sizes,” Kapoor explained at a press briefing.

Global Competition Also Heating Up

While Nvidia maintains a dominant position, competitors around the world are also pushing for breakthroughs. For example, China’s DeepSeek has recently claimed it can create competitive chatbots while using far fewer chips than many U.S. rivals, adding to the growing international race for AI supremacy.

MLCommons’ report also included results from Advanced Micro Devices (AMD) and others, though Nvidia’s Blackwell system stood out in the training category.

Oracle to Purchase $40 Billion Worth of Nvidia Chips for OpenAI’s US Data Center

Oracle is set to invest approximately $40 billion in purchasing Nvidia’s high-performance chips to support OpenAI’s new data center in the United States, according to a report by the Financial Times. This significant investment highlights the growing collaboration between cloud service providers and AI companies as they race to build advanced infrastructure to power next-generation artificial intelligence applications.

The new data center will be located in Abilene, Texas, and forms a critical part of the U.S. Stargate Project, a government-backed initiative involving leading AI companies aimed at strengthening America’s position in the global AI race. This project reflects the increasing emphasis on domestic AI capabilities amid intensifying competition with other countries developing their own AI technologies.

Oracle plans to acquire roughly 400,000 of Nvidia’s most advanced GB200 chips, which will be leased to OpenAI to provide the massive computing power required for AI workloads. While Oracle and OpenAI have not publicly commented on the deal, sources familiar with the arrangement confirmed the details to the Financial Times. Nvidia also declined to comment on the specifics.
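Dividing the reported totals gives an implied average outlay per chip—a back-of-the-envelope figure derived from the two numbers above, not a quoted unit price:

```python
# Back-of-the-envelope: implied average spend per GB200 chip.
total_spend = 40e9       # ~$40 billion reported investment
chip_count = 400_000     # ~400,000 GB200 chips reported

per_chip = total_spend / chip_count
print(f"implied average: ${per_chip:,.0f} per chip")
```

That works out to roughly $100,000 per chip on average, though the actual deal terms are not public.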

The data center is expected to become fully operational by mid-2026, with Oracle securing a 15-year lease on the site. Financing for the project is backed primarily by JPMorgan, which has extended two loans totaling $9.6 billion, while the facility’s owners, Crusoe and Blue Owl Capital, have contributed approximately $5 billion in cash. This large-scale investment underscores the commitment of both private and public sectors to accelerate AI development on U.S. soil.