Articles

Cisco Unveils AI-Focused Chip to Link Massive Data Centers

Cisco Systems has introduced a new networking chip, the P200, designed to connect large-scale AI data centers across vast distances. The technology, which will power a new generation of high-capacity routers, has already attracted major clients including Microsoft Azure and Alibaba Cloud, the company announced Wednesday.

The P200 aims to solve a growing challenge in artificial intelligence — connecting geographically distant data centers so they can operate as one massive computing system. “AI training jobs are now so large, they require multiple data centers working together — even a thousand miles apart,” said Martin Lund, executive vice president of Cisco’s Common Hardware Group.

The chip consolidates what previously required 92 separate components into one, allowing routers to use 65% less power. Cisco said the innovation helps AI firms manage rising energy demands as data centers spread to regions such as Texas and Louisiana, where electricity is more abundant.

The P200 will compete directly with Broadcom’s networking chips, offering faster data synchronization and more efficient buffering, features that help keep AI workloads stable across distributed systems.

Industry leaders including Microsoft’s Dave Maltz praised the move, saying the chip provides “faster networks with more buffering to absorb bursts of data,” critical for scaling AI operations. Cisco did not disclose investment costs or revenue expectations but said the chip represents a major leap in AI infrastructure efficiency.

TSMC and Chip Design Firms Use AI to Cut Energy Use in Next-Gen Chips

The chips powering artificial intelligence consume enormous amounts of electricity, but Taiwan Semiconductor Manufacturing Co (TSMC), the world’s largest contract chipmaker, unveiled new efforts on Wednesday to make them more efficient—by using AI-powered software in the chip design process.

Speaking at a Silicon Valley conference, TSMC showcased strategies it says could boost the energy efficiency of AI chips by as much as 10 times.

Nvidia’s flagship AI servers, for instance, can draw up to 1,200 kilowatts under heavy workloads, comparable to the electricity used by roughly 1,000 U.S. homes if run continuously. TSMC’s approach centers on a new generation of chiplet-based designs, where multiple smaller chips made with different technologies are packaged together to function as a single processor.

To enable these designs, chipmakers are increasingly turning to AI-driven software tools. Partners such as Cadence Design Systems and Synopsys debuted new products on Wednesday that were built in close collaboration with TSMC. These tools have shown they can outperform human engineers in solving complex design problems, and in a fraction of the time.

“That helps to max out TSMC technology’s capability, and we find this is very useful,” said Jim Chang, deputy director of TSMC’s 3DIC Methodology Group. “This thing runs five minutes while our designer needs to work for two days.”

Still, physical constraints remain. As chips scale up, moving data on and off them via traditional electrical connections is reaching its limits. New approaches, such as optical interconnects to transfer information between chips, must be made reliable enough for deployment in massive data centers.

“Really, this is not an engineering problem,” said Kaushik Veeraraghavan, an engineer in Meta’s infrastructure group, during his keynote. “It’s a fundamental physical problem.”

Ambiq Micro Files for U.S. IPO Amid Rising Demand for AI-Efficient Chips

Ambiq Micro, a chip designer based in Austin, Texas, has filed for an initial public offering (IPO) in the United States, reporting a 16.1% increase in net sales for 2024. The company’s growth is being driven by rising demand for semiconductor technology fueled by the surge in generative artificial intelligence (AI) applications.

In its IPO filing, Ambiq Micro disclosed net sales of $76.1 million for 2024, up from $65.5 million the previous year, while narrowing its net loss to $39.7 million from $50.3 million in 2023. The company will list on the New York Stock Exchange under the ticker symbol “AMBQ.” BofA Securities and UBS are serving as the lead underwriters.

Despite strong sales growth and partnerships with major customers such as Google and Huawei, the company faces concentration risk from its heavy reliance on a small number of large clients, according to Lukas Muehlbauer, a research associate at IPOX.

Ambiq Micro specializes in ultra-low-power semiconductor solutions aimed at reducing the power consumption challenges inherent in general-purpose and AI computing. This positions the company well in the growing market for “AI at the edge” devices, such as wearables, where energy efficiency is critical. Its chips reportedly cut power use by a factor of two to five compared with traditional designs, a significant advantage as AI computing typically demands substantial electricity.

Ambiq plans to use the IPO proceeds for general corporate purposes, including working capital, sales and marketing, and product development. The broader IPO market is experiencing a revival, buoyed by strong investor interest in AI-focused technology firms expected to benefit from the rapid, widespread adoption of generative AI.