Articles

Broadcom unveils Thor Ultra networking chip to challenge Nvidia in AI data centers

Broadcom has launched its new Thor Ultra networking chip, designed to help companies build massive artificial intelligence computing systems by linking together hundreds of thousands of processors — escalating its rivalry with Nvidia in the race to dominate AI infrastructure.

Unveiled on Tuesday, the Thor Ultra chip enables data center operators to connect far more AI processors than before, making it easier to train and deploy large models like OpenAI’s ChatGPT. The launch follows Broadcom’s announcement on Monday of a major deal to deliver 10 gigawatts of custom chips for OpenAI starting in 2026, further challenging Nvidia’s dominance in AI accelerators and networking technologies.

“The network plays an extremely important role in building these large clusters,” said Ram Velaga, Broadcom’s senior vice president. “So I’m not surprised that anybody in the GPU business wants to participate in networking.”

AI represents a $60 billion to $90 billion market opportunity for Broadcom through 2027, according to CEO Hock Tan, split between networking chips and custom data center processors built for companies such as Google and OpenAI. In 2024, Broadcom reported $12.2 billion in AI revenue, and in September it disclosed a $10 billion order for its AI chips from an unnamed customer.

The Thor Ultra doubles the bandwidth of its predecessor and acts as a vital link between AI systems and the rest of the data center, improving data transfer speeds and scalability. Engineers developed it alongside Broadcom’s Tomahawk networking switches, refining every detail from power consumption to thermal management.

While Broadcom does not sell servers directly, it provides reference designs for partners to build upon. “For every dollar we invest in our silicon, our ecosystem partners invest six to ten times more,” Velaga said, emphasizing the company’s design-first strategy in the AI infrastructure market.

Broadcom Soars on $10B AI Chip Deal, Likely With OpenAI

Broadcom shares surged 15% on Friday after the company unveiled a $10 billion AI chip order from a new, unnamed customer, an announcement that cements its role as a key custom chip supplier in the race to expand generative AI infrastructure. The blockbuster order immediately sparked speculation that the buyer is OpenAI, with analysts at J.P. Morgan, Bernstein, and Morgan Stanley pointing to the timing and scale of the deal.

If confirmed, the partnership would mark OpenAI’s biggest move yet toward developing its own processors, reducing reliance on Nvidia and AMD, whose stock prices dipped 2% and 5% respectively after Broadcom’s news. Reuters previously reported that OpenAI had been working with Broadcom on a custom chip project.

The deal highlights Big Tech’s broader trend of diversifying away from Nvidia’s costly, supply-constrained GPUs. Microsoft, Amazon, Google, and Meta are already designing their own silicon. Broadcom, which already supplies custom AI chips to Google and Meta, now appears positioned to capture even more of the rapidly expanding market.

The rally added more than $200 billion to Broadcom’s valuation, boosting its market cap above $1.44 trillion. Analysts now forecast Broadcom’s AI revenue could surpass $40 billion in fiscal 2026, far above last quarter’s $30 billion projection.

Adding to investor optimism, longtime CEO Hock Tan confirmed he would remain in charge for at least another five years. Under his leadership, Broadcom has transformed into a central player in the global AI supply chain.

OpenAI to Debut First AI Chip in 2026 With Broadcom Partnership

OpenAI will launch its first in-house artificial intelligence chip in 2026 through a partnership with U.S. semiconductor leader Broadcom (AVGO.O), according to the Financial Times. The chip will be used internally to power OpenAI’s own AI systems rather than being sold to external customers, people familiar with the matter said.

The move reflects OpenAI’s push to diversify away from Nvidia, whose GPUs currently dominate AI computing, and to lower infrastructure costs amid surging demand for training and running large-scale AI models like ChatGPT. OpenAI has previously collaborated with Broadcom and Taiwan Semiconductor Manufacturing Co. (TSMC) on design and fabrication, while also supplementing with AMD and Nvidia chips.

Reuters earlier reported that OpenAI was finalizing the design of its first custom silicon, to be manufactured at TSMC, with a focus on reducing reliance on outside suppliers. By developing its own chip, OpenAI joins rivals Google, Amazon, and Meta, which have already rolled out proprietary processors to handle escalating AI workloads.

The timing of the news coincides with Broadcom CEO Hock Tan’s announcement on Thursday that the company had secured over $10 billion in AI infrastructure orders from a new unnamed customer, set to drive significant revenue growth in fiscal 2026. Industry watchers say OpenAI could be that customer, given its scale and need for dedicated compute.

If successful, the partnership would not only help OpenAI gain greater control over its AI infrastructure but also cement Broadcom’s position as a leading custom silicon provider in the generative AI era.