Broadcom unveils Thor Ultra networking chip to challenge Nvidia in AI data centers
Broadcom has launched its new Thor Ultra networking chip, designed to help companies build massive artificial intelligence computing systems by linking together hundreds of thousands of processors — escalating its rivalry with Nvidia in the race to dominate AI infrastructure.
Unveiled on Tuesday, the Thor Ultra chip enables data center operators to connect far more AI processors than before, making it easier to train and deploy large models like OpenAI’s ChatGPT. The launch follows Broadcom’s announcement on Monday of a major deal to deliver 10 gigawatts of custom chips for OpenAI starting in 2026, further challenging Nvidia’s dominance in AI accelerators and networking technologies.
“The network plays an extremely important role in building these large clusters,” said Ram Velaga, Broadcom’s senior vice president. “So I’m not surprised that anybody in the GPU business wants to participate in networking.”
AI represents a $60 billion to $90 billion market opportunity for Broadcom by 2027, according to CEO Hock Tan, split between networking chips and custom data center processors built for companies such as Google and OpenAI. In 2024, Broadcom reported $12.2 billion in AI revenue, and in September it disclosed a $10 billion order from an unnamed customer for its AI chips.
The Thor Ultra doubles the bandwidth of its predecessor and acts as a vital link between AI systems and the rest of the data center, improving data transfer speeds and scalability. Engineers developed it alongside Broadcom’s Tomahawk networking switches, refining every detail from power consumption to thermal management.
While Broadcom does not sell servers directly, it provides reference designs for partners to build upon. “For every dollar we invest in our silicon, our ecosystem partners invest six to ten times more,” Velaga said, emphasizing the company’s design-first strategy in the AI infrastructure market.