Articles

MIT spinout Vertical Semiconductor raises $11 million to develop efficient AI power chips

Vertical Semiconductor, a startup spun out of the Massachusetts Institute of Technology (MIT), has raised $11 million in funding to commercialize a new generation of gallium nitride (GaN) power chips designed to deliver electricity more efficiently to artificial intelligence data centers, the company announced on Wednesday.

The funding round, led by Playground Global, will help the company bring its vertical transistor architecture to market. The technology aims to reduce the massive energy losses that occur when power is converted from grid-scale voltages to the tiny levels needed by microchips—losses that typically generate significant amounts of heat instead of usable power.

“That is power you are not delivering to computing tasks—it just turns straight into heat,” said Matt Hershenson, a partner at Playground Global.

AI data centers, which power tools like ChatGPT, consume enormous amounts of electricity—comparable to that of entire cities. As a result, chipmakers including Renesas, Infineon, and Power Integrations are partnering with Nvidia to develop next-generation GaN power chips.

Vertical Semiconductor’s innovation lies in stacking transistor components vertically rather than spreading them horizontally, making the chips smaller, more efficient, and cooler. The company plans to deliver prototypes this year and begin full production in 2026.

The firm was co-founded by MIT professor Tomas Palacios and researcher Joshua Perozek, whose doctoral work laid the foundation for the technology. CEO Cynthia Liao, formerly of MIT Sloan, said the company’s chips could offer data center operators step-change energy savings rather than incremental improvements.

“We do believe we offer a compelling next-generation solution that is not just a couple of percentage points here and there, but actually a step-wise transformation,” Liao said.

UK’s Nscale to supply Microsoft with 200,000 Nvidia AI chips in major data center deal

Nscale, a British artificial intelligence infrastructure company backed by Nvidia, announced on Wednesday that it will supply around 200,000 Nvidia AI chips to Microsoft under an expanded partnership aimed at scaling data center capacity across Europe and the United States.

While the financial details were not disclosed, the Financial Times reported that the deal could be worth up to $14 billion, based on similar contracts. The agreement will be executed in collaboration with Dell Technologies, which will help deploy the AI hardware across Microsoft’s hyperscale facilities.

The rollout will begin next year, with Nscale supplying Nvidia GPUs from its data centers in Texas and Portugal, the company said. The project also includes a joint venture with Norway’s Aker, which will provide 52,000 additional GPUs from Nscale’s hyperscale AI campus in Narvik, Norway.

The partnership reflects the surging demand for AI computing power, as tech giants including Microsoft, Meta, and Alphabet race to build infrastructure capable of training and deploying massive AI models. According to Citigroup, global AI-related infrastructure spending is expected to surpass $2.8 trillion by 2029.

Nscale, which raised $1.1 billion in September from investors including Aker and Finland’s Nokia, said the funds will accelerate its data center expansion and position the company as a key player in the global AI supply chain.

Broadcom unveils Thor Ultra networking chip to challenge Nvidia in AI data centers

Broadcom has launched its new Thor Ultra networking chip, designed to help companies build massive artificial intelligence computing systems by linking together hundreds of thousands of processors — escalating its rivalry with Nvidia in the race to dominate AI infrastructure.

Unveiled on Tuesday, the Thor Ultra chip enables data center operators to connect far more AI processors than before, making it easier to train and deploy large models like OpenAI’s ChatGPT. The launch follows Broadcom’s announcement on Monday of a major deal to deliver 10 gigawatts of custom chips for OpenAI starting in 2026, further challenging Nvidia’s dominance in AI accelerators and networking technologies.

“The network plays an extremely important role in building these large clusters,” said Ram Velaga, Broadcom’s senior vice president. “So I’m not surprised that anybody in the GPU business wants to participate in networking.”

AI represents a $60 billion to $90 billion market opportunity for Broadcom by 2027, according to CEO Hock Tan, split between networking chips and custom data center processors built for companies such as Google and OpenAI. In 2024, Broadcom reported $12.2 billion in AI revenue, and in September it disclosed a $10 billion order from an unnamed customer for its AI chips.

The Thor Ultra doubles the bandwidth of its predecessor and acts as a vital link between AI systems and the rest of the data center, improving data transfer speeds and scalability. Engineers developed it alongside Broadcom’s Tomahawk networking switches, refining every detail from power consumption to thermal management.

While Broadcom does not sell servers directly, it provides reference designs for partners to build upon. “For every dollar we invest in our silicon, our ecosystem partners invest six to ten times more,” Velaga said, emphasizing the company’s design-first strategy in the AI infrastructure market.