Articles

Cadence Unveils Nvidia-Based Supercomputer to Accelerate Engineering and Biotech Design

Cadence Design Systems (CDNS.O) unveiled a powerful new supercomputer on Wednesday, built with Nvidia’s latest Blackwell GPUs, to dramatically speed up complex simulations in chip design, aerospace, and biotech research. The Millennium M2000, Cadence’s newest system, represents a major leap forward as the company expands beyond traditional chip design into engineering, drug discovery, and system modeling.

Key Details:

  • Millennium M2000 Supercomputer

    • Powered by ~32 Nvidia Blackwell GPUs

    • Target price: ~$2 million per unit

    • Dramatic simulation improvements: e.g., 8-day CPU job completed in <24 hours

    • Builds on Cadence’s 2023 system, now covering a broader software suite

“There’s this insatiable need for faster simulation,” said Michael Jackson, VP at Cadence, noting its use with Boeing to analyze turbulence around parts of a 777 jet.

Strategic Use Cases:

  • Aerospace: Assisting Boom Supersonic and Boeing in aircraft design

  • Biotech: Partnering with Treeline Biosciences for molecule simulation

  • Semiconductors: Continuing its core work with clients like Apple for chip design

Industry Impact:

At a Santa Clara event, Nvidia CEO Jensen Huang announced Nvidia will purchase 10 M2000 systems for its internal chip and AI data center development.

“This is a big deal for us… We’ll speed it up 50, 60, 100 times,” Huang said.

Cadence’s move to GPU-optimized computing is a major milestone in engineering software, shifting away from older CPU-centric architectures to embrace AI-powered, accelerated computing, ensuring faster innovation cycles in science and hardware design.

Nvidia CEO Predicts Humanoid Robot Revolution Within Five Years

Nvidia CEO Jensen Huang has predicted that humanoid robots will become widely used in manufacturing within the next five years, much sooner than many expect. Speaking at the company’s annual developer conference in San Jose, California, Huang unveiled new software tools designed to help robots navigate real-world environments more effectively.

In a conversation with journalists after his keynote address, Huang emphasized that the widespread presence of humanoid robots is not a long-term vision but an imminent reality. He suggested that manufacturing will be the first industry to adopt these robots due to its structured environment and well-defined tasks, making automation more feasible.

Factories provide a controlled setting where humanoid robots can be integrated with minimal disruption, Huang explained. He also highlighted the economic advantages, noting that the cost of renting a humanoid robot could be around $100,000, making them a viable alternative to human labor in certain roles.

Nvidia’s advancements in AI and robotics continue to drive innovation in automation, with the company at the forefront of enabling next-generation robotic systems.

Nvidia to Invest Hundreds of Billions in U.S. Chip Production Over Four Years

Nvidia (NVDA.O) plans to invest hundreds of billions of dollars in U.S.-made chips and electronics over the next four years, CEO Jensen Huang told the Financial Times. The company expects to spend around $500 billion on electronics during this period, with a substantial portion allocated to domestic manufacturing.

Huang emphasized that the U.S. AI industry could expand more rapidly with support from government policies. His comments come as Nvidia seeks to address investor concerns about demand for its high-cost AI chips, especially following the emergence of China’s DeepSeek chatbot as a potential competitor.

While Nvidia declined to comment on the FT report, Huang stated that the company can now manufacture its latest systems in the U.S. through key suppliers such as Taiwanese chipmaker TSMC (2330.TW) and electronics manufacturer Foxconn (2317.TW). He also noted an increasing competitive threat from China’s Huawei.

Huang highlighted that TSMC’s U.S. investments significantly strengthen Nvidia’s supply chain resilience. Earlier, at Nvidia’s developer conference in California, he told analysts that orders for 3.6 million Blackwell AI chips from four major cloud firms likely underestimate actual demand, as they do not account for customers such as Meta Platforms (META.O), smaller cloud providers, and startups.