In early 2025, Nvidia’s next-gen Blackwell platform is set to arrive on Google Cloud.
Google Cloud’s announcements at Google Cloud Next in Las Vegas this year center on new instance types and accelerators, with a heavy emphasis on AI. Alongside its custom Arm-based Axion chips, Google is unveiling a slate of AI accelerators, some developed in-house and some built in partnership with Nvidia.
While Nvidia recently announced its Blackwell platform, Google is not yet ready to offer these machines. Instead, in early 2025 Google plans to add support for Nvidia’s HGX B200 for AI and high-performance computing (HPC) workloads, as well as the GB200 NVL72 for large language model (LLM) training. One notable detail from Google’s announcement: the GB200 servers will be liquid-cooled.
That may sound like a distant promise, but Nvidia has said its Blackwell chips won’t be publicly available until the last quarter of this year. By announcing support now, Google is preparing its infrastructure ahead of time so customers can adopt Nvidia’s latest hardware as soon as it ships.
Taken together, the new instance types and accelerators underscore Google Cloud’s push to deepen its AI capabilities and give customers scalable, high-performance computing options for a wide range of workloads, whether built on Google’s own silicon or on Nvidia’s.