
Google Announces Compute Engine A3 Cloud-Based AI Supercomputer with up to 26,000 NVIDIA H100 Accelerators

Google Compute Engine A3 Supercomputer

Google announced the Compute Engine A3 cloud-based AI supercomputer at its Google I/O conference. The system will offer customers up to 26,000 NVIDIA H100 accelerators.

With this announcement, Google has demonstrated the importance it places on its competitive battle with Microsoft for AI supremacy. The A3 supercomputer is built on Google's Jupiter data-center network fabric, which links geographically dispersed compute clusters over fibre-optic channels. Each A3 virtual machine includes eight H100 accelerators interconnected through NVIDIA's NVSwitch and NVLink technology.
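
To illustrate what workloads on such a node typically look like, here is a minimal, hypothetical sketch of data-parallel training across the eight GPUs of a single A3-style VM, using PyTorch's DistributedDataParallel over NCCL (the collective-communication library that rides on NVLink/NVSwitch within a node). The model, data, and hyperparameters are placeholders, not Google's or NVIDIA's actual setup.

```python
# Sketch: single-node, 8-GPU data-parallel training with PyTorch DDP.
# Launch with: torchrun --nproc_per_node=8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):  # placeholder training loop with random data
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).square().mean()
        loss.backward()          # gradients are all-reduced across GPUs via NCCL
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```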

Cloud providers have realized that a one-size-fits-all approach is not enough and that AI training demands purpose-built infrastructure. Systems based on H100 accelerators are therefore being deployed at scale, and in March NVIDIA launched its own DGX Cloud service, which costs significantly more to rent than systems based on the previous-generation A100 accelerators. The H100 is claimed to be 30% more economical and three times faster than the A100, for example when training MosaicGPT, a large language model with seven billion parameters.

Google Cloud customers will be able to train and run AI applications on A3 virtual machines and use Google's AI development and management services through Vertex AI, Google Kubernetes Engine, and Google Compute Engine. The A3 virtual machines pair 4th Gen Intel Xeon Scalable processors (Sapphire Rapids) with the H100 accelerators.
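
For reference, provisioning such a VM programmatically might look like the following sketch using the google-cloud-compute Python client. The project ID, zone, image, and the a3-highgpu-8g machine-type name are assumptions based on Google's announcement; consult the Compute Engine documentation for the exact values and regional availability.

```python
# Sketch: creating an A3-style VM with the google-cloud-compute client.
# Assumed values (project, zone, machine type, image) are illustrative only.
from google.cloud import compute_v1

project = "my-project"   # assumption: replace with your project ID
zone = "us-central1-a"   # assumption: A3 availability varies by zone

instance = compute_v1.Instance()
instance.name = "a3-training-vm"
# a3-highgpu-8g is the announced 8x H100 machine type; the GPUs are part
# of the machine type itself, so no separate accelerator config is needed.
instance.machine_type = f"zones/{zone}/machineTypes/a3-highgpu-8g"

disk = compute_v1.AttachedDisk()
disk.boot = True
disk.auto_delete = True
init = compute_v1.AttachedDiskInitializeParams()
init.source_image = "projects/debian-cloud/global/images/family/debian-12"
disk.initialize_params = init
instance.disks = [disk]

nic = compute_v1.NetworkInterface()
nic.network = "global/networks/default"
instance.network_interfaces = [nic]

client = compute_v1.InstancesClient()
operation = client.insert(project=project, zone=zone, instance_resource=instance)
print(f"Creating instance, operation: {operation.name}")
```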

Microsoft’s AI supercomputer in Azure, built in collaboration with OpenAI, has 285,000 CPU cores and 10,000 GPU accelerators. Microsoft has also announced a next-generation AI supercomputer with even more GPUs, built on the same NVIDIA technologies as the A3. That machine uses NVIDIA Quantum-2 InfiniBand networking at speeds of up to 400 Gb/s.

While Google continues to promote its own TPU v4 AI chips, which run internal AI applications such as Google Bard, NVIDIA GPU accelerators have become the de facto standard for building AI infrastructure. NVIDIA’s CUDA parallel programming platform, paired with the dedicated Tensor Cores in GPUs such as the H100, currently delivers the fastest results.

Written by Akshay Prakash

Akshay Prakash is a BCA graduate and a passionate newbie programmer. With a keen interest in the ever-evolving world of coding, he delves into the latest trends, updates, and innovations within the coding sphere. Through his captivating and informative writing, Akshay brings coding news to life, making complex concepts accessible to readers of all backgrounds.
