
Amazon Unveils Trainium2 and Graviton4 Chips to Boost AI in Data Centers

Amazon has announced two new chips, Trainium2 and Graviton4, aimed at enhancing artificial intelligence (AI) capabilities in its data centres. The move positions Amazon Web Services (AWS) as a formidable competitor to Microsoft in high-performance computing and machine learning.

Trainium2:

Trainium2, the first of the two, is a machine-learning chip designed specifically for training generative AI models. Amazon says it delivers up to four times the training performance of its predecessor, offering up to 200 teraflops of compute. The chip is intended for deployment in scalable EC2 UltraClusters that can grow to as many as 100,000 chips.
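For context, capacity of this kind is typically provisioned through the standard EC2 APIs. The sketch below shows how a request for accelerator-backed instances might be assembled for boto3's `run_instances` call. The instance type `trn2.48xlarge` and the AMI ID are placeholder assumptions for illustration, not confirmed product identifiers; check the EC2 documentation for the real ones.

```python
# Hypothetical sketch: requesting Trainium2-backed EC2 capacity.
# The instance type "trn2.48xlarge" and the AMI ID below are placeholder
# assumptions, not confirmed AWS identifiers.

def build_run_request(count: int = 1) -> dict:
    """Assemble keyword arguments for an ec2.run_instances call."""
    return {
        "ImageId": "ami-0123456789abcdef0",  # placeholder machine image ID
        "InstanceType": "trn2.48xlarge",     # assumed Trainium2 instance type
        "MinCount": count,                   # launch exactly `count` instances
        "MaxCount": count,
    }

if __name__ == "__main__":
    params = build_run_request(count=2)
    # With credentials configured, the actual launch would look like:
    #   import boto3
    #   ec2 = boto3.client("ec2", region_name="us-east-1")
    #   response = ec2.run_instances(**params)
    print(params["InstanceType"])
```

The actual `boto3` call is left commented out because it requires AWS credentials and incurs charges; the helper only builds the request parameters.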

A notable aspect of Trainium2 is its cost and energy efficiency: Amazon claims that businesses can expect a 50% reduction in deployment costs and a 29% decrease in energy consumption. The chip is an evolution of the first-generation Trainium introduced in late 2020, and aims to improve performance while reducing reliance on GPU-based instances.

Graviton4:

Graviton4, the second chip, targets high-performance servers, open-source databases, and other compute-intensive workloads. It features 96 Neoverse V2 cores, 2 MB of L2 cache per core, and 12 channels of DDR5-5600 memory, which Amazon says translates into a 30% improvement in compute performance and a 75% increase in memory bandwidth over Graviton3. Graviton4 also adds security features, including always-on memory encryption and dedicated caches for each vCPU.

David Brown, AWS's Vice President of Compute and Networking, emphasized the significance of Graviton4, describing it as the most powerful and energy-efficient chip the company has ever built. He also highlighted the growing interest in generative AI, positioning Trainium2 as a key tool for customers to train machine-learning models more efficiently.

These launches are part of Amazon's broader strategy of developing proprietary chips for AI workloads. They are a clear signal of Amazon's ambition to lead in the AI space, competing head-on with Microsoft, which recently announced its own data-centre processors, the Maia 100 and Cobalt 100.

The arrival of these chips is particularly relevant for companies like Anthropic, an AI firm founded by former OpenAI employees. Anthropic recently announced Claude 2.1, a language model that can process up to 150,000 words in a single prompt and that could benefit greatly from Amazon's new chips.

Written by Vishak
