Amazon is relying heavily on Arm Neoverse with the second-generation Graviton2, as well as custom silicon for AI inference.
AWS CEO Andy Jassy heavily touted the importance of the company's custom silicon during the keynote address at re:Invent 2019 in Las Vegas: the Nitro hypervisor silicon, a second-generation Graviton CPU built on Arm Neoverse cores, and new Inf1 instances built around the inference-focused AWS Inferentia chip.
The Graviton2 processors offer customized 64-bit Neoverse cores built on a 7nm production process, designed in-house by Amazon's Annapurna Labs team, which was responsible for the first-generation Graviton. Graviton2-powered offerings include up to 64 vCPUs, 25 Gbps enhanced networking, and 18 Gbps EBS bandwidth. ZDNet's Larry Dignan has an in-depth look at the second-generation Graviton.
SEE: AWS Lambda: A guide to the serverless computing framework (free PDF) (TechRepublic)
Graviton2-powered instances are available in three configurations:
- General Purpose (M6g and M6gd) – 1-64 vCPUs and up to 256 GiB of memory.
- Compute-Optimized (C6g and C6gd) – 1-64 vCPUs and up to 128 GiB of memory.
- Memory-Optimized (R6g and R6gd) – 1-64 vCPUs and up to 512 GiB of memory.
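As a rough sketch of launching one of these instances, the AWS CLI call below starts a General Purpose m6g.xlarge (4 vCPUs, 16 GiB). The AMI ID, key pair, and subnet are placeholder values, and an Arm64 AMI (such as Amazon Linux 2 for arm64) is required for Graviton2 instances:

```shell
# Launch a Graviton2-powered m6g.xlarge instance.
# ami-..., my-key, and subnet-... are placeholders — substitute your own;
# the AMI must be built for the arm64 architecture.
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type m6g.xlarge \
    --key-name my-key \
    --subnet-id subnet-0123456789abcdef0 \
    --count 1
```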
Graviton2 "can deliver up to 7x the performance of the A1 instances, including twice the floating point performance. Additional memory channels and double-sized per-core caches speed memory access by up to 5x," Jeff Barr, chief evangelist for AWS, noted in a blog post.
Likewise, Barr touted performance increases between Graviton and Graviton2:
- SPECjvm® 2008: +43% (estimated)
- SPEC CPU® 2017 integer: +44% (estimated)
- SPEC CPU 2017 floating point: +24% (estimated)
- HTTPS load balancing with Nginx: +24%
- Memcached: +43% performance, at lower latency
- x264 video encoding: +26%
- EDA simulation with Cadence Xcelium: +54%
The Nitro hypervisor silicon enables some of Graviton2's advantages. While Nitro has been around for years, AWS has rearchitected its infrastructure around it, and Graviton2 was built explicitly with Nitro in mind.
The new Inf1 instances for EC2 are proclaimed as offering "the fastest inferences in the cloud," with low latency, 3x higher throughput, and up to 40% lower cost-per-inference compared to G4 instances, with out-of-the-box support for TensorFlow, PyTorch, and MXNet. Inf1 is available in EC2 now, with support forthcoming for EKS and SageMaker.
Amazon's continued interest in building its own silicon presents a problem for a number of hardware vendors, foremost among them Intel—considering Intel's own difficulties moving to 10nm. The long-standing king of enterprise compute is facing attacks from all sides. AMD's reversal of fortunes with the Zen architecture—and AWS, Azure, and Oracle offering Zen-powered instances—makes AMD the obvious alternative to Intel: the two share the x86-64 instruction set, so migration requires nothing more than changing the instance type.
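To illustrate that last point, switching an existing EC2 instance between x86-64 instance types is just a stop/modify/start cycle; the instance ID and types below are placeholders:

```shell
# Hypothetical instance ID; moving from an Intel-based m5 to an AMD-based m5a.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --instance-type '{"Value": "m5a.xlarge"}'
aws ec2 start-instances --instance-ids i-0123456789abcdef0
```

Moving to a Graviton2 type such as m6g is not as simple, since the instance's AMI and binaries must be built for arm64 rather than x86-64.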
Fundamentally, Graviton2's success will be determined by pricing—if Amazon prices Graviton2 instances significantly lower than Intel and AMD instances, users will adopt them. If not, uptake is likely to be low.
- Multicloud: A cheat sheet (TechRepublic)
- Hybrid cloud: A guide for IT pros (TechRepublic download)
- Serverless computing: A guide for IT leaders (TechRepublic Premium)
- Top cloud providers 2019: AWS, Microsoft, Azure, Google Cloud; IBM makes hybrid move; Salesforce dominates SaaS (ZDNet)
- Best cloud services for small businesses (CNET)
- Microsoft Office vs Google Docs Suite vs LibreOffice (Download.com)
- Cloud computing: More must-read coverage (TechRepublic on Flipboard)