
Google Unveils In-House Arm Server CPU: 50% Performance Boost Over x86, 60% Higher Efficiency

Lang Ke Jian Sat, Apr 13 2024 08:51 AM EST

At its "Cloud Next 2024" conference on April 9th, Google unveiled Axion, the company's first in-house Arm-architecture CPU designed specifically for data centers.

Rumors had been circulating as early as February 2023 that Google was developing two different Arm server CPUs.

One, codenamed "Maple," is based on Marvell technology and may build on work from the canceled "Triton" ThunderX3 and its planned successor, ThunderX4.

The second project, codenamed "Cypress," is being designed by Google's team in Israel.

According to The Next Platform, both Maple and Cypress were developed under the leadership of Uri Frank, who joined Google in March 2021 from Intel to become Vice President of server chip design.

Frank worked at Intel for more than two decades in engineering and management roles, eventually overseeing the design of several core chips for personal computers.

Google's Axion CPU uses the Arm "Demeter" Neoverse V2 core, rather than the newer "Poseidon" V3 or "Hermes" N3 cores that Arm announced in February of this year.

Amin Vahdat, Google's Vice President of Engineering for Machine Learning, Systems, and Cloud AI, stated: "The Axion processor combines Google's chip expertise with Arm's top-performing CPU cores, delivering a 30% performance boost compared to the fastest Arm-based general-purpose instances in today's cloud. Compared to similar instances based on x86, performance is increased by 50%, with a 60% improvement in energy efficiency."

Mark Lohmeyer, Google Cloud's Vice President and General Manager for Compute and Machine Learning Infrastructure, added: "Google Axion is built on an open foundation, allowing customers using the Arm architecture anywhere to easily adopt the Axion CPU without the need to redesign or rewrite their applications."

According to Google Cloud CEO Thomas Kurian, the Axion chip has already been deployed internally at Google (possibly on a limited basis) to support Bigtable, Spanner, BigQuery, Blobstore, Pub/Sub, Google Earth Engine, and the YouTube advertising platform. Later this year, these instances will become available for customers to rent directly on Google Cloud.

Additionally, in December last year, Google Cloud launched its next-generation AI accelerator, TPU v5p, which is designed for training some of the largest and most demanding generative AI models.

A single TPU v5p Pod contains 8,960 chips, more than double the number of chips in a TPU v4 Pod.

Google Cloud does not sell chips such as the Axion CPU and TPU v5p directly to the public; instead, enterprise customers rent them or use them within application services.

It is worth noting that many cloud service providers, including Amazon, Microsoft, Alibaba Cloud, and Baidu, rely on chips from manufacturers such as Nvidia, Intel, and AMD while also developing their own custom Arm server chips and AI accelerators. This approach lets them better serve their customers, reduce dependence on external suppliers, and cut costs.