Google recently unveiled details about its latest data center artificial intelligence chip iteration and introduced an Arm-based central processing unit (CPU).
These developments mark significant advancements in Google’s efforts to enhance computing capabilities and offer innovative solutions for AI-driven workloads in data centers.
Significance of the Announcement:
Google’s tensor processing units (TPUs) provide a competitive alternative to Nvidia’s AI chips, although they are currently accessible only through Google’s Cloud Platform.
The introduction of Axion, an Arm-based CPU offered through Google Cloud, gives customers a new option for performance beyond traditional x86 chips.
Key Quotes:
Mark Lohmeyer, Vice President and General Manager of Compute and Machine Learning Infrastructure at Google Cloud, emphasized the ease of adopting Axion for workloads already running on Arm: “We’re making it easy for customers to bring their existing workloads to Arm.”
Context:
Google’s move follows competitors such as Amazon.com and Microsoft, which have developed their own Arm-based CPUs to differentiate their cloud computing services.
While Google previously designed custom chips for various purposes, Axion represents its first venture into CPU development.
By the Numbers:
- The new TPU v5p chip is designed to run in pods of 8,960 chips and delivers twice the raw performance of the previous TPU generation.
- Google claims Axion delivers 30% better performance than general-purpose Arm chips and 50% better performance than current x86 chips.
- Google uses liquid cooling to keep the TPU pods running at peak performance.
Future Outlook:
Axion already powers several Google services, including YouTube Ads on Google Cloud, with broader availability to the public planned for later this year.
The TPU v5p chip is now generally available through Google’s cloud platform.