Google: New C3 Instances are ‘Next Wave’ for Google Cloud Infrastructure

Oct. 11, 2022

Google Cloud has rolled out new virtual machine instances based on Intel’s latest processors, calling the C3 VM “the next wave of Google Cloud infrastructure” and a key step towards creating more powerful automation options for cloud operators. The company also announced new cloud regions in six more countries, continuing the rapid expansion of its global network.

The announcements were part of a news avalanche for Google Cloud Next, the company’s annual cloud technology event. Here are some details on the infrastructure-focused updates.

Updated VMs Offer Latest Intel Processors

Google said it has begun partner pilots of the new C3 machine series powered by the 4th Gen Intel Xeon Scalable processor and Google’s custom Intel Infrastructure Processing Unit (IPU). By combining CPUs and an IPU, the C3 architecture can provide more efficient use of processing resources, adopting an accelerator model that has been popularized in high-performance computing (HPC) and has been widely used for AI workloads.

The IPU is an ASIC (application-specific integrated circuit), a programmable network device that offloads and securely accelerates system-level infrastructure functions such as networking, storage, and security, which is particularly useful in managing microservices. It’s an example of how powerful new chips are enabling specialized approaches to boost the performance of IT infrastructure.

The company said the C3 machine instances deliver strong performance gains for high-performance computing and data-intensive workloads, especially when paired with the recently announced Hyperdisk block storage.

“Meeting the rapidly growing demands of our customers’ high performance computing and data-intensive workloads requires deep innovation,” said Nirav Mehta, Sr. Director of Product Management, Cloud Infrastructure Solutions. “At Google Cloud, we know we can’t rely on ever-faster CPUs alone, like Moore’s Law has enabled in the past.

“Customers can either optimize their workloads for a given platform, or we can offer them a platform that is optimized for their specific needs,” Mehta added. “At Google Cloud, we choose the latter.”

As we’ve previously noted, cloud platforms have been a game changer in transforming the hardware refresh, the tech industry ritual of buying new servers to take advantage of newer, faster processors. Gone are the days of purchasing physical servers, shipping them, and installing them in a data center. Cloud platforms now design and deploy custom hardware, allowing the IT refresh to be driven by changes in a software console.
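To make the idea concrete, here is a minimal sketch of what such a software-driven refresh might look like using the gcloud CLI. The instance name, project zone, and machine types are placeholders, and an actual C2-to-C3 migration may involve additional considerations (such as network interface settings) not shown here.

```shell
# Hypothetical sketch of a "hardware refresh" performed entirely in software.
# Assumes the gcloud CLI is installed and authenticated; names are placeholders.

# Stop the VM (the machine type can only be changed while it is stopped).
gcloud compute instances stop my-app-vm --zone=us-central1-a

# Move the workload from a previous-generation shape to a C3 shape.
gcloud compute instances set-machine-type my-app-vm \
    --zone=us-central1-a \
    --machine-type=c3-standard-4

# Restart the instance on the new hardware generation.
gcloud compute instances start my-app-vm --zone=us-central1-a
```

No physical server is ordered, racked, or cabled; the "refresh" is a few console or CLI operations against hardware the cloud provider has already deployed.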

Google says early C3 customers report an increase in performance over the previous generation C2.

“We were pleased to observe a 20% increase in performance over the current generation C2 VMs from Google Cloud in testing with one of our key workloads,” said Aaron Sheldon, Sr. Software Engineer, Snap Inc. “These continued performance improvements enable better end user experience and application cost efficiency.”

The Google announcement highlighted its collaboration with Intel.

“We are pleased to have co-designed the first ASIC Infrastructure Processing Unit with Google Cloud for its new C3 machine series,” said Nick McKeown, Senior Vice President, Intel Fellow and General Manager of Network and Edge Group. “A first of its kind in any public cloud, C3 VMs will run workloads on 4th Gen Intel Xeon Scalable processors while they free up programmable packet processing to the IPUs securely at line rates of 200Gb/s. This Intel and Google collaboration enables customers through infrastructure that is more secure, flexible, and performant.”

Mehta said the C3 instances are part of the ongoing effort to build a more efficient, manageable cloud at Internet scale.

“With the exponential rise in the complexity of cloud infrastructure, we as an industry must turn to automation to manage these platforms efficiently at scale,” said Mehta. “Along with Infrastructure as Code, custom chips like the Titan, the TPU and the IPU, pave the way for a not-so-distant future where we’ll automate over half of all infrastructure decisions, configuring systems dynamically in response to usage patterns.”

New Cloud Regions Keep Coming

Google Cloud’s infrastructure now spans 35 regions, 106 zones, 173 network edge locations and 22 subsea cables serving customers in over 200 countries and territories. Today the company said it will bring Google Cloud regions to six more countries: Austria, Czech Republic, Greece, Norway, South Africa, and Sweden.

The expansion into Sweden was hailed by one large customer.

“As a global business, based in Sweden, we are pleased to see continued investment in Scandinavia, and Sweden in particular, and with it the continued and important progress of sustainability in the infrastructure world,” said Tyson Singer, Vice President of Technology and Platforms for Spotify. “As a cutting edge technology company, Spotify values its partnership with Google Cloud as an important part of our ongoing growth and innovation journey.”

Moving Mainframe Workloads into the Cloud

Google Cloud announced Dual Run, a mainframe modernization solution that it says “dramatically simplifies and reduces risk of enterprises’ migrations of legacy mainframe systems to the cloud.”

Why mainframes? Google says 44 of the top 50 banks and 23 of the top 25 largest U.S. retailers still use mainframes to run their businesses. Dual Run is built on top of unique technology developed by Banco Santander, one of the largest banks in the world. Banco Santander has begun utilizing Dual Run to bring data and workloads onto Google Cloud’s infrastructure.

“Modernizing legacy IT infrastructure represents a significant stepping stone into the cloud era for many enterprises,” said Sachin Gupta, VP and GM, Infrastructure, Google Cloud. “By moving mainframe systems to the cloud, organizations have an opportunity to better utilize their data, implement stronger cybersecurity protections, and build a foundation for their digital transformations that will drive their future growth. Dual Run simplifies this process and reduces any associated risks.”

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
