Data Center Frontier

Charting the future of data centers and cloud computing.

NVIDIA Launches Grace CPU to Bring Arm Efficiency to Massive AI Workloads

By Rich Miller - April 12, 2021


The NVIDIA “Grace” CPU is designed to address the computing requirements for the world’s most advanced applications. (Image: NVIDIA)


The King of the GPUs is getting into the CPU business. NVIDIA is entering the CPU market with an Arm-based processor that will be tightly integrated with its next-generation GPUs.

The NVIDIA Grace CPU was introduced today at the NVIDIA GTC 2021 conference, and made an immediate splash in the high performance computing (HPC) world, as Grace will be used in powerful new supercomputers being developed by Los Alamos National Laboratory and the Swiss National Supercomputing Centre.

With its new CPU, NVIDIA is dipping a toe in the water of a market dominated by Intel and AMD. For now, it is targeting the very top of the HPC sector, as Grace is designed for AI supercomputing involving enormous datasets. But Grace gives NVIDIA an all-in-house option for the CPU-plus-GPU pairing that is the leading deployment model for accelerated HPC and AI computing systems, which have typically featured Intel and AMD CPUs.

“Today we’re introducing a new type of computer,” said Jensen Huang, founder and CEO of NVIDIA, in his keynote at GTC 2021.

NVIDIA’s goal is to use its integrated system to break new ground at the top of the market. For now, Grace will be implemented in the world’s most advanced computing systems, with the ability to adapt the Arm hardware to their needs – something that doesn’t broadly exist in the world of enterprise servers, which is dominated by x86. General availability of the Grace CPU is not expected until the beginning of 2023.

What’s clear is that Grace reflects NVIDIA’s ambition to play an even larger role as the tech landscape is transformed by AI. It is playing a long game, positioning itself for the evolution of massive datasets and AI models, which over time could make more data centers look more like supercomputing operations – what NVIDIA calls the “unified accelerated data center.” It also offers insights into the importance of Arm in NVIDIA’s roadmap.

“Leading-edge AI and data science are pushing today’s computer architecture beyond its limits – processing unthinkable amounts of data,” said Huang. “Using licensed Arm IP, NVIDIA has designed Grace as a CPU specifically for giant-scale AI and HPC. Coupled with the GPU and DPU, Grace gives us the third foundational technology for computing, and the ability to re-architect the data center to advance AI.”

Larger AI Models Require More Compute, Energy

In the GTC keynote, Huang said the giant computer models that power artificial intelligence are exploding in size, testing the limits of current computing architectures. Addressing this challenge is the theme for GTC 21, where NVIDIA is outlining product updates across the many verticals it serves, including HPC, enterprise computing, the automotive sector and software simulation for “digital twins.”

The Grace CPU is named for computing pioneer Grace Hopper, and based on the energy-efficient Arm microarchitecture found in billions of smartphones and edge computing devices. An intriguing element of the announcement is how Arm might improve the energy efficiency of the most powerful HPC hardware. Even modest gains in efficiency at the chip level can be amplified across massive systems.
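
The amplification effect is simple arithmetic. A minimal sketch, using purely hypothetical numbers (the node count and per-node saving below are illustrative assumptions, not NVIDIA figures):

```python
# Back-of-the-envelope sketch: how a small per-node power saving
# scales across a large system. All numbers are hypothetical.

def fleet_savings_kw(nodes: int, watts_saved_per_node: float) -> float:
    """Total power saved across all nodes, in kilowatts."""
    return nodes * watts_saved_per_node / 1000.0

# Assume a 10,000-node system saving a modest 30 W per node.
saved = fleet_savings_kw(10_000, 30.0)
print(f"{saved:.0f} kW saved fleet-wide")  # prints "300 kW saved fleet-wide"
```

At data center scale, a saving that is negligible on one board becomes a meaningful fraction of a facility's power budget.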

NVIDIA is introducing the Grace CPU as the volume of data and size of AI models are growing exponentially. Today’s largest AI models include billions of parameters and are doubling every two-and-a-half months.
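 
The doubling claim compounds quickly. A quick sketch of the projection (the starting size and time horizon are illustrative assumptions):

```python
# Project model size under the stated growth rate: parameter counts
# doubling every 2.5 months. Inputs are illustrative, not NVIDIA data.

def projected_params(start_params: float, months: float,
                     doubling_months: float = 2.5) -> float:
    """Parameter count after `months`, doubling every `doubling_months`."""
    return start_params * 2 ** (months / doubling_months)

# A 1-billion-parameter model, one year at that rate (~4.8 doublings):
print(f"{projected_params(1e9, 12):.3g} parameters")
```

At that pace a model grows by well over an order of magnitude per year, which is the growth curve Grace is positioned against.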

NVIDIA says a Grace-based system will be able to train a one trillion parameter natural language processing (NLP) model 10 times faster than today’s state-of-the-art NVIDIA DGX-based systems, which run on x86 CPUs. In addition, NVIDIA NVLink interconnect technology will provide up to 900 GB/s connection between the NVIDIA Grace CPU and NVIDIA GPUs to enable higher aggregate bandwidth.
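
The 900 GB/s figure can be put in perspective with some rough arithmetic. A sketch, assuming 2 bytes per parameter (FP16, an assumption on our part) for a one-trillion-parameter model:

```python
# Rough arithmetic on the quoted 900 GB/s CPU-GPU link: time to stream
# the weights of a 1-trillion-parameter model once. The 2-bytes-per-
# parameter (FP16) figure is an assumption, not from the announcement.

def transfer_seconds(params: float, bytes_per_param: float,
                     gb_per_s: float) -> float:
    """Seconds to move params * bytes_per_param over a gb_per_s link."""
    return params * bytes_per_param / (gb_per_s * 1e9)

t = transfer_seconds(1e12, 2, 900)
print(f"{t:.2f} s to stream the full model once")  # ≈ 2.22 s
```

Moving 2 TB of weights in a couple of seconds, rather than tens of seconds over a conventional PCIe link, is the kind of gap NVIDIA is pointing at with the CPU-GPU NVLink connection.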

“With an innovative balance of memory bandwidth and capacity, this next-generation system will shape our institution’s computing strategy,” said Thom Mason, Director of Los Alamos National Laboratory (LANL), a leading supercomputing center for the U.S. Department of Energy. “Thanks to NVIDIA’s new Grace CPU, we’ll be able to deliver advanced scientific research using high-fidelity 3D simulations and analytics with data sets that are larger than previously possible.”

LANL will integrate Grace into its design for a “leadership-class advanced technology supercomputer” scheduled to be delivered in 2023. Meanwhile, the Swiss National Supercomputing Centre (CSCS) will work with NVIDIA and Hewlett Packard Enterprise to create its Alps supercomputer in 2023.

Artist’s rendering of Alps, set to be the world’s most powerful AI-capable supercomputer, announced by the Swiss National Supercomputing Centre (CSCS), Hewlett Packard Enterprise (HPE) and NVIDIA. (Image: NVIDIA)

“NVIDIA’s novel Grace CPU allows us to converge AI technologies and classic supercomputing for solving some of the hardest problems in computational science,” said CSCS director Prof. Thomas Schulthess. “We are excited to make the new NVIDIA CPU available for our users in Switzerland and globally for processing and analyzing massive and complex scientific data sets.”

A Milestone for Arm in the Data Center?

Over the past decade, the notion of using Arm to transform data center energy efficiency has been a big vision with limited results. NVIDIA hopes to change that with its planned $40 billion acquisition of Arm, which as we noted last year “is likely to have broad impact on how the world’s IT users harness artificial intelligence.” Arm CEO Simon Segars says the debut of Grace illustrates how “Arm drives innovation in incredible new ways every day.”

“NVIDIA’s introduction of the Grace data center CPU illustrates clearly how Arm’s licensing model enables an important invention, one that will further support the incredible work of AI researchers and scientists everywhere,” said Segars.

One aspect of Arm that has complicated its efforts in the server market is that it uses a different instruction set than Intel’s dominant x86 servers, and thus requires architecture-specific software. That means a bigger commitment from customers seeking to benefit from the better efficiency of the Arm architecture. At GTC, NVIDIA is announcing a series of partnerships to expand the support for the Arm architecture.
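
The incompatibility is visible even from userspace: x86 and Arm binaries are not interchangeable, so deployments must detect the architecture and ship per-architecture builds. A minimal standard-library sketch:

```python
# x86 and Arm use different instruction sets, so compiled software must
# be built per architecture. Detecting which one we're on at runtime:

import platform

def is_arm(machine: str) -> bool:
    """True for common Arm identifiers reported by platform.machine()."""
    return machine.lower() in ("aarch64", "arm64")

arch = platform.machine()  # e.g. 'x86_64' on Intel/AMD, 'aarch64' on Arm Linux
kind = "Arm" if is_arm(arch) else "x86 or other"
print(f"This host reports '{arch}' ({kind}); binaries built for the "
      f"other architecture would need a rebuild or emulation to run here.")
```

This per-architecture build requirement, multiplied across an entire software stack, is the adoption cost the Arm ecosystem partnerships are meant to reduce.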

“Arm’s ecosystem of technology companies from around the world are ready to take Arm-based products into new markets like cloud, supercomputing, PC and autonomous systems,” said Huang. “With the new partnerships announced today, we’re taking important steps to expand the Arm ecosystem beyond mobile and embedded.”


Tagged With: Artificial Intelligence, GPUs, HPC, NVIDIA

About Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
