Google Unveils 48V Data Center Rack, Joins Open Compute

March 10, 2016
SAN JOSE, Calif. – The hyperscale data center of the future will run on 48 volt DC power, according to Google, which unveiled the custom design powering its servers and joined the Open Compute Project to evangelize this vision to the world.

“To get the efficiency in cost and power, you have to feed 48 volts to the motherboard and convert only once,” said Urs Hölzle, VP of technical infrastructure at Google. “This is something we’ve deployed at scale. This isn’t an experimental system.”

Hölzle said Google is already working with Facebook on a rack design that it will contribute to the Open Compute Project, and which it hopes will establish a standard for the use of 48V power in large data centers. “We want the whole industry to be able to use it,” said Hölzle.

The Open Compute Project now boasts an extraordinary alignment of the largest players in the hyperscale data center industry, with Google, Facebook and Microsoft all on board. The only major holdout is Amazon Web Services, which has made only limited disclosures about the hardware technology powering its cloud computing platform.

Targeting the Power Chain

Google has a long history of innovation in data center power distribution, including the use of an on-board battery for its servers, which eliminates the need for a centralized UPS system. By simplifying the path of electricity from the power grid to the server, Google has eliminated steps that can waste power, including conversions between AC and DC and stepping down to lower voltages. In bringing higher voltage to the motherboard, Google is taking an additional step to streamline this process.

Hölzle said the move can reduce energy losses by as much as 30 percent compared with traditional 12 volt power distribution to the motherboard. He said Google sometimes deploys 12V servers in the 48V racks, but then has to add a DC-to-DC conversion at the tray level.
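The intuition behind that figure is that end-to-end efficiency is the product of each conversion stage's efficiency, so every stage removed compounds the savings. A minimal sketch of the arithmetic, using hypothetical per-stage efficiencies (not Google's published numbers), shows how a shorter power chain can cut losses on the order of 30 percent:

```python
def chain_efficiency(stage_efficiencies):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    total = 1.0
    for eff in stage_efficiencies:
        total *= eff
    return total

# Hypothetical multi-stage 12V chain: AC/DC rectification,
# 12V step-down, point-of-load regulation.
legacy = chain_efficiency([0.97, 0.96, 0.96])

# Hypothetical 48V chain: one AC-to-48V conversion plus a single
# on-board conversion at the motherboard.
direct_48v = chain_efficiency([0.975, 0.95])

loss_legacy = 1 - legacy      # fraction of input power wasted, 12V chain
loss_48v = 1 - direct_48v     # fraction of input power wasted, 48V chain

print(f"12V chain loss: {loss_legacy:.1%}")
print(f"48V chain loss: {loss_48v:.1%}")
print(f"Loss reduction: {1 - loss_48v / loss_legacy:.0%}")
```

The specific efficiencies are illustrative assumptions; the point is structural — fewer conversion stages multiply to a higher end-to-end efficiency, and the relative reduction in losses is larger than any single stage's improvement.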

A look at Google’s 48V server rack, and some data points on its benefits. (Image: Google)

It’s not a revolutionary idea, as 48V power has been widely used in the telecom industry for years. The data center industry has been discussing the benefits of bringing higher voltages to the rack for many years, but the active debate about voltage options, deployment challenges and safety issues has prevented any consensus around a new approach.

Google did what it has done for more than a decade – tested the concept in its own infrastructure, built it itself, and deployed it at scale.

A Big Vote for Open Compute

Google is also contributing a separate design for a shallower rack. While servers have standard widths and heights, their depth varies widely across different servers and vendors. Many Open Compute servers are deeper than those that Google uses.

“This is something we need,” said Hölzle. “We cannot currently deploy Open Compute racks in our data centers. It’s likely that server designs can be configured for both form factors.”

Hölzle rejects the idea that Google has been secretive with its in-house designs, noting that it has often shared designs through its data center efficiency summits and blog posts. He said that the decision to join Open Compute extends that commitment and creates additional scale for open hardware solutions.

“We think this will be something that saves users money,” said Hölzle.

We’ll have much more coverage from the Open Compute Summit in coming days, so be sure to follow our coverage on Twitter and via our email newsletter.

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.

