What the Cloud Revolution Tells Us About Data Center Power Challenges

March 15, 2024
The data center industry is under fire for its energy use amid surging demand for AI capacity. There are lessons available from the sector’s response to similar concerns about cloud computing.

"Data centers have become the front lines in our society’s digital transition, creating tension with the limitations of the power grid."

I wrote that in January as the central theme of DCF's "8 Trends" forecast for 2024. Several months later, the data center power crunch is squarely in the national headlines. 

The growing attention to U.S. power constraints is ramping up pressure on the data center industry, which is already facing scrutiny in many communities.

So what happens now? How will the tensions between Internet growth and existing infrastructure play out?

There are many uncertainties about the road ahead. But there are also lessons from previous inflection points in digital infrastructure that may help navigate the current resource-constrained landscape.

The emergence of cloud computing in 2008-12 prompted scary headlines about data center energy use and predictions that its growth was unsustainable. The solution was the rise of the cloud data center, a design optimized for energy efficiency at massive scale. The hyperscale revolution enabled extraordinary gains, stabilizing data center energy use in a period of dynamic growth.   

One prediction: If the past 25 years tells us anything, it is that the Internet is virtually unstoppable. Demand for digital services routes around most roadblocks, much like the network itself.

 

History Rhymes

The data center industry has been discussing its power challenges for several years, with the shortages in Virginia’s Data Center Alley putting an exclamation point on these concerns. The launch of ChatGPT in November 2022 kicked off an additional wave of demand for generative AI services.

In recent months, awareness of grid capacity problems has prompted deeply critical coverage of the data center industry, as seen in headlines over the past two weeks.

These headlines are tough, but not entirely new. History shows that data centers have routinely been characterized as “energy hogs” by media and environmental groups, especially during the early years of cloud computing growth in 2008-12.

The most notable example was “The Cloud Factories,” a 2012 series by the New York Times, which spent more than a year investigating the data center industry and its energy practices.

"Most data centers, by design, consume vast amounts of energy in an incongruously wasteful manner, interviews and documents show," the Times wrote, adding that “data centers can waste 90 percent or more of the electricity they pull off the grid."

“It’s just not sustainable,” one industry consultant told The Times. “They’re going to hit a brick wall.”

In reality, the industry was already several years into a design overhaul that revolutionized how data centers were built and operated. A pivotal shift occurred in 2007-2009, when data center providers began sharing best practices in industry forums and conferences, a major change from the secrecy that once surrounded data center operations. This period also saw the founding of The Green Grid, an industry consortium focused on energy efficiency.

Hyperscale designs created a virtuous cycle in several ways:

  • Operators like Google, Microsoft and Facebook/Meta invested heavily in innovation to reduce their power usage and environmental impact, optimizing everything from chips to data halls. The broad adoption of the Power Usage Effectiveness (PUE) metric helped companies track their improvement.
  • IT workloads began a massive migration out of legacy on-premises data centers, moving to more efficient purpose-built colocation facilities and cloud computing platforms. Meanwhile, the delivery of music, video and government services shifted to digital, concentrating business processes in heavily optimized high-tech facilities.
  • As a result, data centers dramatically improved their energy efficiency, holding industry electricity use to only a small increase during a period of explosive growth for cloud computing and online services.
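The PUE metric mentioned above is a simple ratio: total facility power divided by the power that reaches IT equipment. A minimal sketch, using made-up illustrative figures rather than measurements from any specific facility:

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# A PUE of 1.0 is the theoretical ideal: every watt drawn from the grid
# goes to IT gear, with nothing lost to cooling, power conversion, etc.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Return the PUE ratio for a facility."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical figures for illustration only:
# a legacy enterprise data center spending as much on overhead as on IT...
legacy = pue(total_facility_kw=2000, it_load_kw=1000)      # 2.0
# ...versus a hyperscale facility with lean cooling and distribution.
hyperscale = pue(total_facility_kw=1100, it_load_kw=1000)  # 1.1

print(f"legacy PUE: {legacy:.2f}, hyperscale PUE: {hyperscale:.2f}")
```

The gap between those two numbers is why the metric caught on: it gave operators a single figure to benchmark and publicize as designs improved.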

These remarkable gains were documented by U.S. government data in 2016 and reaffirmed in a 2020 paper in the journal Science, which found that electricity usage by global data centers grew just 6% from 2010 to 2018 (from 194 TWh to 205 TWh). During that time, the installed base of physical servers rose 30%, data center IP traffic rose 11-fold, and installed storage capacity rose 26-fold.
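A quick back-of-envelope check of those figures shows how modest the growth really was once annualized:

```python
# Sanity-check of the Science (2020) figures quoted above:
# global data center electricity use, 194 TWh (2010) to 205 TWh (2018).

start_twh, end_twh = 194.0, 205.0
years = 8

total_growth = (end_twh - start_twh) / start_twh        # ~0.057, i.e. ~6% total
annual_rate = (end_twh / start_twh) ** (1 / years) - 1  # ~0.7% per year

print(f"total growth over 2010-18: {total_growth:.1%}")
print(f"annualized growth rate:    {annual_rate:.2%}")
```

Under 1% per year in electricity use, against an 11-fold rise in traffic, is the scale of the efficiency gain the hyperscale era delivered.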

 

What Does This Mean in 2024?

How do these lessons apply to the rise of generative AI? There are similar opportunities, but also some important differences.

The most likely path forward is a “now and later” strategy in which digital infrastructure undergoes a new round of energy optimization, while moving toward a future with less reliance on public utilities and the power grid.    

This two-phase approach includes:

  • A new round of facility-level energy optimization, featuring an accelerated transition to liquid cooling, and new server and system designs customized for specific workloads, especially AI.
  • The gradual adoption of on-site energy generation at scale (eventually including nuclear) as well as microgrids and energy storage.
  • A shift of data center construction into new markets with available power, already underway, including places that have not previously seen large campus developments, like Mississippi (AWS) and Indiana (Google and Meta).

This once again creates an opportunity for data center developers and operators to optimize their operations and make more power available to directly support IT loads.

The good news is that hyperscalers are really good at this. Meanwhile, developers have raised their efficiency game to serve these huge customers, and investors are eager to fund digital infrastructure.

The bad news is that after years of energy optimization, further efficiency gains will be harder to come by. In the 2010-15 period, the biggest improvements came from innovation in cooling. That is likely to also hold true in the AI tsunami, with liquid cooling playing a central role.

Full disclosure: I’ve said this before, most notably in a DCK story in 2014 when I wrote that a meaningful shift to liquid cooling was “likely inevitable” within the next 3 to 5 years. Ten years later, it’s still not here but seems much, much closer.

The company nudging the industry toward liquid cooling is NVIDIA, whose GPU chips are powering the lion’s share of AI workloads. Each of its upcoming B100 GPU cards will require 1,000 watts – yes, 1 kilowatt – and CEO Jensen Huang has hinted that the next version of the NVIDIA DGX server will require liquid cooling.
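Some rough rack-power arithmetic shows why that 1 kW figure pushes toward liquid cooling. The per-GPU wattage comes from the article; the 8-GPU chassis, the overhead multiplier for CPUs, memory, networking and power conversion, and the servers-per-rack count are illustrative assumptions, not NVIDIA specifications:

```python
# Back-of-envelope rack density estimate for a B100-class deployment.
# Only GPU_WATTS comes from the article; the rest are assumptions.

GPU_WATTS = 1_000        # per B100 card, per the article
GPUS_PER_SERVER = 8      # assumption: DGX-style 8-GPU chassis
SYSTEM_OVERHEAD = 2.0    # assumption: rest-of-server draw roughly doubles GPU power
SERVERS_PER_RACK = 4     # assumption

server_kw = GPU_WATTS * GPUS_PER_SERVER * SYSTEM_OVERHEAD / 1_000  # 16 kW/server
rack_kw = server_kw * SERVERS_PER_RACK                             # 64 kW/rack

print(f"per server: {server_kw:.0f} kW, per rack: {rack_kw:.0f} kW")
```

Even under these conservative assumptions, the result lands far above the densities that air cooling handles comfortably, which is why liquid cooling keeps coming up in every AI infrastructure conversation.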

Industry collaboration has been crucial to gains in energy efficiency, advancing standardization that improved economics and accelerated adoption. Liquid cooling appears poised to follow a similar path with last week’s launch of the Liquid Cooling Coalition, whose members include Intel, Vertiv, Supermicro, Submer, and Shell. The group has made sustainability a key facet of its mission.

Intel and the Open Compute Project also have working groups to advance liquid cooling.

 

The Next Phase for Power

Refinements in data center design and liquid cooling can help the industry make the most of available power. But what about the unavailable power? Finding enough power capacity to keep pace with AI demand is a harder problem.

We’ve written lots about power constraints and the data center space crunch. Data centers rely on utilities to serve their customers, and in many markets these utilities are squeezed by transmission constraints.

On-site generation is a hard problem, but would provide hyperscalers greater control over the pace of deployments. There’s also a wave of startups seeking to address the looming energy gap. An example is this week’s launch of Verrus, which plans to build data centers with flexible energy management using microgrids and energy storage.    

The coming shift to on-site generation is “the third wave of data center power,” according to industry veteran Dan Golding of Appleby Strategy Group.

In a recent analysis, Golding says the data center industry will shift to self-generation by 2035. It’s a long timeline because much of the progress relies upon innovations that don’t exist yet or are not ready for broad adoption, such as small modular nuclear reactors (SMRs).                

“We need to develop several missing technologies, especially hydrogen conversion, and we must move decisively to lower the cost of SMRs by cutting regulation and red tape as well as ruthlessly standardizing SMR designs rather than allowing a diversity of products,” Golding writes. “Data center providers, developers, and operators play a key part in this - we must be the drivers and financial sponsors of the foundational technologies to effectuate this change.”

But it won’t happen overnight. In the meantime, building new transmission lines takes time, as does production of large-capacity transformers. There will be dislocations in supply and probably pricing along the way, and the sharp critiques of data centers are likely to continue.

But some are wondering aloud whether other stakeholders will face similar scrutiny.  

“A big question,” former Microsoft energy executive Brian Janous told the New York Times, “is how much outside pressure utilities and state regulators will face to do things differently.”

 


About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
