Executive Roundtable: Strategies for Cooling Higher Rack Densities

June 21, 2021
In our Data Center Executive Roundtable, seven leading experts examine the latest trends in rack power density and how they impact cooling. Featuring execs from Aligned, EdgeConneX, Nortek, Iron Mountain, Infrastructure Masons, Vertiv and Data Aire.

Welcome to our 23rd Data Center Executive Roundtable, a quarterly feature showcasing the insights of thought leaders on the state of the data center industry, and where it is headed. Our Second Quarter 2021 roundtable offers insights on four topics: rack density trends and cooling strategies, the outlook for data center staffing, enterprise data center demand and options, and the state of edge computing.

Here’s a look at our distinguished panel: Kevin Facinelli of Nortek Data Center Cooling; John Hewitt of Vertiv; Andrew Schaap of Aligned; Michael DeVito of Iron Mountain; Eric Jensen of Data Aire; Phillip Marangella of EdgeConneX; and Katie O’Hara of Infrastructure Masons and Stream Data Centers.

The conversation is moderated by Rich Miller, the founder and editor of Data Center Frontier. Each day this week we will present a Q&A with these executives on one of our key topics. We begin with a look at our panel’s take on trends in data center rack density and cooling.

Data Center Frontier: What is the current state of data center rack density, and what lies ahead for cooling as more users put artificial intelligence to work in their applications?

KEVIN FACINELLI, Nortek Data Center Cooling

Kevin Facinelli, Nortek: Rack density is undoubtedly increasing due to artificial intelligence and other emerging applications. However, it’s difficult to predict how much cooling future heat loads will require. The best strategy to employ now is future-proofing a data center facility and ensuring there’s flexibility to adapt the cooling infrastructure to whatever heat loads emerge.

Water-based cooling has a brighter future than air cooling because it’s a more energy-efficient and effective heat transfer method. Furthermore, chilled water systems are probably best known for supplying fan coil walls, but they can also bring cooling directly to a rack via rear door heat exchangers and cold plate chip cooling. Today, many applications don’t require cooling at the rack, but it’s a huge advantage to have the water-based cooling infrastructure already in place to inexpensively bring chilled water directly to expanded racks.
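For a rough sense of the efficiency gap Facinelli describes, here is a minimal back-of-the-envelope sketch (our illustration, not Nortek’s) comparing how much heat a given volume of water can carry versus the same volume of air, using textbook property values and an assumed 10 K coolant temperature rise:

    # Illustrative comparison of volumetric heat capacity: Q = rho * cp * dT.
    # Property values are textbook approximations at room temperature.
    AIR_RHO, AIR_CP = 1.2, 1005          # air: kg/m^3, J/(kg*K)
    WATER_RHO, WATER_CP = 998, 4186      # water: kg/m^3, J/(kg*K)
    DT = 10                              # assumed 10 K coolant temperature rise

    q_air = AIR_RHO * AIR_CP * DT        # heat removed per m^3 of air, in J
    q_water = WATER_RHO * WATER_CP * DT  # heat removed per m^3 of water, in J

    print(f"air:   {q_air / 1e3:.0f} kJ per m^3")    # ~12 kJ/m^3
    print(f"water: {q_water / 1e6:.1f} MJ per m^3")  # ~41.8 MJ/m^3
    print(f"water carries ~{q_water / q_air:,.0f}x more heat per unit volume")

Per unit volume, water carries on the order of 3,500 times more heat than air, which is why rear door heat exchangers and cold plates can absorb loads that would otherwise demand enormous airflow.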

One strategy for handling increased rack density heat loads is the cooling distribution unit (CDU). The capital cost of CDUs discourages some operators from using them, but rear door heat exchangers, cold plate chip cooling and other less expensive methods are now cost-effective alternatives. That’s where future-proofing pays dividends: instead of expanding a traditional cooling infrastructure that wasn’t designed for scalability, operators can choose cooling plants available today that are flexible and designed to scale. That’s the best strategy for approaching future rack density heat loads.

JOHN HEWITT, Vertiv

John Hewitt, Vertiv: We’re seeing continued growth in rack power densities above 30 kW due to increased adoption of AI, and this is driving strong demand for liquid cooling technologies. We’re well positioned to meet this demand as a result of the years of R&D that organizations like ours have already invested in liquid cooling and its application in niche markets such as HPC.

As liquid cooling moves into the mainstream, the solutions, and the design and commissioning processes that support them, are maturing quickly, particularly around the integration of liquid into air-cooled data centers, which is the fastest growing segment of the market. Most enterprises will likely turn to colocation to support their high-density racks, and we are seeing providers add liquid cooling infrastructure to some suites to meet evolving customer requirements.

ANDREW SCHAAP, Aligned

Andrew Schaap, Aligned: The growth of artificial intelligence (AI), machine learning and other compute-intensive workloads is driving higher densities in racks. According to Uptime Institute, the mean density was 8.4 kW per rack; however, many of today’s AI applications and high-performance computing (HPC) environments can draw 30 kW to 50 kW per rack or more.

Traditional cooling methods won’t be able to scale under the processing demands of these applications, and they are less likely to meet the requirements of today’s efficiency-conscious customers. The key to overcoming rising rack densities is rethinking legacy design and cooling methodologies. This includes doing away with configuration at static densities, removing heat at its source rather than simply making outside air cold and blowing it into the data center, and making infrastructure adaptive so that it dynamically adjusts to IT loads without the need to reconfigure the entire environment or strand capacity.
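To see why air-based designs strain at those densities, here is a quick illustrative calculation (our assumptions, not Aligned’s) of the airflow needed to carry away a rack’s heat, using flow = P / (rho * cp * dT) with an assumed 11 K (roughly 20°F) exhaust temperature rise:

    # Airflow required to remove a rack's heat load: flow = P / (rho * cp * dT).
    # DT is the assumed temperature rise of air passing through the servers.
    RHO, CP = 1.2, 1005   # air density (kg/m^3) and specific heat (J/(kg*K))
    DT = 11               # assumed 11 K (~20 F) exhaust temperature rise

    for kw in (8.4, 30, 50):  # Uptime's mean vs. the AI/HPC racks cited above
        m3_s = kw * 1000 / (RHO * CP * DT)
        cfm = m3_s * 2118.88  # 1 m^3/s is about 2,118.88 CFM
        print(f"{kw:>4} kW rack -> {m3_s:.2f} m^3/s (~{cfm:,.0f} CFM)")

Under these assumptions, an 8.4 kW rack needs roughly 1,300 CFM, while a 50 kW rack needs nearly 8,000 CFM through a single cabinet, which is where removing heat at its source becomes the more practical approach.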

As AI continues to proliferate and becomes widely used in cooling itself, I think we’ll see continued gains not only in energy efficiency and data center sustainability, but also in automation and speed-to-market.

MICHAEL DeVITO, Iron Mountain

Michael DeVito, Iron Mountain: There is no longer a single answer to the density question. For retail customers, we see that average density levels haven’t changed much over the last couple of years; 3 kW to 5 kW per rack is most common.

The large cloud providers are more driven to optimize processing power per rack, and this is reflected in the average density: today we are seeing 8 kW to 12 kW, but cloud providers continue to push the density limits. Expectations are that the density of hyperscale deployments will continue to increase.

The growing use of AI applications means higher requirements for processing power, and thus higher densities. The so-called supercomputers used today to process truly complex algorithms already require densities of 25 kW per rack and even higher.

We see liquid cooling solutions as a potential answer to these types of densities in the future, but we also feel that part of the solution should come from the IT hardware designers. By optimizing power and processor cycles and cooling processors directly on the board, further gains should be possible.

ERIC JENSEN, Data Aire

Eric Jensen, Data Aire: For years, the threat of high rack densities loomed, yet each passing year saw the same 2-4 kW per rack average. That’s now nudging up.

While specific sectors like federal agencies, higher education, and enterprise R&D are certainly into high performance computing with 20, 80, or even 100 kW per rack, the reality today remains one of higher density in the realm of 8-12 kW per rack (see Uptime Institute’s 2020 global survey).

Cooling higher densities doesn’t mean over-building at the risk of stranded capacity for parts of the year. The answer is load matching: software that can respond to the load as it changes, plus the infrastructure hardware to support it.
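As a rough illustration of the load-matching idea (hypothetical interfaces; actual cooling controls are vendor-specific), here is a minimal proportional control loop that steps cooling output toward the measured IT load instead of running at a fixed setpoint:

    # Minimal sketch of load matching: a proportional controller that trims
    # the cooling setpoint toward the measured IT load each control cycle.
    def match_cooling(it_load_kw: float, cooling_kw: float, gain: float = 0.5) -> float:
        """Return a cooling setpoint nudged toward the current IT load."""
        error = it_load_kw - cooling_kw  # positive: under-cooling; negative: over
        return max(0.0, cooling_kw + gain * error)

    # The setpoint tracks a varying load rather than over-building (stranded
    # capacity) or falling behind (hot spots).
    cooling = 40.0
    for it_load in (40, 55, 70, 70, 60):
        cooling = match_cooling(it_load, cooling)
        print(f"IT load {it_load:>3} kW -> cooling setpoint {cooling:5.1f} kW")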

PHILLIP MARANGELLA, EdgeConneX

Phillip Marangella, EdgeConneX: Rack density is affected by many factors, from the increasing use of GPU-based servers to increased attention to the overall footprint of a data center and how that space can impact per-rack power numbers. What we know, however, is that rack density is consistently increasing, and it’s one part of a larger effort by data center providers like EdgeConneX, where the topic of sustainability includes power generation, PUE, water usage, biodiversity at data center sites, emissions, and more.

Density is improving year over year, but power efficiency is, in some ways, more important. Power-efficient servers can mean more servers in a rack, with less electricity needed to drive more processing power. We see more deployments taking place at the edge, where the volume and velocity of data processing are frequently at their highest levels.

KATIE O’HARA, Infrastructure Masons and Stream Data Centers

Katie O’Hara, Infrastructure Masons and Stream Data Centers: We expect AI/machine learning to affect rack density in several ways. For end-users deploying AI applications (which are typically deployed on GPUs) we expect higher rack densities.

As our clients continue to deploy machine learning, we expect to see workload, operating envelope and rack density evolve to adapt to changes in technology (e.g. Direct Liquid Cooling to the rack) or to take advantage of favorable energy costs.

NEXT: The outlook for data center staffing. 

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
