Report: Data Center Rack Density is Rising, and Heading Higher

Nov. 15, 2019
Data center rack densities are rising, and large enterprises expect that trend to continue, according to a new report from 451 Research, which suggests a growing opportunity for specialized cooling solutions, including liquid cooling.

“One of the key things we’re seeing is a change in density,” said Kelly Morgan, VP of Datacenter Infrastructure & Services at 451 Research. “Density is finally rising. We’ve been hearing this for a long time, but until recently, the average has been 5 kW a rack. This is now perceived to be increasing.”

Forty-five percent of companies said they expect an average density of 11 kW per rack or higher over the next year, according to a 451 survey of 750 enterprise users. That’s a huge change from 2014, when just 18 percent of 451 respondents reported densities beyond 10 kW.

What’s changed in the past five years? A major factor is the growth of data-crunching for artificial intelligence. Powerful new hardware for AI workloads is packing more computing power into each piece of equipment, boosting the power density – the amount of electricity used by servers and storage in a rack or cabinet – and the accompanying heat.
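
To put the math in concrete terms, here is a minimal back-of-the-envelope sketch in Python. The server counts and per-server wattages are hypothetical illustrations, not figures from the 451 report:

```python
# Back-of-the-envelope rack density math. Server counts and wattages
# below are hypothetical illustrations, not figures from the 451 report.

def rack_density_kw(server_count: int, watts_per_server: float) -> float:
    """Total power density of one rack, in kilowatts."""
    return server_count * watts_per_server / 1000.0

# A rack of traditional 1U enterprise servers lands near the
# long-cited 5 kW average ...
print(rack_density_kw(14, 350))    # 4.9 kW

# ... while a few GPU-dense AI nodes push the same rack past the
# 11 kW mark that 45 percent of respondents now expect.
print(rack_density_kw(4, 3000))    # 12.0 kW
```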

The trend is illustrated by this week’s announcement from startup Groq of an AI-optimized Tensor Streaming Processor (TSP) architecture it says is capable of 1 PetaOp/s performance in a single-chip implementation – equivalent to one quadrillion operations per second, or 1e15 ops/s.

The trend is challenging traditional practices in data center cooling, and prompting data center operators to adopt new strategies and designs, including liquid cooling.

End Users Report Higher Rack Density

High-density cooling and AI will be in the spotlight next week at the SC19 conference in Denver, the annual showcase for the supercomputing and high-performance computing (HPC) sectors. But cooling extreme workloads is becoming a concern for the enterprise and hyperscale sectors as well, as more data centers begin to resemble HPC environments.

“New chips are impacting density,” said Morgan. “AI and new applications need a lot more energy per chip, and this has implications for the data center. People are expecting this to continue, and it’s not going to be easy to handle.”

In a report titled The Infrastructure Imperative, 451 says 54 percent of respondents reported having HPC infrastructure, which concentrates significant computational power in a small footprint, while just over 50 percent said their firms use hyperconverged infrastructure.

That aligns with recent Uptime Institute surveys finding that almost 70 percent of enterprise data center users report that their average rack density is rising. The AFCOM State of the Data Center survey for 2019 also cited a trend toward denser racks: 27 percent of data center users said they expected to deploy HPC solutions, and another 39 percent anticipated using converged architectures, which tend to be denser than traditional servers.

Responses to the 451 survey question about expected average rack densities suggest that densities are moving above the 7 kW per cabinet cited as the average in the AFCOM data – and in some cases significantly higher.

This trend poses obvious challenges for the data center industry.

“Some enterprises will no longer be able to support the required densities in their on-premises datacenters, some datacenter providers will change their cooling systems to offer higher-density options, and cloud providers will have to change their build/operating strategies to deal with higher-density requirements,” Morgan notes.

The Liquid Cooling Opportunity

We’ve been tracking progress in rack density and liquid cooling adoption for years at Data Center Frontier as part of our focus on new technologies and how they may transform the data center. The reports of increased density from 451 are not a surprise. In our DCF 2019 forecast we suggested that “liquid cooling was finally ready for its closeup.”

The picture remains mixed, as end users report a steady increase in rack density, and there have been some large new installations for technical computing applications. Hyperscale operators, the largest potential market, remain wary of wholesale adoption of liquid cooling.

Most servers are designed to use air cooling. A number of service providers have focused on air-cooled solutions optimized for high-density workloads, including Switch, Aligned Energy and ScaleMatrix. Others are housing gear in cabinets equipped with water-chilled cooling doors, including Colovore in Santa Clara and the LinkedIn facility at the STACK Infrastructure data center in Portland, Oregon.

Google’s decision to shift to liquid cooling with its latest hardware for artificial intelligence raised expectations that others might follow. Alibaba and other Chinese hyperscale companies have adopted liquid cooling, and Microsoft recently indicated that it has been experimenting with liquid cooling for its Azure cloud service. But Microsoft has decided to hold off for now, and Facebook has instead opted for a new approach to air cooling to operate in hotter climates.

A small group of HPC specialists offer water-cooled servers, including Asetek, CoolIT, Ebullient and Aquila Systems. There’s also a group of vendors using various approaches to immersion, including GRC (formerly Green Revolution Cooling), LiquidCool, Iceotope, Submer, DownUnder GeoSolutions (DUG), Asperitas and ZutaCore. Newcomer TMGcore has said it will unveil an immersion solution next week at SC19.

Servers immersed in coolant fluid in a custom cooling enclosure developed by DownUnder GeoSolutions for high-performance data-crunching in the energy industry. (Photo: DownUnder GeoSolutions)

Some of these high-density specialists have built niches in the HPC sector, or in gaming and eSports. But for many years there has been an expectation that the data center industry would eventually shift to liquid cooling as new technologies demand more computing horsepower. Some approaches to liquid cooling offer extreme energy efficiency, the ability to concentrate hardware into a smaller footprint, and the potential to eliminate room-level cooling and some mechanical infrastructure. The tradeoff is that these solutions often require a larger up-front investment than air cooling, and a willingness to embrace new approaches.

The first time we encountered this thesis was at a 7×24 Exchange chapter meeting in 2002. In the ensuing years, the prospect of liquid cooling at scale has remained on the horizon, always a few years away. But hotter and more powerful hardware is finally beginning to move the needle on rack density, and the reason is artificial intelligence.

New Hardware Breaks the Mold

Intel continues to hold a dominant position in enterprise computing, but the development of powerful new hardware optimized for specific workloads has been a major trend in the HPC sector, boosted by demand for data-crunching for artificial intelligence and other specialized workloads. We’ve seen the rise of NVIDIA GPUs in HPC and supercomputing, renewed energy behind low-power ARM servers, and growing use of FPGAs and ASICs.

The biggest challenge for data center design may come from a cluster of hardware startups preparing to bring specialized AI chips to market, some of which are extraordinarily powerful and employ new approaches and form factors.

A startup called Groq said this week that its new Tensor Streaming Processor (TSP) architecture is capable of 1 PetaOp/s performance in a single-chip implementation – equivalent to one quadrillion operations per second. Groq was founded by engineers who helped Google create its AI hardware chip, the Tensor Processing Unit. The company says its architecture can support both traditional and new machine learning models, and is currently in operation at customer sites in both x86 and non-x86 systems.

Groq’s PetaOp-capable architecture was used to create the Tensor Streaming Processor shown on this PCIe board, which is currently being tested by customers. (Image: PRNewsfoto/Groq)

Today a hardware startup called NUVIA emerged from stealth mode with $53 million in Series A funding from backers including Dell Technologies. NUVIA’s founders bring a rich silicon design heritage from engineering leadership roles at Apple, Google, ARM, Broadcom and AMD. The company promises to create “a new model for high-performance silicon design.”

Then there’s Cerebras Systems, which just revealed a chip that completely rethinks the form factor for data center computing. The Cerebras Wafer-Scale Engine (WSE) is the largest chip ever built, at nearly 9 inches in width – a design that pushes the existing boundaries of high-density computing, and requires 15 kilowatts of cooling per chip.
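
To put that 15 kW figure in perspective, here is a quick, illustrative Python comparison against the rack-density figures cited earlier in this story (the comparison is ours, not from Cerebras or 451):

```python
# Illustrative arithmetic: compare the WSE's stated 15 kW cooling load
# against the per-rack density figures cited earlier in this story.

legacy_avg_kw = 5.0    # long-standing average rack density cited by 451
rising_avg_kw = 11.0   # density 45 percent of respondents now expect
wse_kw = 15.0          # load Cerebras cites for a single chip

print(f"One WSE = {wse_kw / legacy_avg_kw:.1f}x a legacy 5 kW rack")  # 3.0x
print(f"One WSE = {wse_kw / rising_avg_kw:.1f}x an 11 kW rack")       # 1.4x
```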

As AI continues to be integrated into a wide range of applications and services, enterprises will covet the increased power, but struggle to fit these powerful new chips into their on-premises infrastructure. 451’s Morgan says 35 percent of enterprises now say that density is crucial in deciding where to place workloads.

“There’s an opportunity for service providers in this trend,” said Morgan. “There’s much more to come.”

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
