
Report: Data Center Rack Density is Rising, and Heading Higher

By Rich Miller - November 15, 2019

A high-density server installation. (Photo: Rich Miller)

Data center rack densities are rising, and large enterprises expect that trend to continue, according to a new report from 451 Research.

“One of the key things we’re seeing is a change in density,” said Kelly Morgan, VP of Datacenter Infrastructure & Services at 451 Research. “Density is finally rising. We’ve been hearing this for a long time, but until recently, the average has been 5 kW a rack. This is now perceived to be increasing.”

Forty-five percent of companies said they expect average density of 11 kW per rack or higher over the next year, according to a 451 survey of 750 enterprise users. That’s a huge change from 2014, when just 18 percent of 451 respondents reported densities beyond 10 kW.

What’s changed in the past five years? A major factor in rising densities is the growth of data-crunching for artificial intelligence. Powerful new hardware for AI workloads is packing more computing power into each piece of equipment, boosting the power density – the amount of electricity used by the servers and storage in a rack or cabinet – and the accompanying heat.
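To make that definition concrete, here is a minimal sketch of the arithmetic behind rack density. The server counts and wattages are hypothetical; the 5 kW and 11 kW figures echo those cited by 451 Research.

```python
# A minimal sketch of rack-density arithmetic (hypothetical configurations).
# "Rack density" is simply the total power drawn by the gear in one rack.

def rack_density_kw(servers_per_rack: int, watts_per_server: float) -> float:
    """Total power drawn by a rack of identical servers, in kilowatts."""
    return servers_per_rack * watts_per_server / 1000.0

# A traditional enterprise rack: 20 x 1U servers at ~250 W each.
print(rack_density_kw(20, 250))    # 5.0 kW -- the long-standing industry average

# An AI-oriented rack: 4 GPU servers drawing ~3,000 W each.
print(rack_density_kw(4, 3000))    # 12.0 kW -- in the 11 kW+ range respondents now expect
```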

The trend is illustrated by this week’s announcement from startup Groq of an AI-optimized Tensor Streaming Processor (TSP) architecture it says is capable of 1 PetaOp/s performance on a single chip implementation – equivalent to one quadrillion operations per second, or 1e15 ops/s.

The trend is challenging traditional practices in data center cooling, and prompting data center operators to adopt new strategies and designs, including liquid cooling.

End Users Report Higher Rack Density

High-density cooling and AI will be in the spotlight next week at SC19, the Denver conference for the supercomputing and high-performance computing (HPC) sectors. But cooling extreme workloads is becoming a concern for the enterprise and hyperscale sectors as well, as more data centers begin to resemble HPC environments.

“New chips are impacting density,” said Morgan. “AI and new applications need a lot more energy per chip, and this has implications for the data center. People are expecting this to continue, and it’s not going to be easy to handle.”

In a report titled The Infrastructure Imperative, 451 says that 54 percent of respondents reported having HPC infrastructure that concentrates significant computational power in a small footprint, while just over 50 percent said their firms use hyperconverged infrastructure.

That aligns with recent Uptime Institute surveys finding that almost 70 percent of enterprise data center users report that their average rack density is rising. The AFCOM State of the Data Center survey for 2019 also cited a trend toward denser racks: 27 percent of data center users said they expected to deploy high performance computing (HPC) solutions, and another 39 percent anticipated using converged architectures that tend to be denser than traditional servers.

The 451 survey responses on expected average rack densities suggest that densities are moving above the 7 kW per cabinet cited as the average in the AFCOM data, and in some cases significantly higher.

This trend poses obvious challenges for the data center industry.

“Some enterprises will no longer be able to support the required densities in their on-premises datacenters, some datacenter providers will change their cooling systems to offer higher-density options, and cloud providers will have to change their build/operating strategies to deal with higher-density requirements,” Morgan notes.

The Liquid Cooling Opportunity

We’ve been tracking progress in rack density and liquid cooling adoption for years at Data Center Frontier as part of our focus on new technologies and how they may transform the data center. The reports of increased density from 451 are not a surprise. In our DCF 2019 forecast we suggested that “liquid cooling was finally ready for its closeup.”

The picture remains mixed, as end users report a steady increase in rack density, and there have been some large new installations for technical computing applications. But hyperscale operators, the largest potential market, remain wary about wholesale adoption of liquid cooling.

Most servers are designed to use air cooling. A number of service providers have focused on air-cooled solutions optimized for high-density workloads, including Switch, Aligned Energy and ScaleMatrix. Others are housing gear in cabinets equipped with chilled-water cooling doors, including Colovore in Santa Clara and the LinkedIn facility at the STACK Infrastructure data center in Portland, Oregon.

Google’s decision to shift to liquid cooling with its latest hardware for artificial intelligence raised expectations that others might follow. Alibaba and other Chinese hyperscale companies have adopted liquid cooling, and Microsoft recently indicated that it has been experimenting with liquid cooling for its Azure cloud service. But Microsoft has decided to hold off for now, and Facebook has instead opted for a new approach to air cooling to operate in hotter climates.

A small group of HPC specialists offer water-cooled servers, including Asetek, CoolIT, Ebullient and Aquila Systems. There’s also a group of vendors using various approaches to immersion, including GRC (formerly Green Revolution Cooling), LiquidCool, Iceotope, Submer, DownUnder GeoSolutions (DUG), Asperitas and ZutaCore. Newcomer TMGcore has said it will unveil an immersion solution next week at SC19.

Servers being immersed in coolant fluid in a custom cooling enclosure invented by DownUnder GeoSolutions for high-performance data-crunching for the energy industry. (Photo: DownUnder GeoSolutions)

Some of these high-density specialists have built niches in the HPC sector, or in gaming and eSports. But for many years there has been an expectation that the data center industry would eventually shift to liquid cooling as new technologies demand more computing horsepower. Some approaches to liquid cooling offer extreme energy efficiency, the ability to concentrate hardware into a smaller footprint, and the potential to eliminate room-level cooling and some mechanical infrastructure. The tradeoff is that these solutions often require a larger up-front investment than air cooling, and a willingness to embrace new approaches.

The first time we encountered this thesis was at a 7×24 Exchange chapter meeting in 2002. In the ensuing years, the prospect of liquid cooling at scale has remained on the horizon, always a few years away. But hotter and more powerful hardware is finally beginning to move the needle on rack density, and the reason is artificial intelligence.

New Hardware Breaks the Mold

Intel continues to hold a dominant position in the enterprise computing space, but the development of powerful new hardware optimized for specific workloads has been a major trend in the high performance computing (HPC) sector, boosted by demand for data-crunching for artificial intelligence and other types of specialized workloads. We’ve seen the rise of NVIDIA GPUs in HPC and supercomputing, new energy for low-power ARM servers, and growing use of FPGAs and ASICs.

The biggest challenge for data center design may emerge from a cluster of hardware startups that are preparing to bring specialized AI chips to market, some of which are extraordinarily powerful and employ new approaches and form factors.

A startup called Groq said this week that its new Tensor Streaming Processor (TSP) architecture is capable of 1 PetaOp/s performance on a single chip implementation – equivalent to one quadrillion operations per second. Groq was founded by engineers who helped Google create its AI hardware chip, the Tensor Processing Unit. The company says its architecture can support both traditional and new machine learning models, and is currently in operation on customer sites in both x86 and non-x86 systems.

Groq’s PetaOp-capable architecture was used to create the Tensor Streaming Processor shown on this PCIe board, which is currently being tested by customers. (Image: PRNewsfoto/Groq)

Today a hardware startup called NUVIA emerged from stealth mode with $53 million in Series A funding from backers including Dell Technologies. NUVIA’s founders bring a rich silicon design heritage from engineering leadership roles at Apple, Google, ARM, Broadcom and AMD. The company promises to create “a new model for high-performance silicon design.”

Then there’s Cerebras Systems, which just revealed a chip that completely rethinks the form factor for data center computing. The Cerebras Wafer-Scale Engine (WSE) is the largest chip ever built, at nearly 9 inches in width, a design that pushes the existing boundaries of high-density computing – and requires 15 kilowatts per chip to power and cool.
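For a sense of scale, a back-of-the-envelope comparison: the 15 kW figure for one WSE is three times the 5 kW that 451 cites as the long-running average draw of an entire rack. A quick sketch follows; the two kW figures come from this article, while the 30 kW air-cooled rack ceiling is a hypothetical assumption.

```python
# Back-of-the-envelope scale comparison (kW figures from the article;
# the 30 kW air-cooled rack ceiling is a hypothetical assumption).
AVERAGE_RACK_KW = 5.0     # long-running average rack density, per 451 Research
WSE_KW = 15.0             # power/cooling load of one Cerebras Wafer-Scale Engine
AIR_COOLED_LIMIT_KW = 30.0

print(WSE_KW / AVERAGE_RACK_KW)            # 3.0 -- one chip draws ~3x an average rack
print(int(AIR_COOLED_LIMIT_KW // WSE_KW))  # 2   -- even a dense air-cooled rack fits few
```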

As AI continues to be integrated into a wide range of applications and services, enterprises will covet the increased power, but struggle to fit these powerful new chips into their on-premises infrastructure. According to 451’s Morgan, 35 percent of enterprises now say that density is crucial in deciding where to place workloads.

“There’s an opportunity for service providers in this trend,” said Morgan. “There’s much more to come.”

Tagged With: Artificial Intelligence, high density data centers, Liquid Cooling

About Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
