Sustainability Meets High Density Data Center Cooling

By Julius Neudorfer - October 25, 2021

The Nautilus Data Technologies facility in Stockton, California. (Photo: Nautilus Data)

Data centers have seen a steady increase in IT equipment power densities over the past 20 years. The past five years in particular have brought a significant rise in the power requirements of CPUs, GPUs, other processors, and memory. Managing these heat loads presents a challenge for IT equipment manufacturers, as well as for data center designers and operators. This article launches our special report series on “Sustainably Meeting High Density Cooling Challenges: When, Where, and How.”

The power density for mainstream off-the-shelf 1U servers with multiple processors now typically ranges from 300 to 500 watts, and some models can reach 1,000 watts. When stacked 40 per cabinet, they can demand 12 to 20 kW. The same is true for racks loaded with multiple blade servers.
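As a quick sanity check on that arithmetic, here is a minimal Python sketch; the per-server wattages are the figures cited above, while the function name and fixed 40-server count are purely illustrative:

```python
# Minimal sketch: cabinet IT load from per-server power draw.
# Per-server wattages are the figures cited in the text; the rest is illustrative.

def rack_demand_kw(servers_per_cabinet: int, watts_per_server: float) -> float:
    """Total cabinet IT load in kilowatts."""
    return servers_per_cabinet * watts_per_server / 1000.0

for watts in (300, 500, 1000):
    print(f"40 x {watts} W 1U servers -> {rack_demand_kw(40, watts):.0f} kW per cabinet")
# 40 x 300 W -> 12 kW; 40 x 500 W -> 20 kW; 40 x 1000 W -> 40 kW
```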

Cooling at this power density has already proven nearly impossible for older facilities, and it is challenging even for some data centers designed and built only five years ago. Many newer data centers can accommodate only some cabinets at this density level, through various workarounds, and have found that doing so impacts their cooling energy efficiency.

The demand for more powerful computing for artificial intelligence (AI) and machine learning (ML) will continue to drive power and density levels higher. Processor manufacturers have product roadmaps for CPUs and GPUs expected to exceed 500 watts per processor in the next few years.

The world is trying to mitigate climate change by addressing core sustainability issues. For data centers, energy efficiency is an important element of sustainability; however, energy usage is not the only factor. Today, many data centers use a significant amount of water for cooling.

This report will examine the issues and potential solutions for supporting high-density cooling efficiently and sustainably, while reducing energy usage and minimizing or eliminating water consumption.

IT Equipment Thermal Management

What is thermal management, and how does it differ from cooling (free or otherwise)? While it may seem like semantics, there is an important difference between a design approach and a technical approach. Generally speaking, we have traditionally “cooled” the data center by means of so-called “mechanical” cooling. This process requires energy to drive a motor for the mechanical compressor, which drives the system (in reality, the system is a “heat pump,” since it transfers heat from one side to the other). Getting the heat from the chip to the external heat rejection is the key to end-to-end thermal management effectiveness and energy efficiency.

Traditional mainstream data centers use air-cooled IT equipment (ITE). However, ITE power density has risen so significantly that it has become difficult to cool loads beyond 20 kW per cabinet effectively and efficiently using traditional perimeter cooling systems.

While improved airflow management, such as cold or hot aisle containment systems, has helped improve the effectiveness of the IT thermal management within the whitespace (technical space), it still requires a significant amount of fan energy for the facility cooling units and the ITE internal fans. There are also close-coupled cooling systems, such as rear-door heat exchangers and row-based cooling units, which can support higher power densities more effectively.

Air Cooling of Whitespace

Although IT equipment has continuously improved its overall energy efficiency (i.e., power consumed vs. performance), the total power draw has increased tremendously. This has resulted in a rise in average watts per square foot in the whitespace, from under 100 W/sq ft to 200-300 W/sq ft or even higher for mainstream data centers being designed and built today. While this average power density can be cooled using conventional methods, such as a raised floor with perimeter cooling units, it becomes a greater challenge every year.
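To make the floor-level math concrete, here is a small sketch; the 25 sq ft of gross floor area per cabinet (aisles included) is an assumption for illustration, not a figure from the report:

```python
# Translating cabinet IT load into average whitespace density.
# ASSUMPTION: ~25 sq ft of gross floor area per cabinet, aisles included.

SQ_FT_PER_CABINET = 25.0  # illustrative assumption, not from the report

def watts_per_sq_ft(cabinet_kw: float) -> float:
    """Average whitespace density for a given per-cabinet IT load."""
    return cabinet_kw * 1000.0 / SQ_FT_PER_CABINET

for kw in (2.5, 5.0, 7.5):
    print(f"{kw} kW/cabinet -> {watts_per_sq_ft(kw):.0f} W/sq ft average")
# 2.5 -> 100 W/sq ft; 5.0 -> 200 W/sq ft; 7.5 -> 300 W/sq ft
```

Under that assumption, the 200-300 W/sq ft averages quoted above correspond to only about 5 to 7.5 kW per cabinet, which is why a 20 kW rack strains a whitespace designed around those averages.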

The bigger challenge starts at the processor level and moves through the heat transfer process within the IT equipment and eventually impacts the rack power density.

The thermal design power (TDP) of processors (CPUs, GPUs, TPUs, and other upcoming devices), as well as of memory and many other components, has increased significantly over the past decade. Today, even the CPUs in low-profile commodity and mid-level servers can draw 100-150 watts each, and their air-cooled heat sinks have difficulty moving up to the 200 W per processor level. This has driven a significant increase in the power density of individual IT equipment and of the power density per rack, resulting in an overall rise in watts per square foot in the whitespace.

As noted in the introduction, the power density for mainstream off-the-shelf 1U servers with multiple processors now typically ranges from 300 to 500 watts (some models can reach 1,000 watts). When stacked 40 per cabinet, they can demand 12-20 kW. The same is true for racks loaded with multiple blade servers.

Understanding Airflow Physics

The nature of the challenge begins with the basic physics of using air as the medium of heat removal. The traditional cooling unit is designed to operate at a differential of approximately 20°F between the air entering and the air leaving the unit (i.e., delta-T or ∆T). However, modern IT equipment has a highly variable delta-T that depends on its operating conditions as well as its computing load, meaning that ∆T may vary from 10°F to 40°F during normal operations. This in itself creates airflow management issues, resulting in hotspots in many data centers that were not designed to accommodate such a wide range of temperature differentials. It also limits the power density per rack. Various forms of containment have been applied to minimize or mitigate this issue; ideally, it is addressed by providing closer coupling between the IT equipment and the cooling units.

The most common disconnect is not just the delta-T but its companion: the rate of airflow required per kilowatt of heat. This is expressed by the basic sensible-heat airflow formula, BTU/hr = CFM × 1.08 × ∆T (°F), which in effect defines the inverse relationship between ∆T and the airflow required for a given unit of heat. For example, it takes 158 CFM at a ∆T of 20°F to transfer one kilowatt of heat; conversely, it takes twice that airflow (316 CFM) at a ∆T of 10°F. This is considered a relatively low ∆T, which increases the facility fan energy required to cool the rack (increasing PUE). It also increases the ITE internal fan energy, which raises the IT load without any computing work (thus artificially “improving” facility PUE). This also limits the power density per rack.
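The inverse relationship is easy to verify numerically with a minimal sketch of the formula above (using the standard conversion 1 kW = 3,412 BTU/hr):

```python
# Sensible-heat airflow: BTU/hr = CFM x 1.08 x delta-T (degF), per the formula above.
# Rearranged per kilowatt of heat: CFM = 3,412 / (1.08 x delta-T).

BTU_HR_PER_KW = 3412.0  # standard conversion: 1 kW = 3,412 BTU/hr

def cfm_per_kw(delta_t_f: float) -> float:
    """Airflow (CFM) needed to remove 1 kW of heat at a given delta-T (degF)."""
    return BTU_HR_PER_KW / (1.08 * delta_t_f)

for dt in (10, 20, 40):
    print(f"delta-T = {dt} degF -> {cfm_per_kw(dt):.0f} CFM per kW")
# delta-T = 10 degF -> 316 CFM per kW
# delta-T = 20 degF -> 158 CFM per kW
# delta-T = 40 degF ->  79 CFM per kW
```

Halving the ∆T doubles the required airflow, which is why low-∆T operation drives up fan energy on both the facility and ITE sides.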

*Note: For purposes of these examples, we have simplified the issues related to dry-bulb vs. wet-bulb temperatures and latent vs. sensible cooling loads.

To overcome these issues, manufacturers design higher-density ITE, such as blade servers, to operate at a higher ∆T whenever possible. This saves IT fan energy, but it can also create higher return temperatures at the cooling units. For most cooling units fed from chilled water (CRAHs), this is not an issue; in fact, it is beneficial, since it improves heat transfer to the cooling coil for a given airflow. However, for other types of cooling units, such as a direct expansion (DX) computer room air conditioner (CRAC), which uses internal refrigerant compressors, these higher return temperatures can become a problem, stressing the compressor beyond its specified maximum return temperature.
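The compressor-stress point follows from simple addition: the return temperature a cooling unit sees is roughly the supply temperature plus the ITE ∆T (ignoring bypass air and recirculation). A hedged sketch, where both the 75°F supply temperature and the 95°F CRAC rating are hypothetical values for illustration:

```python
# Return-air temperature seen by a cooling unit: supply temp + ITE delta-T
# (ignoring bypass air and recirculation). Both constants below are
# HYPOTHETICAL values for illustration, not vendor specifications.

SUPPLY_AIR_F = 75.0       # assumed ITE inlet / supply temperature, degF
CRAC_MAX_RETURN_F = 95.0  # hypothetical DX CRAC maximum rated return, degF

for ite_delta_t in (10.0, 20.0, 40.0):
    return_f = SUPPLY_AIR_F + ite_delta_t
    status = "OK" if return_f <= CRAC_MAX_RETURN_F else "exceeds rating"
    print(f"ITE delta-T = {ite_delta_t:.0f} degF -> return {return_f:.0f} degF ({status})")
```

A chilled-water CRAH coil generally benefits from the warmer return air, but the DX unit in this sketch would be out of its rated envelope once the ITE ∆T reaches 40°F.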

Download the full report, “Sustainably Meeting High Density Cooling Challenges: When, Where, and How,” courtesy of Nautilus Data Technologies to learn more about cooling high density data centers. In our next article, we’ll look at three more IT equipment heat removal challenges.

Tagged With: Data Center Cooling, data center sustainability, high density data centers, Nautilus Data
