Cooling for High[ER] Density Spaces Should be a Design Feature, Not an Afterthought

June 2, 2021
Higher density demands a specialized cooling strategy, yet many data center operators cool the entire room rather than the equipment inside.

In this edition of Voices of the Industry, Doug Ausdemore, Senior Product Development Manager at Data Aire, explains how higher density data centers benefit from a specialized cooling strategy.

Few data centers live in a world of ‘high’ density, a number that is a moving target, but many are moving to high[er] density environments. Owners of higher density data centers often aren’t aware of how many variables factor into cooling their equipment. The result is that they spend too much on shotgun solutions that waste capacity when they would be better served by taking a rifle shot approach. This means understanding the heat dispersion characteristics of each piece of equipment and optimizing floor plans and the placement of cooling solutions for maximum efficiency.

So, how do you invest for today and plan for tomorrow? By engaging early in the data center design process with a cooling provider that has a broad line of cooling solutions, owners can maximize server space, minimize low-pressure areas, reduce costs, save on floor space and boost overall efficiency. And by choosing a provider that can scale with their data center, they can ensure that their needs will be met long into the future.

Density is Growing: Low to Medium to High[er] and Highest

Data centers are growing increasingly dense, creating unprecedented cooling challenges. That trend will undoubtedly continue. The Uptime Institute's 2020 Data Center Survey found that the average server density per rack has more than tripled from 2.4 kW to 8.4 kW over the last nine years. While still within the safe zone of most conventional cooling equipment, the trend is clearly toward equipment running hotter, accelerated by the growing use of GPUs and multi-core processors. Some higher-density racks now draw as much as 16 kW, and the highest-performance computing typically demands 40-50 kW per rack.
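
To put that growth in perspective, here is a rough back-of-the-envelope projection in Python. The nine-year window and the 2.4-to-8.4 kW figures come from the survey cited above; the five- and ten-year horizons are illustrative assumptions, not survey data.

```python
# Back-of-the-envelope projection of average rack density.
# Survey figures: 2.4 kW/rack roughly nine years before the 2020
# survey, 8.4 kW/rack in 2020. The projection horizons below are
# illustrative assumptions, not survey data.

start_kw, end_kw, years = 2.4, 8.4, 9

# Implied compound annual growth rate (CAGR).
cagr = (end_kw / start_kw) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~14.9% per year

# If that rate were to hold, projected average density:
for horizon in (5, 10):
    projected = end_kw * (1 + cagr) ** horizon
    print(f"In {horizon} years: ~{projected:.1f} kW/rack")
```

Held constant, that rate lands near 17 kW/rack in five years and over 30 kW/rack in ten, which is broadly consistent with the 16 kW and 40-50 kW figures quoted above.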

High[er] Density Requires Dedicated Cooling Strategies

For the sake of discussion, let's focus on the data centers that are, or may soon be, in the 8.4-16 kW range. This higher density demands a specialized cooling strategy, yet many data center operators waste money by provisioning equipment to cool the entire room rather than the equipment inside. In fact, "Overprovisioning of power/cooling is probably [a] more common issue than under provisioning due to rising rack densities," the Uptime survey asserted.

No two data centers are alike and there is no one-size-fits-all cooling solution. Thermal controls should be customized to the server configuration and installed in concert with the rest of the facility, or at least six months before the go-live date.

Equipment in the higher density range of 8.4-16 kW can present unique challenges to precision cooling configurations. The performance of the servers themselves can vary from rack to rack, within a rack, and even with the time of day or year, causing hotspots to emerge. A rough rule-of-thumb calculation, sketched below, shows how quickly the required airflow climbs across that range.
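
One way to see why that range strains a cooling design is the common rule-of-thumb conversion from heat load to required airflow, CFM ≈ 3.16 × watts / ΔT (°F). The sketch below applies it to the density figures above; the 20°F supply-to-return temperature difference is an illustrative assumption.

```python
# Rule-of-thumb airflow needed to remove a rack's heat load:
#   CFM ~= 3.16 * watts / delta_T_F
# (from sensible heat: BTU/hr = 1.08 * CFM * dT, and watts * 3.412 = BTU/hr).
# The rack loads and the 20 degF delta below are illustrative.

def required_cfm(rack_kw: float, delta_t_f: float = 20.0) -> float:
    """Approximate airflow (CFM) needed to carry away rack_kw of heat."""
    return 3.16 * rack_kw * 1000 / delta_t_f

for kw in (2.4, 8.4, 16.0):
    print(f"{kw:5.1f} kW rack -> ~{required_cfm(kw):,.0f} CFM")
# 2.4 kW -> ~379 CFM; 8.4 kW -> ~1,327 CFM; 16.0 kW -> ~2,528 CFM
```

A 16 kW rack needs roughly six to seven times the airflow of the 2.4 kW racks many rooms were originally laid out for, which is why room-level cooling alone falls short.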

Higher-density equipment creates variable hot and cool spots that need to be managed differently. A rack outfitted with multiple graphics processing units for machine learning tasks generates considerably more heat than one that processes database transactions. Excessive cabling can restrict the flow of exhaust air. Unsealed floor openings can cause leakage that prevents conditioned air from reaching the top of the rack. Unused vertical space can allow hot exhaust to feed back into the equipment's intakes, causing heat to build up and threatening equipment integrity.

For all these reasons, higher-density equipment is not well served by a standard computer room air conditioning (CRAC) unit. Variable-speed direct-expansion CRAC equipment scales up and down gracefully to meet demand, which not only saves money but minimizes power surges that can cause downtime. Continuous monitoring should be put in place, using sensors to detect heat buildup in one spot that may threaten nearby equipment. Alarms should be set to flag critical events without triggering unnecessary firefighting, and cooling should be integrated into the building-wide environmental monitoring system.
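
As a minimal sketch of alarms that flag critical events without triggering unnecessary firefighting, the following shows threshold detection with a deadband (hysteresis), so a sensor hovering near the limit doesn't flap between alarm and clear. The 32°C trip point and 2°C deadband are illustrative assumptions, not recommended settings.

```python
# Minimal sketch of a temperature alarm with hysteresis: the alarm
# trips at a high threshold and only clears once the reading falls
# below a lower one, so readings near the limit don't cause flapping.
# The 32 C trip point and 2 C deadband are illustrative assumptions.

class HotspotAlarm:
    def __init__(self, trip_c: float = 32.0, deadband_c: float = 2.0):
        self.trip_c = trip_c
        self.clear_c = trip_c - deadband_c
        self.active = False

    def update(self, temp_c: float) -> bool:
        """Feed one sensor reading; returns True while the alarm is active."""
        if not self.active and temp_c >= self.trip_c:
            self.active = True   # heat buildup: raise the alarm
        elif self.active and temp_c <= self.clear_c:
            self.active = False  # cooled back down past the deadband: clear it
        return self.active

alarm = HotspotAlarm()
for reading in (29.5, 31.9, 32.4, 31.2, 30.1, 29.8):
    print(f"{reading:.1f} C -> {'ALARM' if alarm.update(reading) else 'ok'}")
```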

A better approach to developing or upgrading a data center is to build cooling plans into its design from the beginning, with a holistic approach that minimizes hot spots. Alternating "hot" and "cold" aisles should be created, with vented floor tiles in the cold aisles and servers arranged to exhaust all hot air into an unvented hot aisle. The choice of front-discharge, upflow or downflow ventilation can prevent heat from being inadvertently recirculated back into the rack. Power distribution also needs to be planned carefully, and backup power provisioned to avoid loss of cooling.

Thinking through cooling needs early in the design stage for higher density data centers avoids costly and disruptive retrofits down the road. The trajectory of power density is clear, so cooling design should consider not only today's needs but those five and 10 years from now. Modular, variable-capacity systems can scale and grow as needed.
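
As a sketch of how modular capacity might be sized against that trajectory, here is a simple N+1 calculation. The 30 kW unit capacity and the rack-count scenarios are illustrative assumptions, not figures from any particular product line.

```python
import math

# Illustrative N+1 sizing for modular cooling units. The 30 kW unit
# capacity and the load-growth scenarios are assumptions for this sketch.

def units_needed(total_load_kw: float, unit_kw: float, redundancy: int = 1) -> int:
    """Units required to cover the load, plus `redundancy` spare units."""
    return math.ceil(total_load_kw / unit_kw) + redundancy

unit_kw = 30.0  # capacity of one modular cooling unit (assumed)
for racks, kw_per_rack in ((20, 8.4), (20, 12.0), (20, 16.0)):
    load = racks * kw_per_rack
    n = units_needed(load, unit_kw)
    print(f"{racks} racks @ {kw_per_rack} kW = {load:.0f} kW -> {n} units (N+1)")
```

The point of the exercise: as average density climbs from 8.4 to 16 kW, the same room goes from 7 to 12 units in this scenario, so leaving physical and electrical headroom for additional modules is far cheaper than a retrofit.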

The earlier data center owners involve their cooling providers in their design decisions, the more they'll save from engineered-to-order solutions and the less risk they'll have of unpleasant surprises down the road.

Doug Ausdemore is Senior Product Development Manager at Data Aire. Contact them to learn more about designing a cooling system for your higher density environment.

About the Author

Voices of the Industry

Our Voices of the Industry feature showcases guest articles on thought leadership from sponsors of Data Center Frontier. For more information, see our Voices of the Industry description and guidelines.
