Cooling Edge Data Centers: It Requires More Thought

Nov. 18, 2020
The team at TAS Energy highlights a variety of different approaches to cooling for edge data centers, from direct expansion cooling systems, to chilled water solutions, free cooling and more.

Jon Benson, Director of Technology and Solutions, TAS (left), Jack Kolar, VP MC/MUS Sales, TAS (top right), and Abhishek Banerjee, Senior Project Application Engineer, TAS

The Setting

For many years, data center industry visionaries predicted that 2020 would bring us the ultimate data world: 5G everywhere, 8K video streaming, billions of interconnected devices in a massive Internet of Things, self-driving vehicles, and more.

The promise of this ultimate data world drove these same visionaries to predict the need for tens of thousands of small edge data centers installed at the edge of the internet – mainly at the foot of every cell tower and on many street corners.

This predicted demand triggered a rush to bring edge data center products into the market. This drove the decision-making process to focus on form factor and speed to market.

This focus led to these questions: “how do we fit the most equipment into the smallest space possible?” and “how do we get this product to market quickly?”

In contrast, the decision-making process used for very large colocation and hyperscale data centers considers many elements down to the fine details. These elements include providing flexibility for a wide array of power density requirements, planning a resilient power distribution solution, devising an efficient and effective cooling solution, constructing a robust and safe building, delivering the operation and maintenance of the data center systems, and more.

In edge data centers, these elements are often relegated to the simplest solution or the smallest form factor. The element often given the least consideration is the cooling solution.

There may be five, ten, or more different cooling solutions considered for a colocation or hyperscale data center, whereas only one is considered for an edge data center. The cooling solution is often an afterthought, which is why, in most cases, ‘attachment’ type solutions are sought rather than ‘integrated’ type solutions.

DX, The ‘Common’ Solution

The most common cooling solution for edge data centers today is the “direct expansion” (DX) cooling system. This solution comes in form factors ranging from a high-end CRAC unit down to a simple home air conditioning unit. It has several advantages: a lower capital cost, it works in almost any climate, it uses zero water, and it can be serviced and maintained by almost any provider found online. It also has several disadvantages: it has the lowest operating efficiency (PUE = 1.25-1.50+), it provides limited flexibility for future load growth (a 10 kW unit will always be a 10 kW unit), and it occupies a lot of space.
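
To make those PUE figures concrete: PUE is total facility power divided by ITE power, so a DX system at a PUE of 1.50 burns 0.50 kW of overhead (compressors, fans, power conversion, lighting) for every kilowatt of compute. The minimal sketch below works through that arithmetic; the 50 kW load is an illustrative assumption, not a TAS reference design.

```python
# Minimal sketch: what a PUE figure implies for a small DX-cooled edge site.
# PUE = total facility power / ITE power. The 50 kW load is an assumption.

def facility_power_kw(it_load_kw, pue):
    """Total facility power implied by an IT load and a PUE ratio."""
    return it_load_kw * pue

it_load_kw = 50.0                  # assumed edge data center ITE load
for pue in (1.25, 1.50):           # the DX range cited above
    total = facility_power_kw(it_load_kw, pue)
    overhead = total - it_load_kw  # cooling, power conversion, lighting
    print(f"PUE {pue:.2f}: {total:.1f} kW total, {overhead:.1f} kW overhead")
```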

As the industry continues to redefine edge data centers – now spanning a spectrum from 10 kW to 10,000 kW+ – the efficiency, effectiveness, resiliency, and flexibility of the cooling solution take on greater importance.

This increased importance raises new questions:

  • Will the edge data center be staffed or “lights out”?
  • Which cooling solutions can be supported with qualified service and maintenance at the location(s)?
  • What level of flexibility for average and concentrated power density is required?
  • What capability for power load growth is considered? What is the expected increase over a period? Is that time defined?
  • What operational efficiency is required?

This elevated importance demands more in-depth thought about edge data center cooling solutions; the answers to these questions will point us in the right direction, as the screening sketch below illustrates.
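
One way to see how the answers narrow the field is a simple screening sketch. The thresholds and categories below are illustrative assumptions for discussion, not a TAS selection tool, and any real evaluation would weigh far more inputs.

```python
# Illustrative screening sketch: map answers to the questions above onto
# candidate cooling families. Thresholds and categories are assumptions.

def candidate_cooling(avg_density_kw, peak_density_kw,
                      lights_out, local_service):
    options = []
    if avg_density_kw <= 10 and local_service:
        options.append("DX (CRAC or packaged unit)")
    if avg_density_kw > 10 or peak_density_kw > 10:
        options.append("distributed chilled water / refrigerant")
    if peak_density_kw >= 40:
        options.append("direct-to-chip or immersion liquid cooling")
    if lights_out and not local_service:
        # an unstaffed site with thin service coverage favors simpler systems
        options = [o for o in options if "immersion" not in o]
    return options

# Example: a lights-out site averaging 15 kW/cabinet with one 45 kW cabinet.
print(candidate_cooling(avg_density_kw=15, peak_density_kw=45,
                        lights_out=True, local_service=True))
```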

Distributed Chilled Water Solutions

When the average cabinet power density rises above 10 kW, or a cabinet has a concentrated power density, the edge data center may be better served by a distributed refrigerant or a distributed chilled water cooling solution. These distributed solutions provide far more flexibility in meeting larger and more diverse power loading requirements in an edge data center. They also provide the option of utilizing water to dramatically improve efficiency (PUE = 1.10-1.20).
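
A quick way to see what that PUE improvement is worth is to price a year of operation at points within each range, as in the illustrative sketch below. The 100 kW load and $0.10/kWh electricity rate are assumptions for the example, not TAS figures.

```python
# Illustrative sketch: annual energy cost at the PUE ranges cited above.
# The ITE load and electricity rate are assumptions, not TAS figures.

HOURS_PER_YEAR = 8760

def annual_cost_usd(it_load_kw, pue, usd_per_kwh):
    """Yearly electricity cost for an IT load at a given PUE and rate."""
    return it_load_kw * pue * HOURS_PER_YEAR * usd_per_kwh

it_load_kw = 100.0   # assumed edge site ITE load
rate = 0.10          # assumed electricity rate, $/kWh

dx = annual_cost_usd(it_load_kw, 1.40, rate)   # within the DX range
cw = annual_cost_usd(it_load_kw, 1.15, rate)   # mid chilled-water range
print(f"DX (PUE 1.40):            ${dx:,.0f}/yr")
print(f"Chilled water (PUE 1.15): ${cw:,.0f}/yr")
print(f"Difference:               ${dx - cw:,.0f}/yr")
```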

While the distributed systems are slightly more capital cost-intensive, they do provide a far higher asset resiliency; this resiliency is borne out in the ability to easily adapt to changing needs without replacing an entire cooling system. In a distributed system, a specific 10 kW unit can be easily augmented to become a 15 kW or 20 kW unit without complete replacement of the unit.

These distributed cooling solutions offer a far better means of growing the cooling system incrementally, tracking capacity from initial deployment through full deployment of the ITE and beyond, while leveraging the smaller increments to provide additional resiliency in the cooling system.

‘Free’ Cooling?

A cooling solution that has become more popular in larger data centers is the use of “free” cooling. This solution utilizes once-through ambient air to directly cool the ITE, and it is often paired with direct water-based evaporative cooling when the ambient air is a bit too warm. For a purpose-built edge data center located in a cool or arid climate, this may be a beneficial solution. For a fleet of smaller edge data centers, this cooling solution will be far less practical due to the wide array of climates and the available service in many localities.

It is important to understand that “free” cooling is a lower-energy option, not a zero-energy option. Additionally, “free” cooling introduces new maintenance and air filtration issues to consider; chiefly, whether the location is near plants or trees that release pollen, flowers, or fibers into the air, or in an area where dust or pollution is present. These factors will weigh on the real “efficiency”.
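
A first screening step for “free” cooling is simply counting how many hours per year the local air is cool enough to supply directly. The sketch below does this with a synthetic temperature series; the 27 °C supply-air threshold and the toy climate are assumptions (a real study would use measured hourly weather data and also weigh humidity, dust, and pollen).

```python
import math

# Minimal sketch: fraction of the year a site could use once-through ambient
# air. The threshold and synthetic climate are assumptions for illustration.

def free_cooling_fraction(hourly_temps_c, max_supply_c=27.0):
    """Share of hours cool enough to supply directly to the ITE."""
    usable = sum(1 for t in hourly_temps_c if t <= max_supply_c)
    return usable / len(hourly_temps_c)

# Toy climate: a year averaging 18 degC with a 12 degC seasonal swing.
temps = [18 + 12 * math.sin(2 * math.pi * h / 8760) for h in range(8760)]
print(f"Usable free-cooling hours: {free_cooling_fraction(temps):.0%} of the year")
```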

High-Density Liquid Cooling

There is another consideration for the cooling solution – the ultra-high-density computing solutions that are here now and promise to become more prevalent in future computing applications. There will be a need for edge data centers capable of cooling a “cabinet” housing 40 kW, 80 kW, or even 200 kW of ITE – indeed, such cabinets already exist.

These ultra-high-density computing solutions will require advanced cooling technologies, such as “direct-to-chip” cooling – where refrigerant, water, or a non-conductive liquid is piped directly into the server chassis – or full immersion systems, where servers are loaded into a bath of non-conductive liquid. These technologies are already deployed in edge-style data centers and have the additional benefit of delivering the ultimate in efficiency (PUE = 1.05 or lower).
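
A back-of-the-envelope calculation shows why liquid is the practical medium at these densities: from q = m_dot · cp · ΔT, even a 200 kW cabinet can be cooled with a modest water flow. The 10 K coolant temperature rise below is an assumed design point; the water properties are standard textbook values.

```python
# Back-of-the-envelope sketch: water flow needed to carry an ultra-high-
# density rack's heat, from q = m_dot * cp * dT. Inputs are assumptions.

CP_WATER = 4186.0    # J/(kg*K), specific heat of water
RHO_WATER = 1000.0   # kg/m^3, density of water

def flow_lpm(heat_kw, delta_t_k):
    """Litres per minute of water to absorb heat_kw at a delta_t_k rise."""
    m_dot = heat_kw * 1000.0 / (CP_WATER * delta_t_k)   # kg/s
    return m_dot / RHO_WATER * 1000.0 * 60.0            # L/min

for rack_kw in (40, 80, 200):   # the rack sizes cited above
    print(f"{rack_kw:>3} kW rack at 10 K rise: {flow_lpm(rack_kw, 10):.0f} L/min")
```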

The global pandemic of 2020 brought a rapid shift to “at home” everything (work, school, shopping, movies/video/music, and even virtual happy hours). This “at home” life reset will demand more data content closer to home – and that will drive growth for edge data centers. There was an overnight shift of the data load from commercial systems installed in offices to residential systems at home. To accommodate these sudden changes, it is imperative to pay attention not just to providing lower latency, but also to the higher reliability of these systems. The cooling solutions for edge data centers deserve the thoughtful consideration this very important cog requires to make our new “at home” everything life seamless.

Jon Benson is the Director of Technology and Solutions, Jack Kolar is VP MC/MUS Sales, and Abhishek Banerjee is the Senior Project Application Engineer at TAS.

About the Author

Voices of the Industry

Our Voice of the Industry feature showcases guest articles on thought leadership from sponsors of Data Center Frontier. For more information, see our Voices of the Industry description and guidelines.
