Liquid Cooling – A Better Solution for Data Center Cooling

Nov. 2, 2016
Your data center needs a better way to cool off. Now, liquid cooling technologies can take the heat off your data center and introduce next-generation levels of optimization.

New data center optimization technologies are changing the way administrators deploy cooling and control platforms. Liquid cooling isn’t just a system to optimize data center cooling and server performance—it’s a way to revolutionize data center economics in general. These are tested, proven systems that help a data center support some of the world’s most advanced use-cases.

Although there will still be a place for traditional, airflow-based cooling methodologies, many organizations are now looking into liquid cooling for their servers and overall data center environments. Above all, high-density computing environments, like those used for high performance computing (HPC) and big data, require a new way to remain efficient. In this special report from Ebullient, we learn about new use-cases around liquid cooling and how this technology impacts modern data center deployments.

Download the Data Center Frontier Special Report on Liquid Cooling

In the World of Liquid Cooling – Why “Engineered Fluid” is a Better Option Than “Water”

Whenever the concept of liquid cooling comes up, engineers too often assume it means water. However, in the world of modern workloads, HPC, big data, and high-density computing, using water within your cooling architecture may not be the best option.

In working with precision cooling systems, engineers are able to deliver a next-generation direct liquid cooling technology that eliminates conventional liquid cooling system risks and further improves efficiency. Rather than warming a fluid or using traditional water, the Ebullient system absorbs heat by vaporizing a dielectric, non-toxic, non-flammable engineered fluid from 3M within sealed modules mounted directly to the hottest server devices (typically the processors and co-processors). From there, you can cool powerful systems used for HPC, big data processing, and even GPU-based workloads, including systems built around NVIDIA Tesla K80 GPU accelerators.
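To get a feel for why vaporization is so effective, here is a minimal sizing sketch. It assumes a latent heat of vaporization around 100 kJ/kg, which is typical of engineered dielectric fluids in this class; the exact figure for Ebullient's 3M fluid is not given in the report, so treat the numbers as illustrative only.

```python
# Rough sizing sketch for two-phase direct liquid cooling.
# ASSUMPTION: latent heat of vaporization h_fg ~ 100 kJ/kg, typical of
# engineered dielectric fluids; not a figure published by Ebullient or 3M here.

def required_mass_flow(heat_load_w: float, h_fg_j_per_kg: float = 100e3) -> float:
    """Mass flow (kg/s) needed to absorb heat_load_w purely by vaporizing fluid."""
    return heat_load_w / h_fg_j_per_kg

# A 300 W processor needs only about 3 grams of fluid per second:
flow_g_per_s = required_mass_flow(300) * 1000
print(round(flow_g_per_s, 1))  # -> 3.0
```

Because all the heat goes into the phase change rather than into raising the fluid's temperature, the required flow rates stay tiny compared with single-phase water loops, which is what lets the modules sit directly on the hottest devices.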

A Data Center Cooling Study

A recent study determined power consumption, water consumption, and facility design all benefit from deploying a two-phase Ebullient DirectJet cooling system in an Open Compute data center. To obtain field data, Ebullient instrumented Winterfell class Open Compute servers and measured baseline cooling energy consumption and IT load in the stock, air-cooled configuration. Ebullient then installed its two-phase cooling system on the servers and again measured cooling energy consumption and IT load. Based on the study results, Ebullient recommends a hybrid data center cooling strategy featuring the Ebullient cooling system for primary IT heat load matched to an adiabatic cooler.

From there, if needed, CRAHs can be deployed within the data center to handle the residual heat load. In a few very warm and humid climates, traditional precision cooling, or even non-precision building HVAC components such as rooftop units, can be used for the residual load.

To put some numbers to it, in a standard Open Compute data center operating 18,000 Winterfell-class servers, a hybrid Ebullient-CRAC cooling strategy would:

  • Reduce cooling energy consumption by 80%.
  • Reduce overall annual power consumption by 10M kWh.
  • Operate at a Power Usage Effectiveness (PUE) of 1.10.

Importantly, Ebullient cooling systems enable Open Compute data centers to enjoy significantly lower data center build-out costs by decoupling cooling system and structure costs. Ebullient deploys its cooling systems through modular, standard, rack-sized pump stands connected to the building’s secondary liquid loop. Smaller facilities can realize the same performance, efficiency and space benefits with linearly scaled deployment costs.

So, how does this ultimately compare to more traditional cooling systems? In the study, based on a model developed by Iyengar and Schmidt, a completely air-cooled system required 35 CRACs and a water loop drawing 1.43 MW of cooling power to move an IT thermal load of 2.92 MW (PUE of 1.49). With an Ebullient cooling system in place to offload 70% of the total server thermal load through two-phase cooling, the number of CRACs can be reduced from 35 to 10, requiring only 288 kW to move the same 2.92 MW of IT load (total PUE of 1.10). This represents an approximately 80% reduction in cooling power (from 1.43 MW to 288 kW).
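The study's figures are easy to verify. The sketch below reproduces the arithmetic, treating cooling as the only non-IT overhead (which is how the reported PUE values work out); a real facility would also count lighting, UPS losses, and other loads.

```python
# Reproduce the study's PUE arithmetic (figures taken from the report above).
def pue(it_load_mw: float, cooling_power_mw: float) -> float:
    """PUE = total facility power / IT power; here only cooling counts as overhead."""
    return (it_load_mw + cooling_power_mw) / it_load_mw

it_load = 2.92                      # MW of IT load in both scenarios

air_cooled = pue(it_load, 1.43)     # 35 CRACs plus water loop
hybrid = pue(it_load, 0.288)        # 10 CRACs plus Ebullient two-phase system
reduction = 1 - 0.288 / 1.43        # fractional drop in cooling power

print(round(air_cooled, 2))         # -> 1.49
print(round(hybrid, 2))             # -> 1.1
print(round(reduction, 2))          # -> 0.8
```

Both reported PUE values and the roughly 80% cooling power reduction fall straight out of the definition.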

Download this special report today to learn more about engineered fluid and where it can impact your data center ecosystem. Remember, liquid cooling has emerged as a powerful technology for introducing greater levels of efficiency and lowering the PUE of a data center. Now, find out how it all works and where it fits into your data center cooling strategy.

About the Author

Bill Kleyman

Bill Kleyman is a veteran, enthusiastic technologist with experience in data center design, management and deployment. Bill is currently a freelance analyst, speaker, and author for some of our industry's leading publications.
