The Pathway to Liquid Cooling

Oct. 9, 2024
Jason Matteson, Global Director of Product Management at nVent Data Solutions, provides insights on successfully making the transition to liquid cooling.

Liquid cooling is the talk of the data center industry, especially as hyperscale cloud service providers (CSPs) ready themselves to support more and more AI applications. Since the chips used to power AI require liquid cooling, IT operators and data center managers around the world are assessing how to adopt this technology.

While CSPs planning hyperscale data centers may have been preparing for the shift to liquid for years, original equipment manufacturer (OEM) and original design manufacturer (ODM) servers have mostly relied on more traditional cooling methods: air-cooled server solutions with conventional heat sinks on the CPUs and GPUs inside. Now, however, AI chips will not even power up without liquid cooling, so CSPs are racing to adopt liquid cooling solutions.

Types of Cooling

To successfully transition to liquid cooling, it is important to first understand that not all types of liquid cooling technologies work in the same way. Below are three of the most common liquid cooling configurations:

  • Liquid-to-air cooling brings liquid cooling to IT equipment in air-cooled data centers by circulating liquid through racks of IT equipment. The liquid is then cooled using a liquid-to-air heat rejection unit and recirculated. Because the liquid is in a closed loop, this method does not require additional facility infrastructure.
  • Air-to-liquid cooling relies on air to cool the IT equipment itself but uses chilled liquid to cool that air before it is released back into the room. This isolates server heat to the rack and reduces the need to overcool air in an entire data room. In these systems, facility liquid is required to cool liquid loops inside rear door coolers placed on the backs of server racks.
  • Liquid-to-liquid cooling efficiently removes heat at its source by utilizing a cold plate placed directly on the chip or completely immersing chips in dielectric fluid, then cooling that liquid with facility liquid. Careful design is critical since liquid is in close proximity to IT systems. The simple heat-balance sketch after this list illustrates how coolant flow relates to rack heat load.
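
Whichever configuration is chosen, the amount of heat a liquid loop can carry away is governed by the basic heat balance Q = ṁ × cp × ΔT: heat removed equals coolant mass flow rate times specific heat times temperature rise. The short sketch below is a minimal illustration of that relationship only; the rack load, temperature rise and water properties are assumed values for the example, not figures from this article or sizing guidance.

    # Simplified heat-balance sketch: Q = m_dot * cp * dT.
    # The rack load, temperature rise and water properties below are assumed
    # for illustration only, not sizing guidance from this article.

    CP_WATER_KJ_PER_KG_K = 4.18   # specific heat of water
    RHO_WATER_KG_PER_L = 1.0      # approximate density of water

    def required_flow_lpm(heat_load_kw, delta_t_c):
        """Water flow (liters/minute) needed to absorb heat_load_kw
        with a coolant temperature rise of delta_t_c degrees C."""
        mass_flow_kg_s = heat_load_kw / (CP_WATER_KJ_PER_KG_K * delta_t_c)
        return mass_flow_kg_s / RHO_WATER_KG_PER_L * 60.0

    # Hypothetical 80 kW rack with a 10 degree C rise across the cold plates:
    print(round(required_flow_lpm(80.0, 10.0), 1), "L/min")   # ~114.8 L/min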

At the center of any liquid cooling system is the coolant distribution unit (CDU). Depending on the cooling approach, CDUs can either be standalone pieces of equipment or integrated into other equipment. CDUs serve as the heart and the brain of any liquid cooling system, using advanced control algorithms to precisely pump cooled liquid through the cooling system at an optimized rate and temperature.
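
As a rough illustration of that control behavior, the sketch below shows a bare-bones proportional loop that nudges pump speed to hold a coolant supply temperature setpoint. It is a simplification for explanation only; the setpoint, gain and speed limits are assumed values, and it is not any vendor's actual CDU control algorithm.

    # Bare-bones proportional control sketch of a CDU-style pump loop.
    # The setpoint, gain and speed limits are hypothetical values chosen for
    # illustration; this is not a real CDU's control algorithm.

    SETPOINT_C = 32.0                            # target coolant supply temperature
    KP = 8.0                                     # % pump speed per degree C of error
    MIN_SPEED_PCT, MAX_SPEED_PCT = 20.0, 100.0   # pump speed limits

    def control_step(supply_temp_c, current_speed_pct):
        """Raise pump speed when coolant runs warm, lower it when it runs cool."""
        error = supply_temp_c - SETPOINT_C
        new_speed = current_speed_pct + KP * error
        return max(MIN_SPEED_PCT, min(MAX_SPEED_PCT, new_speed))

    # Example: coolant is 2 C above setpoint, pump currently at 50% speed
    print(control_step(34.0, 50.0))              # 66.0 -> pump speeds up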

The best liquid cooling technology depends entirely on the context of the specific data center or server being specified. For users that need to rapidly scale cooling capabilities to support AI chips but do not have time for an extended retrofit or remodel of their facilities, liquid-to-air cooling can be a great choice. For data centers that are looking to improve power usage effectiveness and have invested in liquid facility infrastructure but are still running air-cooled chips, rear-door air-to-liquid cooling is a great option. As data centers and chip manufacturers prepare to deploy high-performance computing IT, they will likely find themselves turning to liquid-to-liquid cooling. There are too many variables to say that one kind of liquid cooling is always the best. The most important consideration is making sure the cooling solution selected is commensurate with the overall objective.

Making the Leap to Liquid

Making the change to liquid cooling may sound complicated, but nobody needs to navigate this transition alone. Choosing the right partner to help with the transition is critical. Look for a cooling partner that offers several key capabilities.

Design and Installation Support

Many people are nervous about putting liquid cooling into data centers. Adding liquid cooling capabilities is an investment, and putting liquid next to IT can cause some anxiety. For this reason, it’s critical to find a partner that can show up on day zero, assess the unique needs of the application, evaluate potential designs and propose a comprehensive solution. Experts on liquid cooling systems can advise on topics like pipework, water quality considerations and power management in addition to the obvious decision of choosing the right CDU or cooling approach. This end-to-end service is critical.

Scalability and Flexibility

The demands of today will not be the demands of tomorrow. Choosing modular data center solutions that allow data centers to scale operations as demand increases is essential. If data centers build only to the power usage capacities required today, they will quickly find themselves conducting rework and further buildouts as technology changes. Data center managers must preserve the ability to add racks and cooling infrastructure within their whitespace to be able to scale with rapidly increasing demand.

Cooling technology has the potential to be a large driver of modularity. Cooling systems must be adaptable enough to support the integration of new equipment as well as the decommissioning of older equipment. As cooling technology continues to improve, data centers can also drive greater IT density. This means that cabling and power distribution must also be designed with a scalable architecture in mind.

Manufacturing Assurance

Data center operators need partners who can not only bring the right designs and expertise to the table, but also manufacture them successfully and reliably at scale. This means relying on partners who promise a high level of quality assurance through both design and extensive testing, including processes such as helium leak and heat load testing. Manufacturers that can provide fully certified and approved solutions with comprehensive CDU testing, factory acceptance testing and mission-critical testing for Tier 1 data centers should be the standard.

Additionally, quality assurance must extend to facility and rack manifolds to ensure no foreign debris or contamination is introduced during manufacturing or shipping.

Data center managers exploring liquid cooling solutions are more likely to find these benefits with partners who have real-world experience deploying a significant number of solutions. While there are exciting new technologies available, making a single prototype and manufacturing at scale are different challenges.

About the Author

Jason Matteson

Jason Matteson is the Global Director of Product Management at nVent Data Solutions. He has worked in the data center industry for 28 years. Prior to joining nVent, Jason worked at IBM, Lenovo, Vertiv and most recently Iceotope Technologies. Jason contributes to several industry consortiums, including ASHRAE and OCP, holds over 100 U.S. patents, has authored or co-authored several intellectual-property publications, and has given several presentations on industry power and cooling trends at technical symposiums and conferences. Jason received a Bachelor of Science degree in Mechanical Engineering from Rochester Institute of Technology in 1997.
