Best Practices for Deploying Liquid Cooling in Existing Data Centers
Last week we launched our article series on liquid cooling in data centers. This week we'll explore some common liquid cooling systems, as well as the logical progression for deploying liquid cooling solutions.
Let’s look at the most common options for liquid cooling.
Liquid-to-air cooling is commonly the first choice when a data center workload is approaching the limits of its air-to-air cooling. In these systems, coolant is circulated near the heat sources, removing heat from the IT devices, then flows to a radiator that performs the liquid-to-air heat transfer. The liquid is then circulated back to the IT devices. This technology is most commonly represented by cold plates that sit directly on CPUs, GPUs, and memory, but it also applies to technologies like rear door heat exchangers, which move heat from an entire rack to the external radiator system.
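To put rough numbers on what a coolant loop can carry, here is a minimal sketch of the standard sensible-heat relationship Q = ṁ · c_p · ΔT. The flow rate, temperature rise, and water-like fluid properties are illustrative assumptions, not figures for any particular cold plate or rear door product.

```python
# Rough sketch: estimating how much heat a coolant loop can carry away.
# The flow rate, temperature rise, and fluid properties below are illustrative
# assumptions, not specifications from any particular product.

def heat_removal_kw(flow_lpm: float, delta_t_c: float,
                    density_kg_per_l: float = 1.0,
                    specific_heat_j_per_kg_c: float = 4186.0) -> float:
    """Heat carried by the loop in kW, from Q = m_dot * c_p * delta_T."""
    mass_flow_kg_s = flow_lpm / 60.0 * density_kg_per_l
    return mass_flow_kg_s * specific_heat_j_per_kg_c * delta_t_c / 1000.0

# Example: 30 L/min of water warming 10 °C across the cold plates carries
# roughly 21 kW -- well beyond what air alone handles in a typical rack.
print(f"{heat_removal_kw(30, 10):.1f} kW")
```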
None of these solutions is exclusive; they can be mixed and matched to meet specific cooling needs.
IS THERE A LOGICAL PROGRESSION FOR DEPLOYING LIQUID COOLING SOLUTIONS?
Most enterprises will be looking at adding liquid cooling solutions to their data centers as the need for additional cooling capacity becomes clear. So where can we start, and what are the goals?
Most data centers find that they need to support a specific task that requires high densities and increased cooling capabilities. Right now, this is likely to be a cluster deployment for high-performance computing or to support artificial intelligence and machine learning. While you can use many standard data center techniques (hot aisle containment, etc.), dropping a significant source of heat into an existing facility can bring a new set of problems. That is what makes this the perfect opportunity to begin introducing liquid cooling solutions to your data center. While in-row cooling, rear door heat exchangers, direct-to-chip cooling, and immersion cooling are all available options, starting with the simplest solution, such as a passive rear door heat exchanger, can minimize the impact on your data center while allowing optimal performance of your high-density computing solution.
This variety of liquid cooling options makes it possible to deploy liquid cooling gradually rather than as a rip-and-replace project, and it allows for interim choices. You may choose to use a rear door heat exchanger for a single rack while you build out a more complete liquid-to-liquid cooling solution that will be available as the high-density deployment grows. Or you can save the more complex deployment for your next-generation data center and progressively increase the cooling capability of your existing space, adding RDHx systems, enclosure cooling solutions, and liquid-cooled cabinets that can be matched to the demands of your IT workloads.
There is significant flexibility available with cooling choices that do not require reengineering your data halls or your entire data center. Mixing and matching those solutions to the specific demands of the IT hardware can increase the efficiency of the data center while making more options available to meet those specific needs. An average server in your existing data center generates about 1.5 kW of heat; according to Nvidia, a latest-generation AI server using their GPUs can generate five or six times that much. Configuring your entire data center to support that level of cooling demand is unlikely to be efficient, so finding the right solution to that point problem will be the short-term answer.
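As a back-of-the-envelope illustration of that gap, the sketch below compares rack-level heat loads using the roughly 1.5 kW-per-server figure above and the "five or six times" multiplier attributed to Nvidia; the servers-per-rack counts are assumptions chosen for illustration only.

```python
# Illustrative rack heat-load comparison (servers-per-rack counts are assumed).
conventional_server_kw = 1.5                    # typical server, per the figure above
ai_server_kw = conventional_server_kw * 5.5     # midpoint of "five or six times"

conventional_rack_kw = 20 * conventional_server_kw   # e.g. 20 servers per rack
ai_rack_kw = 8 * ai_server_kw                         # e.g. 8 GPU servers per rack

print(f"Conventional rack: ~{conventional_rack_kw:.0f} kW")   # ~30 kW
print(f"AI/GPU rack:       ~{ai_rack_kw:.0f} kW")              # ~66 kW
```

Even with conservative assumptions, a single GPU-dense rack can demand more cooling than several conventional racks combined, which is why a targeted point solution usually beats re-rating the whole facility.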
It is also important to note that not everything in the data center needs to be liquid cooled, or even should be, at this point in time. Devices like switches, routers, and network interface cards typically aren't liquid cooled, as heat generation is rarely an issue for them. Storage devices are just beginning to see purpose-built tools for keeping them at lower temperatures, since heat can reduce MTTF, but they don't generate anywhere near the heat of a rack of AI GPUs, so dedicated cooling for storage is a point solution that few will require. Other common data center equipment, such as power distribution units, backup batteries, and the various other pieces of electronics found in the data center, rarely requires additional cooling, though a liquid-cooled enclosure would let you cool any rack-mounted equipment you chose to install in it. These devices are rarely the point of failure when heat rises in the data center.
Download the full report, Liquid Cooling is in Your Future, featuring nVent, to learn more. In our next article, we'll share tips for evaluating your environment for the move to liquid cooling.