New Liquid Cooling Designs Refine Options for Data Centers

Sept. 9, 2021
New requirements around efficiency and using space wisely mean that leaders in the technology space need to look at cooling differently. A new Data Center Frontier special report, courtesy of TMGcore, looks at how liquid cooling is driving the evolution of the data center industry.

Last week in our special report series, we discussed how liquid cooling adoption has evolved over time. This week, we’re looking at new liquid cooling designs, standards, and liquid cooling systems that are helping mitigate risks and alleviate challenges around data center cooling.

Download the full report.

The workloads we rely on every day to stay productive have fundamentally changed, and new requirements around efficiency and using space wisely mean that leaders in the technology space need to look at cooling differently. Due to their higher densities, specific heat capacities, and thermal conductivities, liquids are generally far better cooling agents than gases. Water, for example, has a heat-carrying capacity roughly 3,500 times higher than air and a thermal conductivity about 24 times greater. This makes liquid cooling far more efficient than air cooling in the right use cases, particularly as the density of the environment starts to grow.
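
As a rough sanity check on those figures, the comparison comes down to volumetric heat capacity (density times specific heat) and thermal conductivity. The short sketch below uses approximate room-temperature property values; these are textbook assumptions rather than numbers from the report, and the exact ratios shift slightly with temperature.

```python
# Rough check of the water-vs-air comparison cited above, using approximate
# room-temperature property values (illustrative figures, not from the report).

# specific heat (J/kg.K), density (kg/m^3), thermal conductivity (W/m.K)
water = {"cp": 4186.0, "rho": 997.0, "k": 0.60}
air   = {"cp": 1005.0, "rho": 1.20,  "k": 0.026}

# Heat-carrying capacity per unit volume = density * specific heat
heat_capacity_ratio = (water["rho"] * water["cp"]) / (air["rho"] * air["cp"])
conductivity_ratio = water["k"] / air["k"]

print(f"Volumetric heat capacity ratio: ~{heat_capacity_ratio:,.0f}x")  # ~3,500x
print(f"Thermal conductivity ratio: ~{conductivity_ratio:.0f}x")        # ~23-24x
```
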
When working with liquid cooling, it's important to understand the options available. Generally speaking, there are two types of data center liquid cooling solutions:

Indirect Liquid Cooling (cold plates, chilled plates). Heat sinks, chill plates, and cold plates are frequently used in high-power electrical systems. These devices provide a flat, cooled surface for the integration of electrical equipment. Heat is transferred from the integrated component to the cooled surface by conduction, and then conductively through the heat sink or chill plate to the cooling medium.

Direct Liquid Cooling (immersion). An immersion-cooled system puts the heat-producing components into direct contact with a cooling liquid, so heat travels to the liquid through convection. Heat can be removed from the coolant in several ways: the coolant can be pumped to a heat exchanger, cooled, and recirculated, or heat can transfer passively from the liquid to the enclosure, with the enclosure cooling by natural convection to the surrounding air; this is how power transformers on utility power lines are cooled.
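
To make the two heat paths concrete, a simplified steady-state model treats each as a thermal resistance between the component and the coolant. Every number below (component power, plate dimensions, heat transfer coefficients) is an assumed, illustrative value rather than a specification from any vendor or from the report.

```python
# Simplified steady-state comparison of the two heat paths described above.
# All component sizes, thicknesses, and heat transfer coefficients are
# assumed values for illustration, not vendor specifications.

def temp_rise_cold_plate(power_w, plate_k=390.0, thickness_m=0.003,
                         area_m2=0.0016, h_liquid=5000.0):
    """Indirect cooling: conduction through a copper cold plate, then
    convection into the pumped coolant on the far side."""
    r_conduction = thickness_m / (plate_k * area_m2)   # K/W through the plate
    r_convection = 1.0 / (h_liquid * area_m2)          # K/W, plate -> coolant
    return power_w * (r_conduction + r_convection)

def temp_rise_immersion(power_w, area_m2=0.0016, h_boiling=20000.0):
    """Direct (two-phase immersion) cooling: heat moves straight from the
    package surface into the boiling dielectric fluid."""
    return power_w / (h_boiling * area_m2)

chip_power = 300.0  # watts, a hypothetical high-power processor
print(f"Cold plate rise: {temp_rise_cold_plate(chip_power):.1f} K above coolant")
print(f"Immersion rise:  {temp_rise_immersion(chip_power):.1f} K above fluid")
```
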


Beyond these two high-level definitions of liquid cooling, we dive into a bit more detail. The following chart covers the components of liquid cooling based on the two definitions above and their applications in the data center.

Data center cooling classification

Looking further, the chart below examines the various data center cooling methodologies and their corresponding use cases. Interestingly, some of the most advanced development around liquid cooling revolves around immersion cooling mechanisms. New designs produce all-in-one systems, delivered as integrated components, to support massive amounts of density alongside critical data center components like network, storage, compute, and power.

New Designs for Integrated Liquid Cooling Solutions

For example, new solutions now deliver unparalleled densities for high-performance computing (HPC) applications. One such platform provides customers with a fully integrated, nearly turnkey, 600 kW, 100 OIU-capable two-phase liquid immersion cooling data center platform that can be deployed in virtually any environment.

(Source: TMGcore)

Similarly, new integrated liquid cooling solutions provide the same degree of highly efficient, two-phase liquid immersion-cooled data center capacity with a smaller footprint for applications outside the core. These types of platforms support 120 kW of power and 20 OIU of capacity.
Finally, for smaller footprints or edge ecosystems, you can deploy specifically integrated liquid cooling solutions for applications that benefit from compute power on the front lines, close to facilities, events, or incidents. These dense, smaller platforms are purpose-built to serve as the ideal ecosystem for these applications.

(Source: TMGcore)

Remember, these technologies come with the core components required to run a liquid cooling ecosystem built in. Again, network, storage, compute, and power can all be integrated into the system. So, referencing the chart above, you're not using component-based liquid cooling that may require a data center overhaul. Instead, you can drop these all-in-one systems into almost any type of data center ecosystem and have a self-contained liquid cooling platform.
With this in mind, and as you go on your liquid cooling journey, there are two key points to keep in mind, according to the special report.

  1. Vendor considerations. In the past, liquid cooling was a bit more of a puzzle-piece configuration: you would install only what you needed, retrofit parts of your data center, and operate independently with a given vendor. Today, liquid cooling solutions take an integrated approach where your entire use case can be encompassed in a single liquid-cooled compute pod. As exciting as that might seem, this is also an excellent opportunity to get to know your vendors and what they can do for your data center. Although these new types of integrated pods are far easier to deploy than ever before, it's still important to validate and challenge your vendors. Moreover, running pilot programs and small-scale proofs of concept will help you learn more about these systems and how they fit into your data center.
  2. Serviceability. Keeping your systems up and running is a persistently challenging task, and throwing liquids into the standard data center maintenance equation can make it more complicated. With component-based liquid cooling, you often needed to find partners to service critical components of your liquid-cooled ecosystem: one vendor might examine why a liquid-cooled component isn't working properly, while an entirely different partner examines why the rest of the server is failing. This division of duties can be cumbersome and, in some cases, risky. The difference with fully integrated systems is that you can now leverage one partner to support your liquid cooling needs. So it is best to look for a full scope of infrastructure monitoring and management services for a fully integrated liquid cooling solution that incorporates power, servers, networking, and storage. These services can include ensuring that all aspects of the integrated liquid-cooled platform are operating at optimal efficiency. Similarly, a partner can leverage data-driven metrics to proactively assist with routine maintenance of the platform and even monitor the customer's operational workloads. From there, the partner can ensure that any issues are detected, escalated, and resolved.

Busting Liquid Cooling Myths

In the past, specific components of the data center ecosystem were complicated to cool with liquids. Some of the biggest challenges in deploying liquid cooling revolved around spinning media and switching technologies. This was the main reason why component-based liquid cooling was beneficial. That is, you could cool only the parts of a server or infrastructure that you needed. The challenge is that you’d need to retrofit parts and pieces of your data center to support this. In examining immersion cooling, we continue to see similar challenges around hard drives and networking equipment. Consider the chart below.


To better understand liquid cooling and specific challenges, let’s discuss industry concerns and how new liquid cooling technologies are helping mitigate risks and alleviate challenges. The following list is a collection of the top 10 concerns and how they are being overcome with new and modern liquid cooling solutions.

  • Cable wicking. New designs, like those from OTTO, can deliver a cableless backplane to help drive down the number of cables in the tank. Cableless techniques help keep contaminants out of the fluid and allow for robotic hot-swapping of the servers. For the cables that do need to go into the tank, several commercially available options withstand being in the fluid with little to no issue. To remove any contaminants extracted from the cables, liquid cooling solutions include a redundant filtration system that keeps the fluid clean. As a long-term strategy, liquid cooling leaders will work directly with server OEMs to ensure that proper cabling is used for the two-phase immersion environment.
  • Pressure regulation. Systems like those from TMGcore are atmospheric. Utilizing a bellows and vapor recovery system, they can regulate the pressure in the tank so that it never rises above 1 psi. By utilizing atmospheric systems, TMGcore alleviates the need for heavy-duty seals, reduces the chance of leakage, and greatly reduces the complexity of the overall design. TMGcore is also the only company globally that has operated a pressurized two-phase immersion tank in commercial production and at scale. The atmospheric, pressure-controlled architecture keeps pressure in the ecosystem constantly regulated.
  • Liquid cooling is not really used in enterprise or traditional data centers. For the time being, that is correct. However, new, fully integrated liquid cooling systems rely on standard infrastructure found in a data center, including power, connectivity, and, depending on the unit, a water loop. In traditional data center deployments, engineers can use the return water instead of a chilled water loop, reducing PUE. New types of liquid cooling systems are engineered to need hookups similar to traditional data center infrastructure while providing greater benefits. Finally, utilizing robotic systems like OTTO can make the move from air to liquid as easy as possible. One of the significant hurdles for these systems is ergonomics, and the robotic system eliminates human error along with the weight issues.
  • Water treatment. New integrated liquid cooling solutions leverage coils that do not require any water treatment beyond what is already used in the data center cooling loop.
  • Combustible oils. If this is a concern, work with liquid cooling solutions that do not use any oils. Systems like OTTO leverage fluids also used in fire suppression systems, and the fluorochemical fluids used in two-phase liquid immersion cooling (2PLIC) are non-flammable and non-combustible. Finally, it's always recommended to examine the Safety Data Sheets for each fluid you plan on using.
  • Toxicity. Fluorochemical fluids for immersion cooling are designed to be safe in their intended uses, with wide margins of safety. The margin of safety is assessed by comparing anticipated exposure to recommended exposure guidelines, such as the eight-hour time-weighted average (TWA) exposure guideline. Leaders in liquid cooling partner with fluid suppliers like 3M, which is known for its integrity and ethics. According to 3M's Safety Data Sheets, test results demonstrate that Novec Engineered Fluids and Fluorinert Electronic Liquids are relatively low in acute toxicity; thus, they are not classified as hazardous under the Globally Harmonized System of Classification and Labelling of Chemicals (GHS). All 3M fluids have undergone rigorous toxicity testing in accordance with global standards and have been accepted for use in targeted applications by the U.S. EPA and other international regulatory bodies. The exposure guidelines for 3M fluids are included in the product Safety Data Sheet (SDS).

    (Source: TMGcore)

  • Material compatibility. There has been a lot of advancement when it comes to liquid cooling and material compatibility. For example, TMGcore has developed a deep understanding over the last four years of what can and cannot go into the fluids. This experience has allowed liquid cooling leaders to produce products that work reliably in the fluid. An example of this is the HydroPowerBlade. This server design is one of the keys to the OTTO power delivery system's high density and high efficiency. Instead of relying on traditional distribution and rectification technologies, TMGcore has developed multiple versions of its HydroPowerBlade offering, allowing for the delivery of over 120,000 total watts (60,000 N+N redundant) to each HydroRack. The input to each OTTO unit is provided by a pair of high-current conductors, making for direct facility connections. From there, the primary power is distributed inside the HydroPod to the input of each HydroPowerBlade, where it is converted and rectified from AC to high-quality 54V DC. The HydroPowerBlades are themselves cooled by the immersion fluid, taking advantage of the high-efficiency 2PLIC process to reject the heat from the HydroPowerBlade itself. The DC power is then delivered to each HydroBlade, where it is either used natively by the server at 54V or further converted to traditional ATX voltages (a rough illustration of the distribution math appears after this list). The entirety of the HydroPowerBlade is monitored and managed by the integrated software management system, providing the user with single-pane-of-glass visibility into their unit's entire power system. As we know, material compatibility will be one of the driving factors of immersion success. Look for partners with a lab that provides a wide range of services to ensure products meet the demands of the immersion environment.
  • Servicing, maintenance, and the complexity of running air- and liquid-cooled systems. This is where next-generation liquid cooling solutions come in. Servicing the servers inside OTTO units, for example, is incredibly easy thanks to the robotics, which eliminate most of the ergonomic issues created by immersion cooling. For two-phase systems, a fluid filter needs to be changed periodically to maintain fluid integrity. And, as mentioned earlier, changing a server can be done with a robot and a traditional server cart. Finally, you can work with integrated liquid cooling solutions that include standard air-cooled cabinets, sometimes called dry racks, in all of their systems. This allows for the seamless incorporation of lower-powered, traditionally cooled equipment and provides simple connections to traditional air racks. If deployed in a traditional data center, platforms like OTTO can utilize the same power and connectivity feeds and attach to the same water loop used for traditional air cooling. Or they can take advantage of higher-voltage, higher-current power feeds and minimize their internal power distribution architecture.
  • Air cooling is simply understood better. Interestingly, liquid cooling is used in everyday applications most people don't know about or forget, such as the radiator in your car or space suits. Large-scale computing is actually one of the last energy-intensive operations not cooled by liquids; everything from nuclear reactors to car engines to paper mills is already cooled with liquids.
  • Risk with liquids being so close to the cabinets. There is a perception of risk with liquid in the cabinet causing an outage. However, the fluids being used are dielectric, meaning they do not conduct electricity, so they are safe for technology components. The fluids used in two-phase systems also do not leave residues.
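
To illustrate the power-delivery reasoning behind the HydroPowerBlade description above, the sketch below compares bus current and resistive loss when the same rack power is distributed at 54 V DC versus a traditional 12 V rail. The 120 kW figure comes from the article; the distribution resistance is an assumed round number for illustration, not a TMGcore specification.

```python
# Illustrative comparison of distributing the same rack power at 54 V DC
# versus 12 V DC. The 120 kW figure comes from the article; the busbar
# resistance is an assumed round number, not a vendor specification.

rack_power_w = 120_000          # total power delivered to a HydroRack
bus_resistance_ohm = 0.0005     # assumed end-to-end distribution resistance

for voltage in (54.0, 12.0):
    current = rack_power_w / voltage            # I = P / V
    loss = current ** 2 * bus_resistance_ohm    # I^2 * R conduction loss
    print(f"{voltage:>4.0f} V bus: {current:>8,.0f} A, "
          f"~{loss / 1000:,.1f} kW lost in distribution")

# Raising the distribution voltage cuts the current (and the I^2*R loss)
# roughly with the square of the voltage ratio, which is why the design
# converts to 54 V DC early and steps down at the blade.
```
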

Finally, one of the most common concerns regarding immersion cooling is cost. Interestingly, with new and modern solutions, this has become a misconception. Two-phase immersion cooling seems complex and expensive. However, given the expanding range of use cases, this type of immersion cooling is far easier to deploy and less costly than many assume. For example, TMGcore has systems that can be deployed in the $2 to $3 per watt range, including fluid and compute. The fluid cost can seem like a significant CapEx; however, the fluid is warrantied by the supplier, and partners like TMGcore provide a fluid-loss guarantee on their systems and offer warranty options on the servers. To reduce both cost and complexity, be sure to work with immersion cooling partners that provide this level of service and guarantee.
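
As a quick sizing exercise with the figures above, the $2 to $3 per watt range translates into a rough budget for a given capacity. The 600 kW figure is the platform size described earlier; this is simple arithmetic for illustration, not vendor pricing.

```python
# Rough budget range using the $2-$3 per watt figure cited in the report.
# Simple arithmetic for illustration; not vendor pricing.

capacity_w = 600_000            # e.g., the 600 kW platform described earlier
low, high = 2.0, 3.0            # dollars per watt, including fluid and compute

print(f"Estimated deployment cost: ${capacity_w * low:,.0f} - ${capacity_w * high:,.0f}")
# -> roughly $1,200,000 - $1,800,000 for a 600 kW system
```
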

As we discussed in the previous article, greater levels of density directly translate to increased savings when using liquid cooling. Compared to a traditional data center at 10 kW/rack, a 2x compaction (20 kW/rack) results in a first-cost savings of 10%. When 4x compaction is assumed (40 kW/rack), savings go up to 14%.
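
Those compaction numbers can be made concrete with a short calculation. For a fixed IT load, doubling or quadrupling rack density cuts the rack count proportionally; the 10% and 14% first-cost savings are the report's figures, applied here to an assumed IT load and baseline build cost.

```python
# Rack count and first-cost effect of compaction for a fixed IT load.
# The 10% and 14% savings percentages come from the report; the 1 MW load
# and baseline build cost are assumed for illustration.

it_load_kw = 1000.0             # assumed facility IT load
baseline_cost = 10_000_000      # assumed first cost of the air-cooled build

scenarios = [
    ("Traditional, 10 kW/rack", 10.0, 0.00),
    ("2x compaction, 20 kW/rack", 20.0, 0.10),
    ("4x compaction, 40 kW/rack", 40.0, 0.14),
]

for name, kw_per_rack, savings in scenarios:
    racks = it_load_kw / kw_per_rack
    cost = baseline_cost * (1.0 - savings)
    print(f"{name}: {racks:.0f} racks, first cost ~${cost:,.0f}")
```
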

As liquid cooling systems continue to take on emerging roles in new and innovative use cases, it's important to note that these integrated solutions have come a long way. Not only are they leveraging all-in-one design concepts, but they're also using robotics to further automate server and component maintenance while operating in a liquid-cooled ecosystem.

Download the full report, “The State of Data Center Cooling: A Key Point in Industry Evolution and Liquid Cooling,” courtesy of TMGcore, to learn how new data center and business requirements are shaping digital infrastructure. In our next article, we’ll look at how liquid cooling can be integrated in data centers and at the edge. Catch up on the previous articles here and here.

About the Author

Bill Kleyman

Bill Kleyman is a veteran, enthusiastic technologist with experience in data center design, management, and deployment. Bill is currently a freelance analyst, speaker, and author for some of our industry's leading publications.
