Hybrid Data Center Designs Offer Flexibility for Transition to Liquid Cooling
As powerful AI hardware is deployed at scale, more data center workloads will require liquid cooling to manage racks of high-density servers. But traditional cloud and enterprise equipment isn’t going anywhere, and air cooling will remain essential for many years to come.
For the next few years, new facilities are likely to feature hybrid designs that support both liquid-cooled racks and air-cooled environments. The rise of AI places a premium on flexible designs that accommodate a range of densities and workloads.
Those were the key takeaways from a panel of cooling experts at the recent DCD Connect New York conference from DataCenterDynamics. These hybrid designs, they said, bring new considerations spanning everything from water piping to floor loading to staff training to service-level agreements.
If AI growth continues apace, increased demand for liquid cooling will serve to stratify the data center market, the panelists said, with some facilities built primarily for fully liquid-cooled AI installations. Some operators may resist liquid cooling altogether, but many will adopt a flexible hybrid approach that can support a transition to broader use of liquid cooling.
“Liquid cooling is here now, but air cooling is not going away,” said Imran Latif, Chief Operating Officer of the Scientific Data & Computing Center at Brookhaven National Laboratory. “There may be elevated upfront capex to ensure new data centers are capable of liquid cooling and have water infrastructure and piping in place. I think it's going to be a hybrid approach for the next few years.”
Considerations for Hybrid Data Center Designs
There are several forms of liquid cooling currently being deployed, including rear-door heat exchangers, chip-level cooling using cold plates, and several types of immersion (single-phase or two-phase). Each requires slightly different infrastructure, but the primary considerations are water piping and the addition of coolant distribution units (CDUs).
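To put rough numbers on those piping requirements, the heat a water loop can remove scales with flow rate and the supply-to-return temperature rise (Q = ṁ × cp × ΔT). The sketch below is illustrative only; the 100kW rack and 10°C rise are assumptions for the example, not figures cited by the panel.

```python
# Rough sizing sketch for a liquid-cooled rack's water loop.
# All figures are assumptions for illustration; real designs depend
# on the cooling technology, water class, and vendor specifications.

SPECIFIC_HEAT_WATER = 4.186   # kJ/(kg*K)
WATER_DENSITY_KG_PER_L = 1.0  # close enough at typical supply temps

def required_flow_lps(rack_kw: float, delta_t_c: float) -> float:
    """Water flow (liters/second) needed to remove rack_kw of heat
    with a supply-to-return temperature rise of delta_t_c, from
    Q = m_dot * cp * delta_T."""
    mass_flow_kg_per_s = rack_kw / (SPECIFIC_HEAT_WATER * delta_t_c)
    return mass_flow_kg_per_s / WATER_DENSITY_KG_PER_L

# A hypothetical 100kW rack with a 10 C rise needs about 2.4 L/s,
# roughly 38 US gallons per minute.
print(f"{required_flow_lps(100, 10):.2f} L/s")
```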
There’s also the question of whether a facility uses a raised floor or a slab. A raised floor provides the option of running water piping underneath the IT equipment. Slab designs have become more popular with the adoption of heavier racks and fan walls for cooling.
But a slab likely means deploying water piping overhead, which is undesirable for some customers. New builds could add channels into the slab to accommodate water pipes.
The DCD panelists said that the variety of approaches to liquid cooling presents some challenges.
“There are no clear standards now for liquid cooling,” said Danielle Rossi, Global Director for Mission Critical Cooling at Trane Technologies. “How can you develop a standard when things are moving so fast? It's very tumultuous. For a little while we'll be basing this on ‘we've done this, we know that it works.’ A lot of these hybrid environments are going to be custom.”
There may also be a learning curve for customers, who will have to consider cooling parameters like supply water temperature as well as rack inlet temperature.
“We will have additional criteria for SLAs (service level agreements),” said Mike Licitra, VP and Solutions Architect at Stream Data Centers. “(Cooling standards body) ASHRAE has defined different water classes, and SLAs should specify the class and expected entering water temperature and flow rates.”
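As a hedged illustration of what such SLA criteria might look like when written down, the sketch below captures them as a simple data structure. The field names and example values are assumptions, not terms from any actual Stream Data Centers agreement; ASHRAE's liquid cooling classes (for example, W32) are named for the maximum facility water supply temperature in degrees Celsius.

```python
from dataclasses import dataclass

@dataclass
class LiquidCoolingSLA:
    """Hypothetical SLA terms for a facility water loop.
    Field names and values are illustrative, not a real contract."""
    ashrae_water_class: str           # e.g. "W32": supply water up to 32 C
    max_entering_water_temp_c: float  # guaranteed entering water temperature
    min_flow_lpm: float               # minimum flow rate, liters per minute
    max_pressure_drop_kpa: float      # allowable drop across rack and CDU

# Example terms for a hypothetical high-density deployment
sla = LiquidCoolingSLA(
    ashrae_water_class="W32",
    max_entering_water_temp_c=32.0,
    min_flow_lpm=140.0,
    max_pressure_drop_kpa=50.0,
)
```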
Licitra said liquid cooling can also impact metrics such as Power Usage Effectiveness (PUE).
“A holistic metric for efficiency may become more important, as some hybrid models may shift some power and PUE responsibility from IT to facilities,” he said.
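The accounting shift Licitra describes falls directly out of the PUE formula, which divides total facility power by IT equipment power. The sketch below uses invented loads to show how counting the same cooling equipment (here, hypothetical CDU pumps) as IT load versus facility overhead changes the reported number.

```python
def pue(facility_overhead_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power."""
    return (facility_overhead_kw + it_kw) / it_kw

# Invented loads: 1,000 kW of servers, 60 kW of CDU pumps, and
# 240 kW of other overhead (chillers, UPS losses, lighting).
servers_kw, cdu_pumps_kw, other_overhead_kw = 1000.0, 60.0, 240.0

# Pumps metered as IT load (e.g., in-rack CDUs on the IT circuit):
print(f"{pue(other_overhead_kw, servers_kw + cdu_pumps_kw):.3f}")  # ~1.226

# The same pumps metered as facility overhead instead:
print(f"{pue(other_overhead_kw + cdu_pumps_kw, servers_kw):.3f}")  # ~1.300
```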
Liquid cooling technologies also require different maintenance protocols, especially immersion, where technicians must lift servers out of tanks of dielectric fluid. That means staff may need to master server swapping and maintenance across different types of cooling.
“The skills are going to need to be spread across the IT staff,” said Latif. “They may not like this, and they may have to come out of their comfort zones.”
How Major Providers Are Approaching AI Design
Most colocation and data center providers will tell you that they’ve been supporting liquid-cooled environments for years and have the expertise to manage equipment in these environments. But these have often been custom installations in a portion of a suite or data hall.
That’s why leading providers are introducing new product offerings for liquid cooling. Digital Realty, Equinix and CyrusOne have all issued press releases in recent months outlining updated plans to support liquid cooling at scale.
So has Aligned Data Centers, which has been notable for its focus on cooling technology and its ability to support densities of up to 50kW per rack using air cooling. Aligned recently introduced DeltaFlow, described as an “extension of these capabilities” that can support up to 300kW per rack.
At DCD last week, the team from Iron Mountain Data Centers shared how their existing modular data centers provide an AI-ready option for enterprise customers interested in liquid cooling. Iron Mountain has been operating IT modules since it acquired IO Data Centers, a pioneer in modular designs, in 2017.
The modules offer 250kW to 300kW of capacity in containerized IT and can support both air-cooled racks and high-density cabinets using rear-door heat exchangers, with piping under the raised floor.
Iron Mountain isn’t alone in advancing modular designs for AI. George Slessman, who pioneered IT modules at IO, now heads DCX, which offers modular units optimized for AI workloads.