Executive Roundtable: Cooling Imperatives for Managing High-Density AI Workloads
For the second installment of our Executive Roundtable for the First Quarter of 2025, we asked our panel of seasoned industry experts for their perspectives on the new cooling technologies and efficiency strategies being deployed to maintain operational stability and sustainability for high-density artificial intelligence (AI) workloads.
The rapid rise of AI workloads has significantly impacted data center operations, particularly in terms of power consumption and thermal management. AI applications, especially those involving large-scale training models, demand substantial computational power, leading to increased energy usage and heat generation. Projections indicate that AI workloads could account for 15% to 20% of total data center energy consumption by 2028, presenting challenges for existing power and cooling infrastructures.
And while viable for a certain range of workloads, traditional air-cooling methods are simply inadequate for managing the heat produced by high-density AI servers, especially as rack power densities exceed 40 kilowatts (kW). To address these challenges, data centers are adopting advanced cooling technologies (read: liquid) and efficiency strategies to maintain operational stability and sustainability.
Given this context, with AI-driven workloads pushing power densities beyond traditional limits, we asked our industry expert roundtable which new cooling technologies and efficiency strategies they see being deployed most effectively to maintain data center operational stability and sustainability.
The seasoned data center industry leaders of our Executive Roundtable for the First Quarter of 2025 include:
- Danielle Rossi, Data Center Strategic Sales Leader, Trane
- John Pasta, Executive Vice President - Data Center Solutions, JLL, Inc.
- Michael Lahoud, Co-Managing Partner, Stream Data Centers
- Ryan Baumann, Vice President of Sales, Power Solutions for the Americas, Rehlko
And now, onto the second DCF Executive Roundtable question for Q1 of 2025.
Data Center Frontier: With the rapid rise of AI-driven workloads pushing power densities beyond traditional limits, what new cooling technologies and efficiency strategies are being deployed to maintain operational stability and sustainability?
Danielle Rossi, Trane: The increased densities required by AI bring an increased need for liquid cooling and its associated design demands.
We have seen the influx of liquid cooling manufacturers and technologies in the market over the last few years, but most have been relatively siloed.
There are many different types of liquid cooling technologies, and each can have different requirements for heat rejection.
Moving forward, the water system of a data center (closed- or open-loop) needs to be designed holistically. Performing at peak efficiency is challenging without the water system being designed and optimized as one system from roof to rack.
This may include multiple loop and heat rejection considerations and requires expert mechanical design customized for each installation.
Earlier, more cohesive vendor engagement in the full system design should become a more prevalent strategy for approaching these deployments.
John Pasta, JLL: The rise of high-density AI workloads is pushing data center cooling systems to their limits. Traditional air-based cooling is insufficient beyond 40-50 kW, prompting a widespread adoption of liquid cooling technologies.
Solutions like direct-to-chip cooling, where coolant is delivered directly to the processors, and rear door heat exchangers, which use liquid to remove heat at the rack level, are becoming standard.
Liquid cooling also offers significant energy efficiency benefits. Compared to air cooling, it requires less power to achieve the same temperature control, reducing overall electricity consumption.
This aligns with sustainability goals by lowering the carbon footprint of cooling operations and enabling innovative practices like heat reuse—where captured heat is repurposed for nearby heating needs, such as office spaces or district heating systems.
Michael Lahoud, Stream Data Centers: For the past two years, Stream Data Centers has been developing a modular, configurable air and liquid cooling system that can handle the highest densities in both mediums. Based on our collaboration with customers, we see a future that still requires both cooling mediums, but with the flexibility to deploy either type as the IT stack destined for that space demands. With this necessity as a backdrop, we saw a need to develop a scalable mix-and-match front-end thermal solution that gives us the ability to late bind the equipment we need to meet our customers’ changing cooling needs.
It’s well understood that liquid far outperforms air in its ability to transport heat, but further to this, with the right IT configuration, cooling fluid temperatures can also be raised, and this affords operators the ability to use economization for a greater number of hours a year. These key properties can help reduce the energy needed for the mechanical part of a data center’s operations substantially.
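The economization point above can be made concrete with a short sketch. All figures here are illustrative assumptions, not measurements from any operator: a synthetic sinusoidal temperature profile, a 6 °C heat-exchanger approach margin, and two example supply temperatures (20 °C for a conventional chilled-water plant, 32 °C for warm-water liquid cooling).

```python
import math

# Synthetic hourly outdoor temperatures for one year: a 12 degC mean with a
# +/-14 degC seasonal swing (assumed climate, for illustration only).
outdoor_c = [12 + 14 * math.sin(2 * math.pi * h / 8760) for h in range(8760)]

APPROACH_C = 6  # assumed margin the heat exchangers need below supply temp

def economizer_hours(supply_temp_c):
    """Count hours per year cold enough to reject heat without chillers."""
    return sum(1 for t in outdoor_c if t <= supply_temp_c - APPROACH_C)

low_supply = economizer_hours(20)   # conventional chilled-water temperature
high_supply = economizer_hours(32)  # warm-water liquid cooling

# Raising supply temperature from 20 degC to 32 degC expands free-cooling
# coverage from roughly half the year to essentially the whole year in
# this assumed climate.
```

The exact hour counts depend entirely on the local climate and the assumed approach margin; the sketch only shows why warmer cooling fluid translates directly into more economizer hours.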
It should also be noted that as servers are redesigned for liquid cooling and the onboard server fans are removed or reduced in quantity, more of the critical power delivered to the server is used for compute. This means that liquid cooling also drives an improvement in overall compute productivity, even though that gain is not captured in facility PUE metrics.
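A small worked example, using assumed rack figures rather than any vendor's data, shows why PUE misses this gain: server fans are metered as IT power, so PUE cannot distinguish a watt spent spinning fans from a watt spent on compute.

```python
def pue(it_kw, overhead_kw):
    """Power Usage Effectiveness: total facility power over IT power."""
    return (it_kw + overhead_kw) / it_kw

# Air-cooled rack (assumed figures): 40 kW IT draw, ~10% of it onboard fans.
it_kw = 40.0
fans_air = 4.0
compute_air = it_kw - fans_air          # 36 kW actually reaches compute

# Liquid-cooled rack: fans removed, rack refilled to the same 40 kW IT draw.
compute_liquid = it_kw                  # all 40 kW reaches compute

pue_air = pue(it_kw, overhead_kw=12.0)     # (40 + 12) / 40 = 1.30
pue_liquid = pue(it_kw, overhead_kw=8.0)   # (40 + 8)  / 40 = 1.20

# PUE improves only through the lower facility overhead; the extra 4 kW of
# compute (36 -> 40 kW) is invisible to PUE because both racks report the
# same 40 kW of IT power.
productivity_gain = compute_liquid / compute_air - 1   # roughly 11%
```

The overhead figures are placeholders; the structural point is that the compute-productivity improvement lives entirely inside the IT term, which PUE treats as a black box.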
Compared to air cooling, liquid cooling certainly has some added management challenges related to fluid cleanliness, concurrent maintainability, and resiliency/redundancy, but once those are accounted for, the clusters become stable, efficient, and more sustainable, with improved overall productivity.
Ryan Baumann, Rehlko: AI workloads put a huge strain on power infrastructure, making backup power essential for data centers to avoid downtime during grid disruptions or peak demand periods.
Backup generators are a go-to solution because they’re dependable and built to handle high-power loads when it matters most. Diesel-powered systems, in particular, deliver fast, high-density power, ensuring data centers stay up and running—even under the intense demands of AI applications.
To boost generator efficiency while also moving toward cleaner energy solutions, many data centers are adopting smarter maintenance strategies. We introduced our Conscious Care program to cut fuel use and lower costs while keeping operations running smoothly.
By letting operators run emergency generators at no load and extending the load interval to every four months, the program helps reduce fuel consumption, air and noise pollution, greenhouse gas emissions, and overall energy expenses.
By implementing more efficient maintenance strategies, data centers can strengthen their power reliability and operational stability while making real progress toward sustainability and efficiency.

Matt Vincent
A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.