CoolIT and Accelsius Push Data Center Liquid Cooling Limits Amid Soaring Rack Densities

April 11, 2025
As racks climb toward 600 kW, vendors are delivering thermal innovations to match. Meanwhile, Dell’Oro’s revised forecast sees data center cooling and power distribution growing 14% annually through 2029.

The data center industry’s thermal baseline is shifting—fast. With NVIDIA’s next-gen racks targeting 600 kW and AI workloads straining traditional infrastructure, cooling innovation has become a front-line imperative. That urgency is now being met with record-setting performance from two liquid cooling leaders: Accelsius and CoolIT Systems. This article examines how their latest systems push the limits of what’s possible at the chip, rack, and row levels of the data center, offering critical pathways to support the explosive growth of accelerated computing.

These announcements land as Dell’Oro Group raises its forecast for data center physical infrastructure to $61 billion by 2029, citing not just stronger-than-expected 2024 results but also growing momentum among Tier 2 cloud providers and telco-backed AI buildouts. With rack power densities jumping from today’s average of roughly 15 kW to as high as 120 kW in AI deployments, the race is clearly on to build thermal and power infrastructure that can scale without compromise.

Accelsius Hits 4,500W Milestone as Two-Phase Liquid Cooling Heats Up for AI Racks

As next-generation AI workloads drive up power densities across GPUs, servers, and full racks, the challenge of thermal management is rapidly becoming existential for data center design. This week, Accelsius, an Austin-based cooling startup with deep roots in two-phase thermal technologies, announced that its NeuCool platform has achieved industry-leading performance benchmarks for direct-to-chip liquid cooling—setting new thresholds for power, temperature resilience, and scalability.

In a series of R&D tests simulating next-gen AI workloads, Accelsius pushed its NeuCool cold plate to 4,500 watts on a thermal test vehicle designed to mimic a GPU socket. That figure is not only the highest documented load handled by any direct-to-chip cooling solution to date—it also signals thermal headroom for future AI accelerators and edge inference devices already headed toward 4,000W+ TDPs. Importantly, the test ended not because of thermal failure, but because the test infrastructure itself hit its power limit—a meaningful distinction.

The second milestone came at the rack level. Accelsius demonstrated that its in-row two-phase CDU, paired with retrofitted cold plates on a four-way H100 server, could cool a fully loaded 250 kW rack even when fed with facility water at 40°C—well above conventional thresholds. Using 375 liters per minute (LPM) of flow and standard PG25 coolant, the system kept GPU junction temperatures below NVIDIA’s thermal throttle limit (~87°C) even under full load, validating the resilience of two-phase cooling in warm-water scenarios.
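
As a sanity check on those rack-level figures, a simple heat balance on the facility-water side (Q = ṁ·cp·ΔT) shows roughly how much the coolant warms as it carries away 250 kW at 375 LPM. The Python sketch below assumes approximate PG25 properties (roughly 1,020 kg/m³ density and 3.9 kJ/kg·K specific heat); it is a back-of-envelope illustration, not a figure from Accelsius's test data.

```python
# Rough heat balance on the facility-side PG25 loop of a 250 kW rack.
# Fluid properties are textbook approximations, not measured values.

Q_WATTS = 250_000.0     # rack heat load
FLOW_LPM = 375.0        # facility-side flow rate cited in the test
RHO_KG_M3 = 1020.0      # approximate density of a 25% propylene glycol mix
CP_J_KG_K = 3900.0      # approximate specific heat of PG25

flow_m3_s = FLOW_LPM / 1000.0 / 60.0        # liters per minute -> m^3/s
mass_flow_kg_s = flow_m3_s * RHO_KG_M3
delta_t_c = Q_WATTS / (mass_flow_kg_s * CP_J_KG_K)

print(f"Mass flow: {mass_flow_kg_s:.2f} kg/s")
print(f"Coolant temperature rise across the rack: ~{delta_t_c:.1f} °C")
```

At the cited flow rate the loop picks up roughly 10°C, so 40°C supply water would return at around 50°C, useful context for the warm-water results described next.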

According to Accelsius, this capability to operate at 6–8°C higher inlet temperatures than typical single-phase systems could translate into more than 25% energy savings on cooling, while also unlocking significantly more free cooling hours across diverse climates. The solution’s stability across 20°C, 30°C, and 40°C inlet water tests suggests the platform is not only robust today but also adaptable to emerging infrastructure trends such as NVIDIA’s 600 kW Kyber racks and vertically oriented servers.
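
One way to appreciate why warmer inlet water is a meaningful stress test is to look at the shrinking thermal budget. The Python sketch below computes the total junction-to-coolant thermal resistance a cold plate must hold at each tested inlet temperature to keep the 4,500 W test-vehicle load under the ~87°C throttle point cited above; the resulting targets are illustrative back-of-envelope numbers, not Accelsius specifications.

```python
# Back-of-envelope thermal resistance budget for a 4,500 W socket at the
# three tested inlet temperatures. Figures are illustrative only.

P_WATTS = 4500.0          # socket power from the Accelsius test vehicle
T_JUNCTION_MAX_C = 87.0   # approximate NVIDIA thermal throttle point

for t_inlet_c in (20.0, 30.0, 40.0):
    # Required junction-to-coolant resistance: R = (Tj_max - T_inlet) / P
    r_required_c_per_w = (T_JUNCTION_MAX_C - t_inlet_c) / P_WATTS
    print(f"Inlet {t_inlet_c:.0f} °C -> required R <= "
          f"{r_required_c_per_w * 1000:.1f} m°C/W")
```

Moving from 20°C to 40°C water cuts the allowable junction-to-coolant resistance from about 15 to about 10 milli-degrees Celsius per watt, which is why stable performance across all three inlet temperatures is a notable result.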

“We’re showing customers that we can easily meet current performance requirements and scale our performance to meet the needs of the recently announced 600kW racks,” said Dr. Richard Bonner, CTO at Accelsius. “Our R&D team has also prepared us for rapidly evolving chip and server architectures, such as 4,500W TDP sockets and vertically oriented blade servers.”

Accelsius will showcase both test results and ongoing cold plate research at Data Center World (Booth #524) in Washington, DC (April 15–17), followed by a technical presentation at the OCP EMEA Summit in Dublin on April 29. The latter will focus on cooling vertically mounted server blades, an increasingly relevant design element for high-density enclosures like NVIDIA’s Vera Rubin Ultra.

In a statement timed with the announcement, Accelsius CEO Josh Claman emphasized the strategic significance of thermal innovation for the AI age:

Jensen Huang's keynote at GTC highlighted the tremendous innovation we are experiencing in AI. Recent breakthroughs touch every aspect of the industry and exponentially increase the necessity for the infrastructure supporting AI to evolve at the same pace. We are seeing the limitations of current AI cooling infrastructure and must invest in solutions that can meet and scale with these future requirements. As AI systems grow more complex and power-intensive, in data centers and edge locations, the industry must prioritize advancements in cooling technologies to ensure that innovation isn’t bottlenecked by outdated infrastructure. This was particularly apparent in NVIDIA’s announcement of Vera Rubin Ultra, which at 600 kW per rack, speaks to the continued innovation required to power and cool future AI workloads.

CoolIT Raises the Bar with 1.5 MW Row-Based CDU to Power AI’s Thermal Future

The Accelsius announcement highlights just how rapidly liquid cooling solutions are evolving—but Accelsius isn’t the only company staking out leadership in this critical space. Just weeks earlier, Calgary-based CoolIT Systems introduced the CHx1500, a new high-water mark for row-based coolant distribution units (CDUs) that combines raw cooling power, a compact form factor, and serviceability in a design aimed squarely at the needs of AI and HPC deployments.

CoolIT contends that the CHx1500, with a peak cooling capacity of 1,500 kW, is the highest-performing liquid-to-liquid CDU in its class.

Developed in close collaboration with hyperscalers and leading processor manufacturers, the unit delivers 1.2 liters per minute per kilowatt (LPM/kW) at a 5°C approach temperature difference (ATD)—enough to support the most thermally aggressive deployments on the roadmap, including nine NVIDIA GB200 NVL72 racks.
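
Those ratios translate into straightforward hydraulic numbers. At 1.2 LPM/kW, a full 1,500 kW load implies roughly 1,800 LPM of secondary flow, and a basic water-side heat balance gives the corresponding loop temperature rise. The Python sketch below uses generic water properties and is an illustration of the published ratio, not a calculation from CoolIT's datasheet.

```python
# Implied secondary-loop flow and temperature rise for a CDU rated at
# 1.2 LPM per kW. Water properties are generic; results are illustrative.

LPM_PER_KW = 1.2
LOAD_KW = 1500.0
RHO_KG_M3 = 1000.0      # approximate density of the secondary coolant
CP_J_KG_K = 4186.0      # approximate specific heat of water

flow_lpm = LPM_PER_KW * LOAD_KW                        # total secondary flow
mass_flow_kg_s = flow_lpm / 1000.0 / 60.0 * RHO_KG_M3  # LPM -> kg/s
delta_t_c = LOAD_KW * 1000.0 / (mass_flow_kg_s * CP_J_KG_K)

print(f"Secondary flow at full load: {flow_lpm:.0f} LPM")
print(f"Implied loop temperature rise: ~{delta_t_c:.1f} °C")
```

That works out to about 1,800 LPM of flow and a roughly 12°C rise across the secondary loop at full load.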

“The CHx1500 sets the standard for CDU performance,” said Neil Mulji, Vice President of Product at CoolIT. “It provides the best cost per kW while delivering the features and functionality our OEM and hyperscale customers demand.”

Beyond brute force, CoolIT's latest CDU is optimized for density and pressure. A comparison chart provided by the company shows the CHx1500 outperforming other major CDUs on virtually every front:

  • 27% to 148% greater total cooling load than rival models.
  • 53% to 463% higher cooling load density (up to 1,516 kW/m²).
  • 26% to 75% higher secondary pressure head, peaking at 44 psi.

And it does so in a tight physical footprint: 750 mm x 1,200 mm—a single rack-sized unit that maintains front and back serviceability. The design includes hot-swappable critical components, built-in 25-micron filters, and redundant systems, combining uptime reliability with operational flexibility. Intelligent onboard controls dynamically regulate temperature, flow, and pressure, accessible through a 10-inch touchscreen or remotely via Redfish, SNMP, Modbus, TCP/IP, and other common protocols.
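
Because the unit speaks Redfish alongside SNMP and Modbus, its telemetry can be folded into existing DCIM or monitoring stacks with ordinary HTTP calls. The Python sketch below polls a Redfish-style sensor collection for readings such as coolant temperature and flow; the host, credentials, and resource path are placeholders, since the CHx1500's actual Redfish schema isn't documented here.

```python
# Minimal sketch: polling CDU telemetry over a Redfish-style REST API.
# Host, credentials, and resource paths are placeholders, not CoolIT values.

import requests

CDU_HOST = "https://cdu.example.net"            # placeholder management address
SENSORS_PATH = "/redfish/v1/Chassis/1/Sensors"  # illustrative sensor collection
AUTH = ("admin", "changeme")                    # placeholder credentials

resp = requests.get(f"{CDU_HOST}{SENSORS_PATH}", auth=AUTH,
                    verify=False, timeout=10)
resp.raise_for_status()

# A Redfish collection lists its members; each sensor is its own resource.
for member in resp.json().get("Members", []):
    sensor = requests.get(f"{CDU_HOST}{member['@odata.id']}",
                          auth=AUTH, verify=False, timeout=10).json()
    print(sensor.get("Name"), sensor.get("Reading"), sensor.get("ReadingUnits"))
```

In practice the same data is reachable over SNMP or Modbus for facilities teams that standardize on those protocols.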

The CHx1500’s construction reflects CoolIT’s 24 years of direct liquid cooling (DLC) experience, using stainless-steel piping and high-grade wetted materials to meet the rigors of enterprise and hyperscale data centers. It’s also designed to scale: not just for today’s most power-hungry processors, but for future platforms expected to push well beyond them.

The CHx1500 is now available for global orders, and CoolIT is offering full lifecycle support in over 75 countries, including system design, installation, CDU-to-server certification, and maintenance services—critical ingredients as liquid cooling shifts from a high-performance niche to a baseline requirement for AI infrastructure at scale.

Capex Follows Thermals: Dell’Oro Forecast Signals Surge in Cooling and Rack Power Infrastructure

Between Accelsius and CoolIT, the message is clear: direct liquid cooling is stepping into its maturity phase, with products engineered not just for performance, but for mass deployment.

Still, technology alone doesn’t determine the pace of adoption. The surge in thermal innovation from Accelsius and CoolIT isn’t happening in a vacuum. As the capital demands of AI infrastructure rise, the industry is turning a sharper eye toward how data center operators account for, prioritize, and report their AI-driven investments.

To wit: according to new market data from Dell’Oro Group, the transition toward high-power, high-density AI racks is now translating into long-term investment shifts across the data center physical layer. Dell’Oro has raised its forecast for the Data Center Physical Infrastructure (DCPI) market, predicting a 14% compound annual growth rate (CAGR) through 2029, with total revenue reaching $61 billion.

That revision stems from stronger-than-expected 2024 results, particularly in the adoption of accelerated computing by both Tier 1 and Tier 2 cloud service providers. The research firm cited three catalysts for the upward adjustment:

  • Accelerated server shipments outpaced expectations.
  • Demand for high-power infrastructure is spreading to smaller hyperscalers and regional clouds.
  • Governments and Tier 1 telecoms are joining the buildout effort, reinforcing AI as a decade-long infrastructure wave.

The report singles out thermal management as a defining pivot point. While average rack densities still hover around 15 kW, AI workloads are pushing requirements into the 60 to 120 kW range—well beyond the reach of traditional air cooling. As Dell’Oro founder Tam Dell’Oro noted, “The biggest change is unfolding in thermal management – the transition from air to liquid cooling.”
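
A rough heat-capacity comparison makes that gap concrete. The Python sketch below estimates the airflow needed to remove a 120 kW rack load at a typical 15°C air temperature rise versus the water flow needed at a 10°C rise; all property values are textbook approximations, intended only to show the scale difference, not to model any specific product.

```python
# Rough comparison: air vs. water flow needed to remove a 120 kW rack load.
# Property values and allowed temperature rises are textbook approximations.

Q_WATTS = 120_000.0

# Air at ~25 °C: density ~1.18 kg/m^3, cp ~1,005 J/(kg*K), 15 °C rise allowed.
air_mass_kg_s = Q_WATTS / (1005.0 * 15.0)
air_m3_s = air_mass_kg_s / 1.18
air_cfm = air_m3_s * 2118.88                 # m^3/s -> cubic feet per minute

# Water: density ~1,000 kg/m^3, cp ~4,186 J/(kg*K), 10 °C rise allowed.
water_mass_kg_s = Q_WATTS / (4186.0 * 10.0)
water_lpm = water_mass_kg_s / 1000.0 * 60.0 * 1000.0   # kg/s -> L/min

print(f"Air:   ~{air_cfm:,.0f} CFM through a single rack")
print(f"Water: ~{water_lpm:,.0f} LPM of liquid flow")
```

On those assumptions, a single 120 kW rack would need on the order of 14,000 CFM of air but only about 170 LPM of water, which is why the 60 to 120 kW range effectively forces the move to liquid.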

That transition is already materializing in the product strategies and R&D roadmaps of vendors like Accelsius and CoolIT. Whether it’s the 4,500W socket-level cooling demonstrated by Accelsius or the 1.5 MW row-based CDU performance of CoolIT’s CHx1500, the new generation of liquid cooling systems is being engineered to align directly with the rack-level demands cited by Dell’Oro.

The report also highlights geographic diversification, with North America, EMEA, and Asia Pacific (ex-China) leading growth.

Meanwhile, the study indicates that colocation providers—long relegated to trailing innovation curves—are now poised to take a central role in hosting inferencing infrastructure. This shift underscores the growing importance of flexible, serviceable, and efficient cooling platforms that can be deployed rapidly in shared environments.

From Cooling to Capex: The Infrastructure Flywheel Is Spinning Up

Together, these announcements and forecasts underscore a broader thesis taking shape across the AI infrastructure ecosystem: Thermal and power innovations are no longer trailing indicators of IT change—they are leading enablers of what's next.

Liquid cooling is no longer just a specialty tech for labs and proof-of-concepts. It is now an investment-grade infrastructure category, validated by performance data, embraced by hyperscalers, and tracked in five-year market forecasts. For OEMs, colos, and cloud providers, the question is no longer whether to adopt advanced cooling, but how fast they can standardize it across their portfolios.

And with 600 kW racks and vertically oriented servers coming into view, the pressure is quite literally on.


At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI’s GPT-4.



About the Author

Matt Vincent

A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.

About the Author

DCF Staff

Data Center Frontier charts the future of data centers and cloud computing. We write about what’s next for the Internet, and the innovations that will take us there.
