The Immersion Supercomputer: Extreme Efficiency, Needs No Water

July 6, 2015
A supercomputer immersed in tanks of liquid coolant? This sci-fi scenario has created a real-world scientific computing powerhouse.

It sounds like science fiction: Take a supercomputer and immerse it in tanks of liquid coolant, which must be kept cool without the use of water. This sci-fi scenario has created a real-world scientific computing powerhouse.

The Vienna Scientific Cluster uses immersion cooling, dunking Supermicro servers into a dielectric fluid similar to mineral oil. Servers are inserted vertically into slots in the tank, which holds 250 gallons of ElectroSafe fluid, a coolant that transfers heat almost as well as water but doesn’t conduct an electric charge.

The system has emerged as one of the world’s most efficient supercomputers, as measured by Power Usage Effectiveness (PUE), the leading metric for the efficiency of data center facilities. The Vienna Scientific Cluster 3 system touts a mechanical PUE of just 1.02, meaning the cooling system overhead is just 2 percent of the energy delivered to the system. A mechanical PUE doesn’t account for energy loss through the power distribution system, which means the actual PUE would be slightly higher.

The end result: 600 teraflops of computing power uses just 540 kilowatts of power and 1,000 square feet of data hall space.
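To put those figures in perspective, here is a quick back-of-the-envelope sketch (my own arithmetic, not calculations from the article) that reuses the numbers quoted above to derive the cooling overhead and the cluster’s compute efficiency per watt:

```python
# Rough arithmetic based on the figures quoted above (illustrative only).
it_power_kw = 540.0        # power delivered to the IT equipment
mechanical_pue = 1.02      # cooling overhead of 2 percent
performance_tflops = 600.0

cooling_overhead_kw = it_power_kw * (mechanical_pue - 1)  # ~10.8 kW spent on cooling
total_load_kw = it_power_kw * mechanical_pue              # ~550.8 kW for IT plus cooling
gflops_per_watt = performance_tflops * 1_000 / (it_power_kw * 1_000)  # ~1.11

print(f"Cooling overhead: {cooling_overhead_kw:.1f} kW")
print(f"IT plus cooling load: {total_load_kw:.1f} kW")
print(f"Compute efficiency: {gflops_per_watt:.2f} gigaflops per watt")
```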

“We are very impressed by the efficiency achieved with this installation,” said Christiaan Best, CEO and founder of Green Revolution Cooling, which designed the immersion cooling system. “It is particularly impressive given that it uses zero water. We believe this is a first in the industry.”

Why Liquid Cooling Matters

Liquid cooling can offer clear benefits in managing compute density and may also extend the life of components. The vast majority of data centers continue to cool IT equipment using air, while liquid cooling has been used primarily in high-performance computing (HPC). With the emergence of cloud computing and “big data,” more companies are facing data-crunching challenges that resemble those seen in the HPC sector, which could make liquid cooling relevant for a larger pool of data center operators.

Last fall at the SC14 conference, a panel of HPC experts outlined their expectations for a rapid expansion of liquid cooling that may extend beyond its traditional niches. At Data Center Frontier we’ll be tracking this transition, and keeping readers posted on relevant innovations in liquid cooling, such as the water-less implementation in Vienna.

These enclosures house the 2,020 compute nodes of the Vienna Scientific Cluster 3, which are immersed in liquid coolant. (Photo: Green Revolution Cooling)

The Vienna Scientific Cluster combines several efficiency techniques to create a system that is stingy in its use of power, cooling and water.

Water management is a growing priority for the IT industry, as cloud computing concentrates enormous computing power in server farms supported by cooling towers, where water heated by the data center is cooled, with the heat removed through evaporation. Most of the water is returned to the data center cooling system, while some is drained out of the system to remove sediment.

The fluid temperature in the immersion tank is maintained by a pump and heat exchanger, which is usually connected to a standard cooling tower. The Vienna Scientific Cluster instead uses a closed-loop dry cooler as the final method of heat rejection, requiring no water at all. Energy use may rise slightly in the summer, but the PUE should still remain near the 1.1 to 1.2 level seen among leading hyperscale data centers.
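As a minimal sketch of why the PUE creeps up when the dry cooler works harder, the relationship can be expressed as the ratio of total mechanical load to IT load; the pump and fan wattages below are hypothetical placeholders, not measurements from the Vienna installation:

```python
def mechanical_pue(it_kw: float, pump_kw: float, fan_kw: float) -> float:
    """Mechanical PUE = (IT load + cooling load) / IT load."""
    return (it_kw + pump_kw + fan_kw) / it_kw

# Hypothetical pump and fan loads for illustration only -- not VSC-3 data.
it_kw = 540.0
print(round(mechanical_pue(it_kw, pump_kw=6.0, fan_kw=5.0), 3))   # mild weather: ~1.02
print(round(mechanical_pue(it_kw, pump_kw=6.0, fan_kw=25.0), 3))  # hot day: ~1.06
```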

The novelty of the Vienna design is that it combines a water-less approach with immersion cooling, which has proven effective for cooling high-density server configurations, including high-performance computing clusters for academic computing, seismic imaging for energy companies, and even bitcoin mining.

Breaking the CRAC Habit

While not seen often in today’s enterprise and cloud data centers, liquid cooling isn’t new. If you’ve been around the industry for a few years, you’ll recall the days when water-cooled mainframes were standard in corporate data centers. But that soon shifted to racks of servers cooled by air using the familiar “hot aisle/cold aisle” design seen in most data centers today, with water chilling loops confined to the air handlers and “CRACs” (computer room air conditioners) housed around the perimeter of the data hall.

The alternative is to bring liquids into the server chassis to cool chips and components. This can be done through enclosed systems featuring pipes and plates, or by immersing servers in fluids. Some vendors integrate water cooling into the rear-door of a rack or cabinet.

Immersion takes a different approach, sinking the equipment in liquid to cool the components.

Green Revolution Cooling is seeking to build awareness of the benefits of immersion cooling through displays such as this exhibit at SC14 in New Orleans. (Photo: Rich Miller)

Green Revolution has been at the forefront of the recent resurgence of interest in immersion. In addition to supporting extreme power density, immersion cooling offers potential economic benefits by allowing data centers to operate servers without a raised floor, computer room air conditioning (CRAC) units or chillers. It also eliminates the need for server fans, which can be power hogs.

The VSC-3 was installed in 2014, with Green Revolution Cooling working alongside Intel, ClusterVision, and Supermicro. It supersedes the VSC-2 cluster, which used a rear-door cooling solution that achieved a mechanical PUE of 1.18. VSC-3 features 2,020 compute nodes, each with 16 processor cores, housed in the CarnotJet tanks.
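For context, dividing the headline figures through by the node count gives a rough sense of scale; these per-node numbers are my own arithmetic rather than published specifications:

```python
# Per-node figures derived from the numbers quoted in the article (approximate).
nodes = 2_020
cores_per_node = 16
it_power_kw = 540.0
performance_tflops = 600.0

total_cores = nodes * cores_per_node                         # 32,320 cores
watts_per_node = it_power_kw * 1_000 / nodes                  # ~267 W per node
gflops_per_core = performance_tflops * 1_000 / total_cores    # ~18.6 gigaflops per core

print(total_cores, round(watts_per_node), round(gflops_per_core, 1))
```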

The Cost Component of Cooling

Liquid cooling often requires higher up-front costs, which can be offset by savings over the life of a project. Economics were a key driver for the Vienna design.

“The value proposition (of the GRC system) was extremely impressive,” said Christopher Huggins, Commercial Director at ClusterVision, a leading European HPC specialist. “The whole data center and cluster was far less expensive than it would have been with any other cooling solution on the market. We are certain we will be using the GRC solution on more projects in the future.”

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
