Will high-density, liquid-cooled racks become more common in data centers? That’s today’s topic as we continue our Data Center Executive Roundtable, a quarterly feature showcasing the insights of thought leaders on the state of the data center industry, and where it is headed. In today’s discussion, our panel of experienced data center executives – Iron Mountain’s Michael DeVito, Chris Sharp of Digital Realty, Kristen Kroll-Moen from Chatsworth Products, Intel’s Jeff Klaus, Gary Niederpruem of Vertiv, and Amber Caramella of Netrality and Infrastructure Masons – discuss trends in rack power density and their implications for design and operations.
The conversation is moderated by Rich Miller, the founder and editor of Data Center Frontier.
Data Center Frontier: Last year several studies indicated that rack density level is increasing. How might the growing use of artificial intelligence and edge computing impact rack density and the world of data center cooling?
Gary Niederpruem: What we are seeing from our customers is that average rack densities aren’t rising dramatically for many operators. But in select applications, such as AI, extremely high densities are becoming more common. That’s created an interesting environment where you have some users with relatively stable densities and some with densities that weren’t even practical 10 years ago. This is occurring both within the core data center and on the edge.
From a cooling perspective, operators need the ability to efficiently cool facilities that support standard-density racks (5 kW to 8 kW), high-density racks (30 kW and higher), or some mix of both. Key to accomplishing that is being able to match the right cooling technology to the application.
Vertiv is enabling this by offering a range of different technologies that include air-based, compressor-based and liquid-based cooling. That has given high-performance computing facilities, for example, the flexibility to design racks around their specific requirements and then adapt the appropriate cooling technology rather than designing racks based on the capacity of their cooling system.
Kristen Kroll-Moen: As rack density increases, airflow management becomes critical in traditional air-cooled facilities. In the late 2000s, the industry was projecting significant rack density increases. CPI conducted lab tests to observe and verify the performance of airflow management technology, and we discovered that air can cool very dense racks, up to 30 kW, with disciplined airflow management. Since then, there have been further hardware improvements: hard drives have transitioned to solid state, power supplies have become more efficient, and equipment operating ranges have widened.
These improvements provide even more opportunity to accommodate new computing with traditional airflow cooling models. Where space is limited, perhaps in edge sites or where there is a high-density compute node, an alternative is to shift to liquid cooling, either supplemental indirect cooling or direct liquid cooling. The challenge for enterprise operators is the availability of off-the-shelf direct liquid cooled solutions.
Amber Caramella: As artificial intelligence (AI) and edge computing proliferate, rack density will continue to increase. More data-intensive workloads naturally require more compute power, which increases the amount of electricity used by servers and the amount of heat the servers produce. This makes powering and cooling data centers more expensive and increases their carbon footprint.
Conventional data center cooling methods will not scale under the processing demands of AI, 5G wireless, the Internet of Things (IoT), and the rise of smart cities. Luckily, there are a growing number of new approaches to cooling that actually reduce energy consumption and costs. As data center needs transform, dynamic infrastructure is needed to respond to high, mixed, and variable power densities, enabling environments to evolve without stranding capacity. Newer cooling systems are being developed that are purpose-built for data centers.
These and other promising new cooling methods will be needed to ensure that data centers can keep up with data processing demands and become more green and sustainable.
Chris Sharp: With the explosion of AI and IoT in the enterprise, data gravity has become one of the biggest challenges to successful digital transformations. The explosive growth of data means it is now heavier, denser, and more expensive to move. At the same time, these new technologies require data centers to support a higher level of computational power, electricity usage, and heat generation, requiring specialized power and cooling techniques that aren’t available in the enterprise basement.
Organizations across all industries are using AI to meet business challenges and increase efficiency. It’s also important to note that not every colocation facility is prepared to support these compute-intensive technologies in a multi-tenant environment. Per-rack power demands for AI can easily and regularly exceed what standard data centers can deliver. To put this in context, average power per rack is around 7 kW, but AI applications can pull more than 30 kW per rack. As per-rack power demands rise, so does the need for highly efficient cooling.
One of the ways data center operators are addressing these data-intensive technologies is through various next-generation cooling technologies, including liquid cooling and direct air cooling. While it isn’t the right fit for all workloads, liquid cooling enables ultra-high-density equipment to be deployed in otherwise low- or medium-density facilities, essentially retrofitting a data center for future applications.
As part of our commitment to lead the data center into the future, we’re partnering with companies like Submer Technologies to help customers evaluate the potential applications of new cooling technologies and support their future data center infrastructure needs.
Michael DeVito: At a high level, AI requires a lot of processing. It must go through many different data sets quickly and requires algorithms capable of processing that data.
Along with the demand for low latency and high processing comes greater compute capability. This will increase the power required at the rack level. Once you increase power at the rack level, you must be able to cool it.
As the need for greater cooling grows, data centers must ensure they are cooling efficiently. As cooling consumption rises, operators need to be cognizant of any waste and operate in the most sustainable manner possible. Finding sustainable sources of power is key.
Jeff Klaus: AI and edge are driving a lot of growth, although I see the density increase as more a function of advancements in cooling and materials that allow for robust deployments. AI and edge do benefit from higher density, because it reduces the overhead of sprawl.
NEXT: Our panel discusses the data center industry’s progress on diversity.