Progress on Aisle Containment for Data Center Cooling

Jan. 7, 2016

Today we continue our Data Center Executive Roundtable, a quarterly feature showcasing the insights of thought leaders on the state of the data center industry, and where it is headed. In today’s discussion, our panel of three experienced data center executives – Jakob Carnemark of Aligned Data Centers, Robert McClary of FORTRUST, and James Leach of RagingWire Data Centers – will examine progress in data center cooling strategies using aisle containment. The conversation is moderated by Rich Miller, the founder and editor of Data Center Frontier.

Aisle Containment As A Data Center Cooling Strategy

Data Center Frontier: Airflow containment is an established strategy for optimizing data center cooling. How would you assess the adoption of containment? Does it differ between the hyperscale, enterprise and multi-tenant markets?

Robert McClary, FORTRUST

Robert McClary: Here’s the key: any time hot and cold air mix in a data center anywhere other than across the heat sinks of the IT equipment, it’s inefficient, because you’re doing twice the work. Air containment and airflow dynamics are absolutely strategies for optimizing data center cooling. You want the cooling going to the hardware, and across the hardware’s internal cooling elements, in the most effective and efficient way. Cooling that goes anywhere else just breeds inefficiency.
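McClary’s point about mixing can be made concrete with basic sensible-heat arithmetic (q = ṁ · cp · ΔT). The sketch below is our own rough illustration, not from the panel: it assumes standard air properties and the simplified model that supply air bypassing the racks does no useful cooling.

```python
# Rough sensible-heat sketch of why hot/cold air mixing is wasteful.
# Assumed, illustrative constants -- not from the article.
AIR_DENSITY = 1.2   # kg/m^3, air at roughly room temperature and sea level
AIR_CP = 1.006      # kJ/(kg*K), specific heat of air

def cooling_delivered_kw(airflow_m3_s: float, delta_t_k: float,
                         bypass_fraction: float) -> float:
    """Sensible cooling actually absorbed by the IT gear: q = m_dot * cp * dT,
    discounting the fraction of supply air that bypasses the racks and mixes
    with the hot return stream."""
    mass_flow = airflow_m3_s * AIR_DENSITY          # kg/s
    return mass_flow * AIR_CP * delta_t_k * (1.0 - bypass_fraction)

# The same fan energy moves 10 m^3/s across a 12 K rise; 30% bypass
# throws away nearly a third of the cooling the fans paid to deliver:
print(cooling_delivered_kw(10, 12, 0.0))   # ~144.9 kW with full containment
print(cooling_delivered_kw(10, 12, 0.3))   # ~101.4 kW with 30% mixing
```

The airflow, temperature rise, and bypass figures are hypothetical; the takeaway is that every unit of bypassed air is fan and chiller work spent on nothing.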

We should start designing and building data centers that are capital, energy and resource efficient and are designed for long-term operations. We do that by thinking about the IT hardware the customer uses, all the way up through to the utility source. Everything must be optimized and able to adapt to what is occurring at the end-user hardware. As an industry, we don’t design data centers with the end user or the IT equipment stack in mind, and we have to start doing that.

James Leach: We are seeing a shift in the data center cooling conversation from economization to containment.

Over the last five years, the cooling conversation was between data center providers and data center technology suppliers, working to improve the performance of the data center by designing and implementing sophisticated economization systems that take advantage of local weather conditions. The most common example is using free air cooling when the outside temperature and humidity allow.

James Leach, RagingWire Data Centers

Today, the cooling conversation is between data center providers and data center buyers, working to design and implement targeted containment systems that meet the unique requirements of the customer. This discussion is typically based on computational fluid dynamics (CFD) analysis to understand air flows within the data center facility and the customer’s computing environment. Currently, the most common approaches are cold-aisle containment for newer data centers with high ceilings and good air flow, and hot-aisle containment (chimneys) for older data centers with lower ceilings that need to force the warm air out of the building.

The difference between hyperscale, enterprise, and multi-tenant containment adoption is largely driven by the design of the data center and the nature of the applications.

Hyperscale data centers tend to be optimized for well-defined, consistent systems configurations where the targeted cooling systems are built-in. In these environments, we are seeing the emergence of liquid cooling, in-chassis cooling, in-rack cooling, and rear-door heat exchangers.

Enterprise data centers typically must support legacy systems such as mini-computers and mainframes that have specialized cooling requirements. These facilities are often older (greater than 10 years) and may not have been maintained or upgraded over time, leaving them challenged to support higher-density deployments. We tend to see hot-aisle containment systems in these environments as a “bolt-on” to legacy systems.

Multi-tenant colocation data centers are typically newer (less than 10 years old) and have been upgraded over time to support higher density deployments. Colo providers should work closely with their customers to support containment systems tailored to their unique environments.

Jakob Carnemark, Aligned Data Centers

Jakob Carnemark: For many data center operators the cost to renovate an existing site to take advantage of new containment technology is prohibitive. When it comes to reducing energy consumption, each watt of waste translates into lower service margins.

Hyper-scale operators and some progressive enterprise users are acutely aware of this and have spent the capital to ensure their data centers are optimized for delivering the greatest output with the least waste.

Multi-tenant operators tend to suffer the most as they have less control over the equipment and applications running inside their data centers. For them, containment can prove a challenge.

We recognized this hurdle early on and engineered our data centers with a patented and proven heat removal technology that consumes up to 85 percent less water with a nominal power draw. As a result, we are able to guarantee our clients a 1.15 PUE. For some of our large clients, the cost savings will be substantial.
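For readers unfamiliar with the metric, PUE (Power Usage Effectiveness) is simply total facility power divided by IT equipment power, so a guaranteed 1.15 means 0.15 W of overhead per watt of compute. The sketch below is a hypothetical illustration of why that matters at scale; the 1.15 figure comes from the article, but the load sizes and the 1.8 comparison point are our own assumptions.

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# The 1.15 figure is from the article; all other numbers here are hypothetical.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Ratio of total facility power to IT load; 1.0 is the theoretical ideal."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# A facility drawing 1,150 kW to serve a 1,000 kW IT load runs at PUE 1.15:
print(pue(1150.0, 1000.0))  # 1.15

def overhead_kwh_per_year(pue_value: float, it_load_kw: float) -> float:
    """Annual non-IT (cooling, distribution, lighting) energy, 8,760 h/year."""
    return (pue_value - 1.0) * it_load_kw * 8760

# For a 1 MW IT load, moving from a legacy PUE of 1.8 to 1.15 avoids
# several million kWh of overhead energy per year:
print(overhead_kwh_per_year(1.8, 1000) - overhead_kwh_per_year(1.15, 1000))
```

The arithmetic makes Carnemark’s margin point tangible: every point of PUE above 1.0 is energy billed to the facility that never reaches a server.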

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
