As Power Densities Rise, Providers Lead With Cooling

April 14, 2017
New hardware for cloud computing and machine learning is bringing beefier workloads into the data center, creating opportunity for service providers specializing in high-density cooling.

SANTA CLARA, Calif. – If the data center industry had a Facebook profile, its relationship with density would be … “It’s complicated.”

New hardware for cloud computing and machine learning is bringing beefier workloads into the data center. Meanwhile, overall rack densities are trending slightly higher, but not at the rate all this new data-crunching might suggest.

The disconnect, according to Ben Coughlin, is created by the industry’s conservative approach to managing space and compute power. Coughlin, the co-founder and Chief Financial Officer of Colovore, says many data center managers are spreading out rather than densifying – breaking up workloads across more cabinets rather than trying to manage fewer racks that are fully loaded with IT equipment.

“If you compressed the IT load into those empty spaces (in cabinets), you’d find much higher rack densities,” said Coughlin.
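
As a back-of-the-envelope illustration of Coughlin’s point, consider the same IT load spread thin versus consolidated; the figures in this sketch are hypothetical, not Colovore numbers:

    # Hypothetical illustration: the same IT load spread out vs. consolidated.
    # These numbers are made up for illustration; they are not Colovore figures.

    total_it_load_kw = 200.0   # total IT load to be deployed

    # Spread-out deployment: 40 cabinets, each only partially filled
    spread_cabinets = 40
    print(total_it_load_kw / spread_cabinets)   # 5.0 kW per cabinet

    # Consolidated deployment: the same load in 10 fully loaded cabinets
    dense_cabinets = 10
    print(total_it_load_kw / dense_cabinets)    # 20.0 kW per cabinet

The total heat generated is identical in both cases; what changes is how much of it the cooling system must remove from any single rack footprint.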

Focus on ‘Footprint Efficiency’

Colovore is seeking to address this problem of “footprint efficiency” with a Santa Clara colocation facility designed to support higher rack densities. “We came to market with the idea of ‘what if people could fill their racks,’ ” said Colovore President and co-founder Sean Holzknecht. “Our goal is to build the density that allows people to fill their racks again. People say ‘the density thing didn’t happen.’ In a lot of cases, it’s because people aren’t filling their racks to the top.”

The Colovore team says the growth of cloud computing and artificial intelligence (AI) is bringing more high-density workloads into data centers, and customers are increasingly willing to pack more equipment into their cabinets and to seek out facilities that can cool them. The issue is coming to the fore in markets with a limited supply of data center space, like Santa Clara, placing a premium on getting the most mileage out of every rack.

“We see plenty of opportunity,” said Coughlin, noting that Colovore has filled its first two megawatts of capacity and is about to bring another two megawatts online. The first phase was designed to handle power densities of 20 kilowatts (kW) per cabinet, but the new space will be engineered for 35 kW per cabinet.

“We’ve been pushed on this,” he said. “Some of the highest performing clients are pushing higher densities.”

Most of these customers aren’t pushing Colovore to its limits. Rack densities average 11 kW at present, with some clients hitting 18 kW. Coughlin expects that average to rise to 13 kW in the next phase, and some customer racks may approach 30 kW or more.
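
A rough sketch of the capacity arithmetic behind those figures, using only the phase sizes and densities quoted above (and ignoring the cooling and redundancy overheads a real design must budget for):

    # Rough capacity math from the figures in this article; real phases
    # also budget for cooling and redundancy, which this sketch ignores.
    phase_capacity_kw = 2000.0     # one 2 MW phase of critical IT load

    design_density_kw = 20.0       # first-phase design limit per cabinet
    average_density_kw = 11.0      # observed average draw per cabinet

    # Cabinets the phase supports if every rack runs at the design limit
    print(phase_capacity_kw / design_density_kw)          # 100.0 cabinets

    # At the observed 11 kW average, the same power envelope stretches further
    print(round(phase_capacity_kw / average_density_kw))  # ~182 cabinets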

Cloud, AI Changing the Game

Over the past decade, there have been numerous predictions of the imminent arrival of higher rack power densities. Yet extreme densities remain rare, seen primarily in high-performance computing (HPC). The consensus view is that most data centers average 3 kW to 6 kW a rack, with hyperscale facilities running at about 10 kW per rack.

That’s changing with the emergence of power-intensive hardware like graphics processing units (GPUs) and other specialized chips for AI and machine learning workloads.

Colovore is among a small but growing number of multi-tenant data center providers that are optimizing for high-density customers. “As the industry adapts to these designs, customers will use it,” said Peter Harrison, Colovore’s Chief Technical Officer. “The customers don’t value engineering because the (data center) industry isn’t adapting as fast as they are.”

Colovore’s Santa Clara data center uses racks equipped with water-chilled rear-door cooling units from Vertiv (previously Emerson Network Power). LinkedIn is using rear-door cooling units from Motivair for its new facility near Portland, while newcomer Nautilus Data Technologies is planning to offer racks with rear-door heat exchangers from ColdLogik.
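
For a sense of what a water-chilled rear door has to move, the common chilled-water rule of thumb (q in BTU/hr ≈ 500 × GPM × ΔT in °F) gives a ballpark flow per rack; the rack loads and temperature rise below are illustrative assumptions, not specifications from any of these vendors:

    # Ballpark water flow for a rear-door heat exchanger, using the common
    # rule of thumb q [BTU/hr] ~= 500 * GPM * delta_T [F] for chilled water.
    # Rack loads and delta-T below are assumptions, not vendor specifications.

    BTU_PER_HR_PER_KW = 3412.0

    def water_gpm(rack_kw, delta_t_f):
        """Approximate water flow (gallons per minute) to absorb rack_kw."""
        return rack_kw * BTU_PER_HR_PER_KW / (500.0 * delta_t_f)

    print(round(water_gpm(20.0, 15.0), 1))   # ~9.1 GPM for a 20 kW rack
    print(round(water_gpm(35.0, 15.0), 1))   # ~15.9 GPM for a 35 kW rack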

Colo Providers Lead With Containment

Several large colocation providers have been working the high-density space for years. These include the Switch SUPERNAPs in Las Vegas (and now Reno and Michigan as well) and CyrusOne, which built its business by hosting seismic exploration data processing for oil and gas firms in the Houston area.

Another colocation company that sees an opportunity in high-density workloads is ViaWest, which operates 29 data centers across the Western U.S. and Canada.

“There are applications that require this kind of high-density cooling, and we know that densities are going to increase,” said Dave Leonard, the Chief Data Center Officer at ViaWest. “We especially see the emerging use of hyper-converged computing architectures, which are capable of extremely dense deployments. We want to provide maximum flexibility for our clients so they can take any path in computing infrastructure that they want to.”

The primary high-density strategy for most multi-tenant providers is containment, which creates a physical separation between cold air and hot air in the data hall. ViaWest is taking a slightly different approach that combines partial containment with instrumentation and monitoring.

The company created a test cage outfitted with load banks (equipment that simulates an electrical load for purposes of testing and commissioning) at its Plano, Texas data center to demonstrate its capabilities.

“The cold aisle air is managed in a few key ways,” said Leonard. “We have standard cabinet blanking in place to avoid hot and cold air mixing through the cabinets. We put a door on the cold aisle end to avoid hot/cold air mixing around the end of the cabinet rows, and we put a two-foot high bathtub (perimeter) around the top of the cold aisle to keep the cold air in the cold aisle, and not allow hot/cold air mixing across the top of the rows. We also used a six-foot cold aisle with floor tile grates to allow enough CFM of cold air into the cold aisle.”
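
The “enough CFM” Leonard mentions can be estimated with the standard sensible-heat rule of thumb for air (q in BTU/hr ≈ 1.085 × CFM × ΔT in °F); the rack loads and temperature rise here are assumptions for illustration:

    # Approximate cold-aisle airflow using the sensible-heat rule of thumb
    # q [BTU/hr] ~= 1.085 * CFM * delta_T [F]. Inputs are illustrative.

    BTU_PER_HR_PER_KW = 3412.0

    def required_cfm(rack_kw, delta_t_f=20.0):
        """Airflow (cubic feet per minute) to carry away rack_kw of heat."""
        return rack_kw * BTU_PER_HR_PER_KW / (1.085 * delta_t_f)

    print(round(required_cfm(11.0)))   # ~1730 CFM for an 11 kW rack
    print(round(required_cfm(18.0)))   # ~2830 CFM for an 18 kW rack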

ViaWest leaves the top of the cold aisle open, rather than enclosing it with ducting, as is the case with many high-density solutions. The company monitors air pressures and temperatures at multiple points within the environment to ensure the equipment is being cooled properly.
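
A minimal sketch of what such monitoring logic might look like; the sensor layout, thresholds, and inlet-temperature range here are assumptions (the range approximates ASHRAE’s recommended A1 envelope), not details of ViaWest’s actual system:

    # Minimal sketch of aisle monitoring logic. Sensor names, thresholds and
    # the inlet-temperature range are assumptions, not ViaWest's system.

    INLET_TEMP_RANGE_F = (64.4, 80.6)   # roughly ASHRAE's recommended A1 range
    MIN_AISLE_PRESSURE_PA = 2.5         # hypothetical cold-aisle overpressure

    def check_cold_aisle(inlet_temps_f, aisle_pressure_pa):
        """Return a list of alarm strings for out-of-range readings."""
        alarms = []
        lo, hi = INLET_TEMP_RANGE_F
        for i, t in enumerate(inlet_temps_f):
            if not lo <= t <= hi:
                alarms.append(f"sensor {i}: inlet {t:.1f} F out of range")
        if aisle_pressure_pa < MIN_AISLE_PRESSURE_PA:
            alarms.append(f"aisle pressure {aisle_pressure_pa:.1f} Pa too low")
        return alarms

    print(check_cold_aisle([72.0, 75.5, 83.0], 1.8))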

The ViaWest approach supports higher density in each rack, but it also consumes more floor space than standard layouts, as it features a six-foot-wide cold aisle.

Cabinet-Level Cooling Solutions

Several high-density specialists are using custom cabinets, a group that includes San Diego provider ScaleMatrix, which effectively shrinks the data center into a single cabinet by containing and compartmentalizing workloads.

The ScaleMatrix Dynamic Density Control cabinet is an extra-wide enclosure with two compartments: a bottom compartment housing a four-post rack (with extra depth to accommodate HPC servers), and a top compartment containing a heat exchanger. Air is recirculated within the cabinet, with cool air delivered to server inlets via an 8-inch air plenum in the front of the rack. When exhaust heat exits the back of the equipment, it rises to the cooling compartment and is cooled and recirculated.

GPU hosting specialist Cirrascale hosts some equipment with ScaleMatrix, but has also patented a vertical cooling design in which cold air from under the raised floor enters the bottom of the cabinet directly through a perforated tile, flows vertically through the cabinet, and is exhausted into a ceiling plenum.

Servers immersed in a liquid cooling solution from Green Revolution Cooling. (Photo: Green Revolution)

Over the past year we’ve profiled several companies developing liquid or refrigerant cooling systems for ultra-high densities. The vast majority of data centers continue to cool IT equipment with air, while liquid cooling has been used primarily in HPC. With the emergence of cloud computing and “big data,” more companies face data-crunching challenges resembling those of the HPC sector, which could make liquid cooling relevant to a larger pool of data center operators.

Some recent examples:

  • Ebullient Cooling chills processors with Novec 7000, a liquid coolant from 3M that has proven popular in immersion cooling solutions for the bitcoin market. Instead of dunking servers in a bath, Ebullient delivers the dielectric fluid directly to the processor, using a piping system to bring the liquid inside the server chassis.
  • Green Revolution Cooling submerges servers in a dielectric fluid similar to mineral oil. Servers are inserted vertically into slots in a tank filled with 250 gallons of ElectroSafe fluid, which transfers heat almost as well as water but doesn’t conduct an electric charge.
  • The Aquarius server from Aquila Systems is a new offering that brings warm-water liquid cooling to hyperscale data centers built on Open Compute designs. The Aquarius system uses a cooling design from Clustered Systems in which water piping cools a flat plate atop the components.
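
One way to see why these liquid approaches handle density so much better than air is to compare volumetric heat capacities; the values below are textbook approximations, and the oil figures are generic rather than ElectroSafe’s published specs:

    # Why liquids beat air on density: volumetric heat capacity comparison.
    # Values are textbook approximations; the oil figures are generic, not
    # ElectroSafe's published specs.

    coolants_j_per_m3_k = {
        "air":         1.2e3,    # ~1.2 kg/m3 * ~1005 J/(kg*K)
        "mineral oil": 1.6e6,    # ~850 kg/m3 * ~1900 J/(kg*K)
        "water":       4.18e6,   # ~1000 kg/m3 * ~4180 J/(kg*K)
    }

    # Volumetric flow (m3/s) needed to remove 20 kW at a 10 K temperature rise
    heat_w, delta_t_k = 20_000.0, 10.0
    for name, cv in coolants_j_per_m3_k.items():
        flow = heat_w / (cv * delta_t_k)
        print(f"{name}: {flow:.4f} m^3/s")

The roughly thousand-fold gap between air and the liquids is why a pipe or a tank can quietly absorb a heat load that would otherwise demand a torrent of chilled air.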

This all adds up to a growing universe of designs and options for data center operators grappling with the growth of high-density workloads. Given the varying use cases and appetites for risk, the industry’s approach is likely to remain diverse and complicated for some time to come.

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
