Data Centers Adapt as Tenants Seek Options on Power Redundancy

Dec. 10, 2015
Several wholesale data center providers unveiled plans this week to offer data halls with less backup power infrastructure than traditional data centers, which typically offer multiple levels of emergency power in the event of a utility grid outage.

LAS VEGAS, Nev. – Not every server is mission-critical. Not every data center needs to be, either.

This week several service providers unveiled plans to offer data halls with less backup power infrastructure than traditional data centers, which typically offer multiple levels of emergency power in the event of a utility grid outage.

DuPont Fabros Technology will begin to offer a variety of levels of power redundancy in its data center leases, including an option for data halls supported by grid power (known as an “N” configuration). DFT, one of the largest players in the wholesale space, discussed its plans at the Gartner Data Center Summit at the Venetian in Las Vegas.

DFT is not alone in pursuing this design approach, known as variable resiliency. Verne Global, which operates a data center in Iceland, has just unveiled a new offering known as powerDIRECT, which provides low-resiliency hosting for applications that need lots of power density but can afford brief service interruptions. Sentinel Data Centers and Vantage Data Centers have also leased multiple megawatts of low-resiliency space this year, following similar moves by CyrusOne and C7 Data Centers.

The demand for variable resiliency space reflects growing sophistication among data center customers, who are seeking to manage their data center costs. Many will pay top dollar to support mission-critical applications and storage, but want more affordable options for non-critical workloads. The customer appetite for space with less power infrastructure poses a challenge for providers, which often build entire facilities with a consistent level of redundancy in their power distribution systems.

The Power of Ns and Tiers

Power is the lifeblood of any data center. Some terminology is helpful in understanding how these facilities deliver power to customers.

In data centers, N refers to the minimum amount of power infrastructure needed to deliver online services, usually conditioned utility power. In design terms, N+1 and N+2 are used to describe configurations in which the primary power distribution system is supported by redundant equipment to provide backup power during utility outages, like a UPS (uninterruptible power supply). A 2N configuration offers fully redundant power paths from the grid to the server.
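To make the arithmetic concrete, here is a minimal sketch (illustrative only; the module size and load are hypothetical, not drawn from any provider's specs) of how UPS module counts scale across these configurations:

```python
import math

def ups_modules(load_kw: float, module_kw: float, config: str) -> int:
    """Count the UPS modules needed for a given redundancy configuration.

    Assumes identical modules; the figures used below are hypothetical.
    """
    n = math.ceil(load_kw / module_kw)  # N: the bare minimum to carry the load
    if config == "N":
        return n
    if config == "N+1":
        return n + 1   # one spare module
    if config == "N+2":
        return n + 2   # two spare modules
    if config == "2N":
        return 2 * n   # a second, fully redundant set
    raise ValueError(f"unknown configuration: {config}")

# Hypothetical 2,000 kW data hall served by 500 kW UPS modules
for cfg in ("N", "N+1", "N+2", "2N"):
    print(cfg, ups_modules(2000, 500, cfg))
# N -> 4, N+1 -> 5, N+2 -> 6, 2N -> 8
```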

These redundancy options are outlined in the Tier System, a rating developed by the Uptime Institute that defines four tiers of infrastructure design. Most demand for enterprise data centers is focused in the N+1 or N+2 range, roughly equivalent to a Tier III facility. In recent years, hyperscale providers have begun to focus on redundancy in their networks, routing around data center equipment failures rather than firing up backup equipment.
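A rough way to see why a single spare module satisfies most enterprise demand (an illustrative calculation, not from the article): if each UPS module is independently available 99 percent of the time, the odds that enough modules are running rise sharply with each spare.

```python
from math import comb  # Python 3.8+

def availability(n_needed: int, n_total: int, a: float) -> float:
    """Probability that at least n_needed of n_total independent modules are up."""
    return sum(
        comb(n_total, k) * a**k * (1 - a) ** (n_total - k)
        for k in range(n_needed, n_total + 1)
    )

a = 0.99  # hypothetical per-module availability; real facilities have many other failure modes
print(f"N   (4 of 4 up): {availability(4, 4, a):.6f}")  # ~0.960600
print(f"N+1 (4 of 5 up): {availability(4, 5, a):.6f}")  # ~0.999020
print(f"N+2 (4 of 6 up): {availability(4, 6, a):.6f}")  # ~0.999980
```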

“You have providers building for 2N+2 (extreme redundancy) and others for N,” noted Bob Gill, Research Director at Gartner. “They’ve got to offer varying resiliency and density levels. We see providers bringing data centers online with new and unique offerings.”


More Power Options at DFT

DuPont Fabros, which operates 3 million square feet of data center space across the U.S., is introducing a new design as it fine-tunes its business strategy under new CEO Chris Eldredge, who took the helm in February. The company is introducing new lease options, moving beyond its traditional triple-net structure, and is updating its design to offer broader ranges of power density and resiliency.

“We’re seeing people asking for N,” said Scott Davis, Executive Vice President of Data Center Operations for DuPont Fabros. “We’ve designed a backbone that can serve (variable) into our compute rooms.”

Two huge data centers on the DuPont Fabros Technology cloud campus in Ashburn, Virginia. (Photo: Rich Miller)

DuPont Fabros has always built big, with phases of up to 400,000 square feet of space. The new design will deploy capacity in 150,000 square foot chunks, which will be subdivided into data halls of 8,000 to 10,000 square feet, each configured for the customer’s power reliability requirements.

“It is slightly more complicated to provide a variety of products,” said Davis. “It’s going to add more options, but also some more complexity. We’re going to have to change the way we do some things. We’ll have solutions already designed and ready to go for different configurations, with pre-established terms and pricing. But we may need to have some inventory of equipment on hand (especially power gear such as UPS units, PDUs and busways).”

Davis said DFT will also offer broader options on power density, responding to a wider range of customer workloads. “In our portfolio, we see a big gap between our 3kW per cabinet customers and our 15kW customers,” said Davis.

The new design has been in development for about six months, and it’s likely to be rolled out in DuPont Fabros’ new data center in Toronto.

HPC Drives New Thinking at Verne Global

Verne Global, which operates data centers in Iceland, is building a new facility near Reykjavik with a variable resiliency design. Tate Cantrell, the CTO of Verne Global, said the shift was a response to growing demand for high-performance computing (HPC) workloads that need lots of power and cooling, but aren’t mission-critical.

“HPC is putting pressure on data center design,” said Cantrell. “HPC has forced a case study in data center design for lower resiliency and higher density space, fundamentally spurring challenges in mechanical and electrical engineering design of traditionally ‘purpose built’ data centers.

“In the next 12 months, data center operators will be increasingly challenged with the task of delivering power to support both an HPC environment as well as traditional storage environments all under one roof. HPC will continue the trend to low resiliency options.”

Verne Global’s powerDIRECT product will offer an N design that can still support high density. In addition to supporting HPC and batch processing, the design may also be of interest to bitcoin specialists, whose power-intensive operations have led a number of firms to set up shop in Iceland, with its rich supply of green geothermal power.

Several bitcoin miners have built “hashing centers” featuring high-density hardware, low-reliability electrical infrastructure and off-the-shelf enclosures. Some service providers have adapted their designs to address the requirements of cryptocurrency customers.

Last year C7 Data Centers in Utah developed an “N-0.5” design offering high-density space with lower reliability and no service-level agreement. The room is cooled using ambient air and cold-aisle containment, with no UPS or generator backup.

CyrusOne’s Phoenix data center hosts a large customer running high-density equipment in immersion cooling tanks, delivering the space at a cost of $1.5 million per megawatt, significantly below the $7 million per megawatt the company typically spends on enterprise data center space with “five nines” of uptime. The company hasn’t identified the tenant, but it fits the profile of a cryptocurrency specialist.
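Taking those two per-megawatt figures at face value, the capex gap is easy to quantify (a back-of-the-envelope sketch; the build size is hypothetical):

```python
# Per-megawatt build costs cited above (CyrusOne Phoenix example)
LOW_RESILIENCY_COST_PER_MW = 1_500_000   # immersion-cooled, low-redundancy space
FIVE_NINES_COST_PER_MW = 7_000_000       # typical enterprise space with "five nines"

capacity_mw = 10  # hypothetical build size, for illustration only
low = capacity_mw * LOW_RESILIENCY_COST_PER_MW
high = capacity_mw * FIVE_NINES_COST_PER_MW
print(f"Low-resiliency build: ${low:,}")      # $15,000,000
print(f"Five-nines build:     ${high:,}")     # $70,000,000
print(f"Capex savings: {1 - low / high:.0%}") # roughly 79%
```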

Sentinel: Built for Flexibility

Variable resiliency is nothing new for Sentinel Data Centers, which operates large wholesale data centers in New Jersey, New York and North Carolina, and has offered reliability options for years.

“We believe we differentiate through flexibility,” said Todd Aaron, President of Sentinel. “We wanted to be able to offer a low reliability solution alongside the Tier III space, so our capacity is delivered in a topology where we can vary the reliability. We’ve always divorced physical space from capacity. None of our capacity is defined by power infrastructure, so we’re not constrained by the physical room.”

Aaron said interest in lower resiliency options has surged in the past 12 months. “By the end of this year, we’ll have about 5 megawatts of ‘transmission plus’ infrastructure online,” said Aaron. “All of the deals have been different use types.”

Sentinel’s view is that this approach won’t work everywhere. With more customer capacity relying on grid power, the reliability of the local utility becomes much more important.

“It’s all a function of having a robust utility and architecting your upstream power requirements,” said Aaron. “We believe our utility paradigm is the only way we can do this well. It’s one of the reasons we believe in investing in dedicated substations.”

Utility reliability was a factor in Facebook’s decision to build its first European data center in Lulea, Sweden, where the extraordinary sturdiness of the hydro-powered local grid allows it to operate with fewer generators. Similar concepts are being applied to a project from Salt River Project and BASELAYER, which connects the data center directly to bulk transmission lines and substations, whose higher reliability could allow cloud builders to operate without expensive backup equipment.

Specialty Niche or the Way of the Future?

As some providers customize offerings for variable resiliency, others are being more cautious. “You have fringes of the market interested in either end of the resiliency spectrum,” said Brian Warren, SVP of Engineering for CoreSite, which operates 2.8 million square feet of data center space. “The folks we’ve seen asking for lower resiliency might be enterprises doing testing and development work, or cloud service providers who have layers of resiliency built into their network architecture.

“If there’s space we’re going to develop and an enterprise wants a testing and development environment, we can work with that,” Warren continued. “But it’s situational, and not something we’re architecting for. The vast majority of the market demand is for dual distribution power.”

Others predict demand will continue to evolve. Cantrell of Verne Global says customers are becoming much more focused on applications, a trend that has long-term implications for data center design.

“Purpose built data centers are a thing of the past,” said Cantrell. “It’s no longer scalable nor is it flexible enough for data center operators to build infrastructure by application. The new approach will somewhat resemble the object oriented movement of programming where tool sets respond to on-demand needs.

“The paradigm for data center design will be to build a backbone of resiliency and power infrastructure that responds dynamically to the requirements of specific applications,” Cantrell continued. “This allows multiple applications to reside in the same location regardless of their redundancy and environmental control requirements. Data center designers are increasingly responsible for business delivery and efficiency and we don’t want to build infrastructure by application.”

In the meantime, don’t mistake “variable resiliency” for customers being less concerned about outages.

“Even when they say they’re OK with lower reliability, they still want uptime,” said Aaron.

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
