The Cloud is a Catalyst and It’s Also a Competitive Benchmark

May 24, 2021
The cloud is not only a catalyst. Because it’s a great option for most applications, it has become a competitive benchmark. Enterprise IT leaders must be good stewards of their company’s resources by deploying and managing infrastructure in facilities that meet the same benchmarks as the cloud service providers do. Insights from Chris Bair of Stream Data Centers.

In this edition of Voices of the Industry, Chris Bair, Senior Vice President of Sales and Leasing at Stream Data Centers, explores how the cloud is both a catalyst for change in data centers and a competitive benchmark.

Chris Bair, Senior Vice President of Sales and Leasing at Stream Data Centers

The answer to the question, “Where should this application live?” is always “It depends.” It depends on the end users (in particular, where they are), on the application architecture, and on regulatory constraints governing the application. It also depends on the total cost of ownership, flexibility, performance, and ‘righteousness’ offered by the various prospective application hosts.

Gartner analyst David Cappuccio put it well: “Workload placement is not only about moving to the cloud. It is about creating a baseline for infrastructure strategy based on workloads rather than physical data centers.”

In opening more possibilities for where applications can live, the cloud has served as a catalyst for evolution in on-prem and colocation data centers. That’s the convergence between enterprise and hyperscale requirements I described in my last missive here.

But the cloud is not only a catalyst. Because it’s a great option for most applications, it has become a competitive benchmark. Enterprise IT leaders are no longer only competing against the TCO, flexibility, performance, and righteousness standards set by their direct competitors. They also must be good stewards of their company’s resources by deploying and managing infrastructure in facilities that meet the same benchmarks as the cloud service providers do.

The metrics that matter – it’s no longer about proximity to HQ

It used to be that proximity (e.g., to corporate headquarters) was an enterprise infrastructure leader's top criterion in site selection decisions. Hyperscale data center operators were the first to break with that tradition in favor of application-level resiliency spread across multiple sites within a geographic region. Because network performance and cost have improved so dramatically worldwide, an application no longer needs to be near its end users to deliver a great user experience.

Instead, decisions about where the application should live, and with whom, can be based on:

  • Total cost of ownership (which is impacted by land cost and/or rent, energy cost, labor cost, and tax incentives)
  • Flexibility (especially with regard to scale)
  • Performance (which is impacted by resiliency and connectivity)
  • ‘Righteousness’ (namely, energy efficiency and use of renewables) – this is about doing the right thing for the planet and also maintaining a positive market perception
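As a toy illustration of how these four criteria can be traded off against one another, the placement decision can be sketched as a weighted score. All weights and scores below are hypothetical, invented for the example, not drawn from the article:

```python
# Illustrative workload-placement scoring; weights and scores are made up.
def score_option(scores: dict, weights: dict) -> float:
    """Weighted sum of per-criterion scores (0-10 scale)."""
    return sum(scores[c] * weights[c] for c in weights)

# Hypothetical priorities for one workload (must sum to 1.0).
weights = {"tco": 0.35, "flexibility": 0.25, "performance": 0.25, "righteousness": 0.15}

# Hypothetical ratings for three candidate homes for the application.
options = {
    "public_cloud": {"tco": 7, "flexibility": 10, "performance": 8, "righteousness": 9},
    "colocation":   {"tco": 8, "flexibility": 6,  "performance": 8, "righteousness": 7},
    "on_prem":      {"tco": 6, "flexibility": 4,  "performance": 7, "righteousness": 5},
}

best = max(options, key=lambda name: score_option(options[name], weights))
```

With these particular (invented) numbers the cloud wins, which is the benchmark effect in miniature: any on-prem or colo option has to score well on the same criteria to be chosen.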

Benchmarking on total cost of ownership

A list of the top data center markets in the U.S. today makes clear the divergence between proximity to corporate headquarters and data center demand:

(Source: Stream Data Centers)

Of course, TCO is not the sole driver of site selection decisions. In Silicon Valley, for example, data centers continue to flourish despite a relatively high TCO. The point is that there is no longer a correlation between the location of large corporate headquarters and the location of those enterprises’ data centers.

The rise of Northern Virginia and decline of the New York tri-state area as data center hubs serve as perfect illustrations of the prominence of TCO over proximity as a location determinant – now that low-latency, low-cost network connectivity is so easy to come by.

NoVA vs. New York/New Jersey

Northern Virginia owes its ‘unmatched’ connectivity to MAE-East, one of the United States’ four original Network Access Points, formed in 1992. With that connectivity, relatively low-cost land and power (and now, tax incentives), and proximity to vibrant labor markets, Northern Virginia had become the #1 data center market by the late 1990s, a title it has held ever since.

Right across the river from the financial epicenter of the world, New Jersey has been the go-to location for Wall Street firms’ data centers. Locating in Manhattan itself was prohibitively expensive from both a real estate and a power cost perspective, but until fairly recently network connectivity was not robust enough for Wall Street firms to locate their data centers just anywhere. They had to be close, so New Jersey it was.

That arrangement persists, but Wall Street firms now have more options. Network performance has improved so much that most applications no longer need to be in close geographic proximity to Wall Street. The New York Stock Exchange proved the point in late 2020, when for four days it operated with its production servers in Chicago rather than New Jersey.

The NYSE’s test was a response to a proposed tax on high-volume electronic trading in New Jersey. Nasdaq’s response to the tax proposal was a visit to Texas and meetings with the Lone Star governor to discuss possible incentives for a move to Dallas. The New Jersey governor has since backed away from his tax proposal, but the point has been made: even Wall Street firms that depend on extremely low latency no longer have to be in New Jersey to get it.
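The latency cost of that kind of move is easy to sanity-check with a back-of-envelope calculation. The sketch below assumes a roughly 1,300 km fiber route between northern New Jersey and Chicago (the straight-line distance is about 1,150 km, and real fiber routes run longer) and the standard refractive index of silica fiber; both figures are assumptions for illustration:

```python
# Back-of-envelope fiber propagation delay; route length is an assumption.
C_KM_PER_MS = 299_792.458 / 1000   # speed of light in vacuum, km per millisecond
FIBER_FACTOR = 1.0 / 1.47          # light travels ~1/1.47 of c in silica fiber

def round_trip_ms(route_km: float) -> float:
    """Round-trip propagation delay over a fiber route of the given length."""
    return 2 * route_km / (C_KM_PER_MS * FIBER_FACTOR)

print(f"{round_trip_ms(1_300):.1f} ms")  # ≈ 12.7 ms round trip NJ–Chicago
```

A round trip of roughly 13 milliseconds rules out strategies that depend on co-location with the matching engine, but it is well within tolerance for the vast majority of enterprise applications.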

Benchmarking on flexibility

Flexibility is perhaps even more important than TCO to an enterprise IT infrastructure leader’s ability to compete with the cloud. The cloud enables enterprises to “pay by the drink” (pay for just the capacity they use) and increase or decrease capacity on demand. That kind of flexibility is hard to match on-premises, which makes standalone infrastructure investments of any magnitude very hard to get right in an uncertain world.

Indeed, the ability to right-size deployments, rather than trying to forecast 15 or 20 years into the future, has many progressive IT infrastructure leaders working to ‘get out of the data center business’ by smartly combining cloud, internal data centers, and colocation, driving optimal efficiency without tying up resources unnecessarily.
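The “pay by the drink” arithmetic can be sketched with made-up numbers (the demand profile and rates below are hypothetical, not real cloud or colocation pricing): even at a substantial per-unit premium, paying only for used capacity can beat provisioning fixed capacity for peak demand.

```python
# Hypothetical figures only: not real cloud or colo pricing.
monthly_demand_kw = [40, 45, 50, 60, 80, 120, 110, 90, 70, 55, 45, 40]  # seasonal demand

fixed_rate = 100   # $/kW-month for dedicated capacity, which must be sized to peak
usage_rate = 140   # $/kW-month pay-as-you-go, a 40% per-unit premium

fixed_cost = max(monthly_demand_kw) * fixed_rate * 12        # pay for peak all year
usage_cost = sum(d * usage_rate for d in monthly_demand_kw)  # pay only for what's used
```

With this demand profile the pay-as-you-go total ($112,700) comes in below the fixed-capacity total ($144,000) despite the 40% premium, because the peak month is three times the trough. The flatter the demand curve, the more the math tilts back toward dedicated capacity.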

Benchmarking on performance

Proximity (e.g., to corporate headquarters) is not necessary for excellent end user performance as long as network connectivity – including interconnection options and dedicated cloud access – is strong. That’s why cloud strongholds are not always where corporate headquarters or even populations are concentrated, but are almost always where connectivity is excellent.

U.S. Internet Exchanges

Source: Stream Data Centers

Benchmarking on righteousness

The rationale for cloud providers, and enterprises, to pursue ‘righteousness’ is partly about cost (greater energy efficiency, all else equal, means lower TCO). It’s also about meeting corporate responsibility commitments such as carbon reduction goals. And market perception matters, too. (For data center operators, getting called out by Greenpeace is the stuff of nightmares.)

According to the most recent data from Uptime Institute, average PUE has declined considerably since 2006, though efficiency gains have flattened out over the last decade. Cloud providers are leading the massive improvements in data center energy efficiency: cloud data center PUEs are typically much lower than average, often 1.1 or lower.
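PUE (power usage effectiveness) is simply total facility energy divided by the energy delivered to IT equipment, so a facility at a PUE of 2.0 spends as much on cooling and other overhead as on compute. A minimal sketch, using hypothetical annual figures for a 1 MW IT load:

```python
# PUE = total facility energy / IT equipment energy; 1.0 would be perfect efficiency.
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    return total_facility_kwh / it_kwh

IT_KWH = 1_000 * 8_760  # a 1 MW IT load running for a year: 8,760,000 kWh

# Hypothetical facilities: a legacy site vs. a cloud-class site.
legacy_pue = pue(total_facility_kwh=17_520_000, it_kwh=IT_KWH)      # 2.0: overhead equals IT load
hyperscale_pue = pue(total_facility_kwh=9_636_000, it_kwh=IT_KWH)   # 1.1: ~10% overhead
```

In this example the same IT load consumes roughly 45% less total energy at a 1.1 PUE than at 2.0, which is why cloud-class facilities set the efficiency benchmark.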

That’s why, even as demands on the data center have increased by orders of magnitude, data center energy consumption has increased much less. Data centers are becoming more efficient.

(Source: Stream Data Centers)

Given that an enterprise can achieve sustainability goals through the cloud, any on-prem or colo alternative will be benchmarked against the righteousness of the cloud. That’s why access to renewable energy is a significant factor for many enterprise IT leaders making the “where should this application live?” decision.

Not surprisingly, many of the top data center markets are in states with particularly robust renewable energy markets:

Source: Stream Data Centers

Enterprise IT infrastructure leaders – and colos, too: You’re competing with the cloud

With the rise of the cloud, enterprise IT leaders now have a good third option in the “Where should this application live?” decision making process. Uptime and security are table stakes. The pressure is on enterprise IT infrastructure leaders to house applications where TCO and performance are optimized, while remaining flexible to quickly scale up and down, and to do it righteously.

Chris Bair is Senior Vice President of Sales and Leasing at Stream Data Centers. The company builds and operates data centers for the largest and most sophisticated enterprises and hyperscale cloud providers: 24 data centers since 1999, with 90% of its capacity leased to the Fortune 500. Connect with Chris.

About the Author

Voices of the Industry

Our Voice of the Industry feature showcases guest articles on thought leadership from sponsors of Data Center Frontier. For more information, see our Voices of the Industry description and guidelines.
