Roundtable: Are We Nearing the Speed Limit on Data Center Delivery?

June 19, 2018
Our Executive Roundtable debates data center construction and supply chain management. In an era of just-in-time cloud demand, can we go even faster?

Today we continue our Data Center Executive Roundtable, a quarterly feature showcasing the insights of thought leaders on the state of the data center industry, and where it is headed. In today’s discussion, our panel of experienced data center executives – Dana Adams of Iron Mountain, Joel Stone of RagingWire, Samir Shah of BASELAYER, Eric Ballard of Stream Data Centers, and Randy Rowland of Cyxtera – discuss the pace of innovation in data center delivery and supply chain management.

The conversation is moderated by Rich Miller, the founder and editor of Data Center Frontier.

Data Center Frontier: Cloud platforms are seeking to expand at an accelerated rate. What has this meant to the data center supply chain and construction practices? What are the areas where data center developers can innovate to further improve the speed of deployment (and help address the capacity planning challenge)? Or have we reached our “speed limit” for data center deployment?

Eric Ballard, Vice President, Network & Cloud, Stream Data Centers

Eric Ballard, Stream Data Centers: Cloud platforms today are expanding, consuming much of the available space in certain data center markets and creating demand in locations that had not even been on the road map. This has put pressure on data center inventory in many markets and has required developers to accelerate their growth in those locations.

Some of the larger constraints involve the infrastructure equipment that data centers require from suppliers. Depending on demand, these suppliers can become backlogged. Natural disasters in other parts of the world can also stress demand for these products, and all of this has to be taken into consideration.

Data center developers continue to innovate and optimize the supply chain, and will continue to improve how we do things. Even taking a day or two out of the delivery window can be significant for a project. More innovation will also come in modular data center construction, where more and more infrastructure is built and integrated offsite. Work that once had to wait for one phase of a project to complete can often be done in parallel, and it can be done inside an offsite plant that is not subject to weather days or site constraints.

These technologies and innovations will help the industry continue to move faster and deliver at shorter intervals. We continue to break the speed limit and find new ways to accelerate the design/build process.

Samir Shah, VP of Product Management, BASELAYER

Samir Shah, BASELAYER: There have been advancements in the time to deploy data center capacity. However, we have only scratched the surface in terms of speed and efficiency. When talking to individuals in the cloud industry, we often hear about their demand in terms of massive scale reaching hundreds of megawatts per site.

To improve deployment speed, cloud and data center companies need to work on forward-looking capacity planning models. This will allow for shorter overall lead times through traditional manufacturing best practices such as safety stock and inventory management.
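To make that concrete, here is a minimal sketch of the textbook safety-stock and reorder-point arithmetic Shah alludes to, applied to long-lead equipment. All figures are hypothetical, chosen only to illustrate the technique:

```python
import math

# Hypothetical safety-stock illustration for long-lead data center
# equipment (e.g., generators). All figures are invented examples.

z = 1.65                 # service-level factor (~95% cycle service level)
demand_mean = 4.0        # average units consumed per month
demand_std = 2.0         # standard deviation of monthly demand
lead_time = 9            # supplier lead time, in months

# Safety stock buffers demand variability over the supplier lead time.
safety_stock = z * demand_std * math.sqrt(lead_time)

# Reorder once on-hand plus on-order inventory drops to this level.
reorder_point = demand_mean * lead_time + safety_stock

print(f"safety stock:  {safety_stock:.1f} units")
print(f"reorder point: {reorder_point:.1f} units")
```

The longer and more variable the supplier lead time, the larger the buffer a provider must carry to keep delivery timelines predictable, which is why forward-looking demand models matter so much here.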

From a construction, real estate, and permitting perspective, standardization is the key to reducing cycle time. Adoption of standards such as UL-2755 would help local officials streamline the permitting process and improve delivery timelines.

Randy Rowland, President of Data Center Services at Cyxtera

Randy Rowland, Cyxtera: Cloud platforms expanding at an accelerated rate can be subdivided into two categories: hyperscale and all others. While hyperscale cloud providers are working with a variety of developers to construct specific-purpose facilities calling for dozens of megawatts and five- or six-digit square footage, all other cloud providers are shopping the colocation market for far smaller swaths of capacity with a need to grow incrementally on demand.

Colocation capacity is not designed for a specific purpose, but instead embraces flexibility to ensure any potential customer’s requirements can be met. In a colocation environment, the installed infrastructure is shared, and SLAs are established to satisfy a wide variety of potential tenants. Colocation’s flexibility and customer segment yield higher capital and operational expenses compared to the specific-purpose instance.

Time to construct is largely predicated on infrastructure availability. Generators and switchgear are not typically sitting in a warehouse ready for installation; they must be fabricated upon request. Colocation providers forecast capacity exhaustion and plan future builds to ensure capacity remains available to satisfy tenant growth. Hyperscale cloud providers must forecast their capacity requirements and work with their developers so that just-in-time infrastructure can be delivered and capacity built in time to satisfy growth. In both cases, the speed of deployment matters less than whether forecasting and planning are properly conducted, since they dictate how far in advance the build must begin.
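That planning reduces to simple back-scheduling arithmetic: work backward from the forecast capacity-exhaustion date through the fabrication and construction lead times. A minimal sketch, with every date and duration hypothetical:

```python
from datetime import date, timedelta

# Hypothetical back-scheduling sketch. Work backward from a forecast
# capacity-exhaustion date to the latest date the build must begin.
# All dates and durations are invented for illustration.

capacity_exhausted = date(2019, 6, 1)    # forecast: existing capacity full
fabrication = timedelta(weeks=26)        # switchgear/generator fabrication
construction = timedelta(weeks=20)       # site work through commissioning
contingency = timedelta(weeks=4)         # buffer for slippage

# Worst case assumes the phases run sequentially; overlapping them
# (e.g., off-site modular assembly) shortens the chain.
latest_start = capacity_exhausted - (fabrication + construction + contingency)

print(f"Begin procurement and construction no later than {latest_start}")
```

If the forecast slips or the lead times are underestimated, no amount of construction speed recovers the schedule, which is Rowland’s point.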

The innovation lies in the cloud provider’s ability to forecast their capacity requirements in sufficient time for the developer to arrange just-in-time infrastructure production. Colocation providers do this routinely, some better than others, just as some developers are better suited to coordinate just-in-time infrastructure fabrication, delivery, and installation.

Perhaps the real difference is the associated CapEx and OpEx. Hyperscale providers, enjoying lower CapEx and OpEx cost models, will compete well against all other cloud providers bearing the higher colocation cost model.

What’s needed is colocation that leverages an on-demand, software-powered architecture; otherwise colocation providers will face a speed limit. Software-defined, point-click-provision colocation takes the time to deploy dedicated IT environments from three to six months down to a matter of days. This covers everything from hardware design and procurement to colocation space selection and build-out, network provisioning, and hyperconverged infrastructure and hypervisor installation.

As decisions are made as to which of your applications belong in the public cloud and which are best operated out of a colocation data center, you shouldn’t have to compromise agility, flexibility or control of your infrastructure just to get the speed of cloud.

Joel Stone, RagingWire

Joel Stone, RagingWire: We definitely have not reached the speed limit for data center deployment. While there is a lot of data center construction underway, only a handful of large-scale providers have a well-developed supply chain of equipment and materials that can truly reduce time-to-market and increase return on invested capital. Our experience is that the best practices for data center construction and supply chain management are:

Creating a Vigorous Company Culture: Currently, many data center supply chains are focused on expensive, long-lead infrastructure equipment. These points of emphasis are joined by a comprehensive logistics management approach to ensure supply while minimizing upfront investment. However, the process shouldn’t stop there. A vigorous company culture that repeatedly asks what is constraining time-to-market – and then eliminates those constraints – is where ultimate optimization occurs. This approach applies across all aspects of development, including demand forecasting, land acquisition, permitting, services, structure, equipment, building materials, and contracting and compensation strategies. It is a never-ending battle, and time is the enemy.

Contracting & Collaboration: The industry is already seeing supply constraints, and in some cases “allocations” from product and service suppliers. Creative and long-term contractual relationships are key to reducing “commercial churn” and focusing the combined team’s energies on improving delivery efficiencies as well as securing supply priorities and guarantees.

Developing and Managing Multiple Critical Paths: Developing a delivery approach with many parallel activities will optimize delivery. For example, we use a modular integrator to assemble electrical and mechanical infrastructure in their quality-controlled assembly environment, parallel (or prior) to the building’s structure being raised. In conventional delivery, mechanical and electrical trades cannot begin rough-in until a structure is in place. Eliminating this dependency and moving off-site with parallel activities shortens time-to-market. There are many other opportunities to do this, but success is not possible without a healthy culture and contracting approach as discussed above.
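The schedule effect of that parallelism can be sketched with back-of-the-envelope numbers; the durations below are hypothetical, chosen only to illustrate the dependency Stone describes:

```python
# Hypothetical schedule comparison showing why off-site modular assembly
# shortens time-to-market. Durations in weeks are invented examples.

shell = 16          # raise the building structure
mep_rough_in = 12   # mechanical/electrical work (or off-site assembly)
fit_out = 8         # set modules, final integration, commissioning

# Conventional delivery: trades cannot begin rough-in until the shell is up.
serial_weeks = shell + mep_rough_in + fit_out

# Modular delivery: assembly proceeds off-site in parallel with the shell,
# so the schedule is driven by whichever track finishes last.
parallel_weeks = max(shell, mep_rough_in) + fit_out

print(f"conventional: {serial_weeks} weeks")    # 36
print(f"modular:      {parallel_weeks} weeks")  # 24
```

Removing the shell-before-rough-in dependency collapses two sequential phases into one concurrent one, which is where the months come out of the schedule.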

Dana Adams, Vice President and GM of Data Centers, Iron Mountain

Dana Adams, Iron Mountain: There is no doubt that speed to market is a key measure of success for any data center development. Customers have tight timeframes for deployment and solutions are constantly lagging behind internal demand drivers, which keep accelerating at an astronomical pace, especially amongst the major cloud service providers.

The data center construction industry in the US is quite mature, so the time it takes to physically construct a data center may be close to hitting its “speed limit.” That being said, there is a lot that can be done on the front end to make sure a project is set up for success before any shovel hits the ground. Site selection is the first critical step, along with making sure you have the power and entitlements needed. Then, choosing the right development partners and bringing them to the table early, in an integrated project delivery approach, allows you to analyze key decisions quickly and plan ahead.

While some customers require customization and providers need to remain flexible, standard designs that don’t require re-education of the development team also help projects move faster. Finally, supply chain management is important to secure production slots for critical data center components like generators and UPS units. Developing at scale provides much better inroads with the major equipment suppliers, but as with anything, good relationships can also go a long way.

NEXT: The impact of the GDPR and data privacy regulation.

Keep pace with the fast-moving world of data centers and cloud computing by following us on Twitter and Facebook, connecting with me on LinkedIn, and signing up for our weekly newsletter.

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
