Executive Insights: Randy Rowland of Cyxtera

The Data Center Frontier Executive Roundtable features insights from industry executives with lengthy experience in the data center industry. Here’s a look at the insights from Randy Rowland of Cyxtera.

Randy Rowland, President of Data Center Services at Cyxtera

Randy Rowland leads Cyxtera’s global data center services business, responsible for all aspects including strategy, business management, operations, product development and evangelism. Since February 2013, Rowland has served as a Partner of Medina Capital, identifying companies with disruptive technologies and providing the portfolio with guidance on strategic initiatives. From December 2011 to February 2013, Rowland served as Senior Director of Cloud Platform Services at VMware where he led key initiatives to evolve VMware’s “as a Service” strategy focused primarily on Cloud Foundry and vCloud Services. From 2007 through August 2011, Rowland was a Senior Vice President of Product Development of Terremark, guiding the company into the emerging cloud computing industry by leading the development and launch of Terremark’s Enterprise Cloud service offering. From August 1999 until the time it was acquired by Terremark in 2007, Mr. Rowland was responsible for Product Development and Sales Engineering at Data Return. Rowland received his Bachelor’s degree in Industrial Distribution from Texas A&M University.

Here’s the full text of Randy Rowland’s insights from our Executive Roundtable:

Data Center Frontier: The long-predicted migration of enterprise IT workloads into third-party data centers appears to be gaining momentum. A key argument for keeping data and applications on-premises has been security. With the ongoing series of corporate data compromises, can service provider facilities – whether cloud or colocation – now make the case that they are more secure than the on-premises data center?

Randy Rowland: Data center security is of utmost importance. Colocation providers can absolutely make the case for enhanced security over on-premises data centers. But when it comes to security, picking the right colocation provider is about more than physical security certifications and compliance audits. There’s a much greater risk that bad actors will compromise your colocation facility via its network. To effectively defend against both physical and virtual threats, a holistic approach to IT infrastructure security is needed – one defined around users, their roles in the organization, and the resources they need to access to do their jobs. By looking at your data center security holistically, and ensuring your operations and cybersecurity teams work as one, you’ll be better placed to secure data against threats.
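The user/role/resource model Rowland describes is essentially role-based access control. A minimal sketch of that idea in Python follows; all names here (the roles, resources, and `can_access` function) are illustrative assumptions, not any Cyxtera product or API.

```python
# Minimal sketch of access control defined around users, their roles,
# and the resources they need -- illustrative assumptions only.

# Map each role to the resources its holders need to do their jobs.
ROLE_PERMISSIONS = {
    "facilities_tech": {"power_panels", "cooling_controls"},
    "network_engineer": {"switch_configs", "cross_connects"},
    "security_analyst": {"access_logs", "camera_feeds"},
}

def can_access(user_roles, resource):
    """Grant access only if one of the user's roles covers the resource."""
    return any(resource in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)

# A network engineer can reach switch configs but not power panels.
print(can_access({"network_engineer"}, "switch_configs"))  # True
print(can_access({"network_engineer"}, "power_panels"))    # False
```

The point of the sketch is that access decisions derive from the role-to-resource mapping, so physical and network controls can be driven from one policy.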

Data Center Frontier: Cloud platforms are seeking to expand at an accelerated rate. What has this meant to the data center supply chain and construction practices? What are the areas where data center developers can innovate to further improve the speed of deployment (and help address the capacity planning challenge)? Or have we reached our “speed limit” for data center deployment?

Randy Rowland: Cloud platforms expanding at an accelerated rate can be subdivided into two categories: hyperscale and all others. While hyperscale cloud providers work with a variety of developers to construct purpose-built facilities calling for dozens of megawatts and five- or six-digit square footage, all other cloud providers are shopping the colocation market for far smaller swaths of capacity, with a need to grow incrementally on demand.

Colocation capacity is not designed for a specific purpose; instead it embraces flexibility to ensure any potential customer’s requirements can be met. In a colocation environment, the installed infrastructure is shared, and SLAs are established to satisfy a wide variety of potential tenants. Colocation’s flexibility and broad customer base yield higher capital and operational expenses than a purpose-built facility.

Time to construct is largely predicated on infrastructure availability. Generators and switchgear are not typically sitting in a warehouse ready for installation; they must be fabricated upon request. Colocation providers forecast capacity exhaustion and plan future builds to ensure capacity remains available to satisfy tenant growth. Hyperscale cloud providers must forecast their capacity requirements and work with their developers so that just-in-time infrastructure can be delivered and capacity built and made available as growth demands. In both cases, the speed of deployment matters less than proper forecasting and planning, which dictate how far in advance the build must begin.
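The forecasting discipline Rowland describes reduces to simple lead-time arithmetic: given current load, a growth rate, and equipment fabrication lead time, you can compute when the next build must start. The sketch below illustrates this with assumed figures; none of the numbers or the function name come from the interview.

```python
# Illustrative capacity-planning arithmetic: given current load, a growth
# rate, and equipment fabrication lead time, when must the next build start?
# All figures are assumptions for the sketch, not real provider data.

def months_until_build_start(capacity_mw, load_mw, growth_mw_per_month,
                             lead_time_months):
    """Months from now until construction must begin so new capacity
    arrives before the existing capacity is exhausted."""
    months_to_exhaustion = (capacity_mw - load_mw) / growth_mw_per_month
    return max(0.0, months_to_exhaustion - lead_time_months)

# 30 MW built, 21 MW in use, growing 0.5 MW/month, 12-month gear lead time:
start_in = months_until_build_start(30, 21, 0.5, 12)
print(start_in)  # 6.0 -> the build must begin in six months
```

A result of zero means the provider is already behind: the remaining runway is shorter than the fabrication lead time, which is exactly the failure mode forecasting is meant to avoid.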

The innovation lies in the cloud provider’s ability to forecast their capacity requirements in sufficient time for the developer to arrange just-in-time infrastructure production. Colocation providers do this routinely, some better than others, just as some developers are better suited to coordinate just-in-time infrastructure fabrication, delivery, and installation.

Colocation allows businesses to shift staff and resources from data center management to focus on IT infrastructure and business priorities. (Photo: Cyxtera)

Perhaps the real difference is the associated CapEx and OpEx. Hyperscale providers, enjoying lower CapEx and OpEx cost models, will compete well against all other cloud providers, which bear the higher colocation cost model.

What’s needed is colocation that leverages an on-demand, software-powered architecture; otherwise colocation providers will face a speed limit. Software-defined point-click-provision colocation cuts the time to deploy dedicated IT environments from three to six months down to a matter of days. This covers everything from hardware design and procurement to colocation space selection and build-out, network provisioning, and hyperconverged infrastructure and hypervisor installation.

As decisions are made as to which of your applications belong in the public cloud and which are best operated out of a colocation data center, you shouldn’t have to compromise agility, flexibility or control of your infrastructure just to get the speed of cloud.

Data Center Frontier: The EU’s General Data Protection Regulation (GDPR) has finally arrived. What have been the most significant issues in the transition for data center providers and their customers? Are we likely to see similar privacy regulation in the U.S., and if so, what are the implications for the data center industry?

Randy Rowland: While it is too early to determine the exact impact of GDPR on data center providers and their customers, we have certainly seen that providers must take account of the methods and extent of their data collection practices. This will allow data center service providers and their customers to determine where the collection and transfer of Personal Data (as defined in the GDPR) may require remedial action on their part to comply with GDPR.

Data Center Frontier: All our “Things” are getting smarter and more powerful, bringing the computing power of the cloud into devices in our pockets. As more workloads begin to move to the edge of the network, how will it change how – and where – we deploy infrastructure?

Randy Rowland: The demand for workloads to be as close as possible to the edge of the network is increasing capacity demand at the core data center, as well as for storage and data analytics. The edge may now be where the users and their devices are, but data center resources are still needed to handle data wherever it is generated. And in many cases this data still needs to be routed back to core facilities.

CDN, content, and private cloud providers are coping with the change in edge computing by deploying instances in more data centers across a broader geography, placing services closer to end users. This helps reduce latency and long-haul network traffic.