Welcome to our third Data Center Executive Roundtable, a quarterly feature showcasing the insights of thought leaders on the state of the data center industry, and where it is headed. In our First Quarter 2016 roundtable, we will examine four topics: trends in cloud computing for 2016, the impact of hybrid cloud on data center providers, the state of the market for DCIM (Data Center Infrastructure Management), and the outlook for mergers and consolidation in the data center sector.
Here’s a look at our distinguished panel:
- Chris Sharp, the chief technology officer of Digital Realty, the world’s largest provider of data centers and technical real estate. Chris has deep knowledge of the data center sector from his 20 years of experience, including leadership positions with Equinix, Qwest and MCI/Verizon.
- Chris Crosby, the founder and CEO of Compass Datacenters, a recognized visionary with more than 20 years' experience in the data center space. Chris previously served as a Senior Vice President and founding member of Digital Realty Trust, as well as Proferian/GI Partners and CRG West (now CoreSite).
- Jeff Klaus, General Manager of Intel Data Center Manager (DCM) Solutions. Jeff leads a global team that designs and builds Intel DCM, middleware that is integrated into Data Center Infrastructure Management (DCIM) consoles to increase data center power and thermal efficiency.
- Michael Custer, President and CEO of Chatsworth Products (CPI). Michael oversees the direction and management of CPI globally, and is responsible for ensuring the company continues its growth in the global IT market.
- Mark Wachtmann serves as Vice President of Product and Technology for IO and is responsible for the overall performance of the network and cloud services product lines. He has more than 20 years of demonstrated success leading business and technology strategy, planning and execution for start-ups and established companies with global footprints.
The conversation is moderated by Rich Miller, the founder and editor of Data Center Frontier. Each day this week we will present a Q&A with these executives on one of our key topics. We begin our discussion by looking at trends in cloud computing for 2016:
Data Center Frontier: The rise of cloud computing continues to boost business for data center providers. What are the trends you are seeing in cloud adoption, and how are they shaping the role of the data center service provider?
Chris Crosby: Obviously, the rate of cloud adoption continues to increase dramatically, but I think we need to look at this from both the perspective of the enterprise and that of the providers themselves.
For the enterprise, more than anything else, the cloud means choice. Historically, IT's approach to supporting corporate applications embodied the old adage of "when all you have is a hammer, everything looks like a nail." All applications were supported within one or more of the organization's data centers, whether those were its own facilities or space within an MTDC (multi-tenant data center).
The cloud has changed that. Now CIOs have the ability to match the application to the appropriate platform. For example, I can use one or more AaaS (Application as a Service) providers – Salesforce.com, for example – to deliver the applications the company needs directly from the provider. Shared cloud offerings give me the ability to support applications with uncertain bandwidth and storage requirements, like a new product offering or promotion, without adding new equipment in my own facility, while still being able to obtain the information required to evaluate performance. Bare metal or dedicated/private cloud offerings provide yet another alternative for applications where I need better performance and scale, but it's still below what would be required if we were supporting them ourselves.
In effect, the cloud provides the CIO with a "menu" of service offerings to cost-effectively support the company's applications. Of course, this means that the CIO and his organization must fulfill a more strategic role within the company, as they are responsible for evaluating requirements along a continuum of factors including regulation and risk, scale, control and expertise. IT will need to balance the needs of the organization, and of the application, against where they reside along this continuum to identify the most efficient platform to use. Naturally, this will be a continuous process, since application requirements will be constantly evolving.
One byproduct we are seeing from the incorporation of one or more cloud-based applications into the business is that CIOs need to reevaluate their network structures. Public Internet access to the cloud (SaaS, public cloud and bare metal cloud) in many instances simply does not provide the bandwidth necessary to support these applications. We've sat with a number of customers, and when they describe what they want to do and the network structure they currently have, it is immediately apparent that not only will the existing network not support what they are attempting, it is also costing them far more, in actual dollars and in quality of service, than a direct cloud connection would. In short, they are paying more for inferior service. The network is going to continue to be a huge issue for CIOs as they migrate more applications to the cloud.
From the provider perspective, we've noticed a subtle shift in what companies like ours are being asked to provide. The ability to scale quickly is a given, and the ability to support higher levels of density is becoming a "table stakes" requirement. Interestingly, more and more major cloud providers are looking to move away from an N, or N+1, configuration to full-blown 2N architectures. When they weigh reliability against the costs associated with an outage, they've found that it is actually more cost effective to give the site the higher degree of reliability a 2N configuration provides. 2N is also the simplest architecture for the operator to run; reducing the design to a common, simpler topology helps solve the human error problem and keep the sites running. When $80,000,000 of IT and network gear is running in the data center, trying to save $800,000 by value engineering a 2N UPS system and going to N+1 with a single point of failure is chasing pennies in a world of pounds! That is especially true when one considers the mismatch in lifespans between MEP (mechanical, electrical and plumbing) equipment and IT gear.
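To make the "pennies in a world of pounds" arithmetic concrete, here is a minimal back-of-the-envelope sketch. The $80 million gear value and $800,000 UPS savings come from Crosby's example; the per-outage cost and the annual outage probabilities for N+1 and 2N are purely illustrative assumptions, not figures from the roundtable.

```python
# Back-of-the-envelope comparison of a value-engineered N+1 UPS vs. a 2N design.
# The $80M gear value and $800K savings come from the example above; the outage
# cost and annual failure probabilities below are illustrative assumptions only.

it_gear_value = 80_000_000        # IT and network gear in the facility (from the example)
ups_savings_n_plus_1 = 800_000    # one-time capital saved by going from 2N to N+1

# Assumed figures for illustration, not from the roundtable:
cost_per_outage = 5_000_000       # hypothetical revenue/SLA impact of one full outage
p_outage_n_plus_1 = 0.05          # assumed annual outage probability with N+1 (single point of failure)
p_outage_2n = 0.005               # assumed annual outage probability with 2N

expected_loss_n_plus_1 = p_outage_n_plus_1 * cost_per_outage   # $250,000 per year
expected_loss_2n = p_outage_2n * cost_per_outage               # $25,000 per year

print(f"UPS savings as share of IT gear value: {ups_savings_n_plus_1 / it_gear_value:.1%}")
print(f"Extra expected annual outage loss with N+1: ${expected_loss_n_plus_1 - expected_loss_2n:,.0f}")
```

Under those assumed figures, the one-time $800,000 savings, about 1 percent of the gear it protects, is outweighed within a few years by the extra expected outage losses, which is the heart of Crosby's point, especially since MEP infrastructure is expected to outlast several IT refresh cycles.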
Chris Sharp: Interconnection and the ability to expand quickly are the two biggest customer requests we’re hearing today, validating that the data center is no longer just a white floor where organizations keep their servers.
The data center, which houses the cloud, is quickly emerging as a hub for cloud connectivity, allowing an organization’s community – made up of customers, partners and employees – to efficiently and quickly exchange information.
Interconnecting key ecosystems in a seamless, dynamic manner is critical, especially when paired with the need to support varying customer demands from a single cabinet to multiple megawatts globally.
This need is driving a sharp demand for connected data center campuses, which are unique ecosystems of open solutions that allow customers to easily access the cloud services they rely on and to quickly scale without breaking the budget. By liberating information collected on servers, organizations can leverage and monetize that data to fuel corporate strategy and customer growth.
Jeff Klaus: As the cloud becomes a permanent fixture for solving problems in an organization, IT is becoming more of a solution enabler than the infrastructure provider it has traditionally been. IT is required to better understand and address the rest of the organization's needs, such as security, cloud service APIs, analytics, and authentication brokering capabilities.
This shift, and the continued progression of moving assets to data center service providers, points to a longer-range discussion around where IT is going: whether it will retain infrastructure stewardship as a responsibility, or whether the role eventually transforms into more of a broker of solutions, using Software Defined Infrastructure (SDI) as a way to deliver results against a business problem.
Another area of great change is cloud analytics. Numerous products continue to pour into this space as IT and business groups need to make intelligent choices about what to put where, who has access, and how to manage growth. That, in turn, feeds into the consulting role I mentioned previously, which IT will have to take on.
Mark Wachtmann: Cloud adoption has increased over the past several years, and I fully expect this trend to continue. While this uptake has changed the demographics of data center customers, it has not impacted the overall growth of the industry. Data center service providers are seeing a shift in their customer base, from enterprises to cloud and technology service providers.
As demand for cloud services increases, cloud providers must find a data center partner that can accommodate their unique requirements. Whether it is an enterprise or a cloud provider, they all need a data center to house their network, servers and storage.
Michael Custer: The attractive economics offered by cloud providers do not appear to be going away any time soon, particularly for non-proprietary home and office applications.
The usual maturing-market decisions loom for data center operators: go big, partner, and/or find your niche.
NEXT: The Impact of Hybrid Cloud on Third-Party Data Centers
Keep pace with the fast-moving world of data centers and cloud computing by following us on Twitter and Facebook, connecting with me on LinkedIn, and signing up for our weekly newsletter using the form below: