Executive Insights: Chris Crosby of Compass

The Data Center Frontier Executive Roundtable features insights from industry executives with lengthy experience in the data center industry. Here’s a look at the insights from Chris Crosby of Compass Datacenters.

Chris Crosby, CEO, Compass Datacenters

Chris Crosby is a recognized visionary and leader in the data center space and the founder and CEO of Compass Datacenters. Chris has over 20 years of technology experience and over 15 years of real estate and investment experience. Previously, Chris served as a Senior Vice President and founding member of Digital Realty Trust. During his tenure at Digital Realty, he held senior roles with global responsibilities for sales, marketing, design, construction, technical operations and customer service, as well as establishing the company’s operating presence in Asia and Europe. Prior to the initial public offering of Digital Realty, Chris was founder and managing director of Proferian, an operating partner for the GI Partners portfolio, which was rolled into the IPO for Digital Realty Trust. Prior to Proferian, Chris served as a consultant for CRG West, now CoreSite. Crosby received a B.S. in Computer Sciences from the University of Texas at Austin.

Here’s the full text of Chris Crosby’s insights from our Executive Roundtable:

Data Center Frontier: The rise of cloud computing continues to boost business for data center providers. What are the trends you are seeing in cloud adoption, and how are they shaping the role of the data center service provider?

Chris Crosby: Obviously the rate of cloud adoption continues to increase dramatically, but I think we need to look at this from the perspectives of both the enterprise and the providers themselves.

For the enterprise, more than anything else, the cloud means choice. Historically for IT, how to support corporate applications embodied the old adage of “when all you have is a hammer, everything looks like a nail.” All applications were supported within one or more of the organization’s data centers, regardless of whether those were their own facilities or space within an MTDC (multi-tenant data center).

The cloud has changed that. Now CIOs have the ability to match the application to the appropriate platform. For example, I can use one or more SaaS providers, such as Salesforce.com, to deliver the applications the company needs directly from the provider. Shared cloud offerings give me the ability to support applications with uncertain bandwidth and storage requirements, like a new product offering or promotion, without adding new equipment in my own facility, while still being able to obtain the information required to evaluate performance. Bare metal or dedicated/private cloud offerings provide yet another alternative for applications where I need better performance and scale, but still below what would be required if we were supporting them ourselves.

In effect, the cloud provides the CIO with a “menu” of service offerings to cost-effectively support the company’s applications. Of course, this means that the CIO, and his organization, must fulfill a more strategic role within the company, as they are responsible for evaluating requirements along a continuum of factors including regulation and risk, scale, control and expertise. IT will need to balance the needs of the organization, and the application, against where they reside along this continuum to identify the most efficient platform to use. Naturally, this will be a continuous process, since application requirements will be constantly evolving.

Of course, outsourcing decisions need to weigh the four business lenses of cost, time, risk and benefit. For example, developing an application using the public cloud can be significantly less expensive than trying to do so in house (low cost, short time, medium risk and great benefit to the development team). That said, a stable application platform (limited variability in resource usage) running on the variable public cloud is likely to be much more expensive than one delivered on the in-house platform (porting a stable public cloud app back onto an in-house platform yields low cost, a short timeframe, low risk and high benefit).
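
To make the cost lens concrete, here is a minimal sketch of that comparison. Every price, utilization figure and amortization period below is a hypothetical assumption chosen purely for illustration, not a figure from Compass or any provider; the point is simply that a stable, always-on workload does not benefit from pay-as-you-go elasticity.

```python
# Illustrative-only comparison of running a stable, always-on workload on
# pay-as-you-go public cloud versus an amortized in-house platform.
# Every number here is a hypothetical assumption, not a quote from any provider.

HOURS_PER_MONTH = 730

# Variable public cloud: on-demand pricing is charged for every hour of use,
# so a 24x7 workload pays for elasticity it never uses.
cloud_rate_per_hour = 0.50                       # assumed on-demand hourly rate
cloud_monthly = cloud_rate_per_hour * HOURS_PER_MONTH

# In-house platform: mostly fixed cost, amortized hardware plus power and space.
server_capex = 8_000                             # assumed server purchase price
amortization_months = 36                         # assumed depreciation period
power_and_space_monthly = 60                     # assumed facility cost per server
inhouse_monthly = server_capex / amortization_months + power_and_space_monthly

print(f"Public cloud, 24x7 stable load: ${cloud_monthly:,.0f}/month")    # ~$365
print(f"In-house, amortized:            ${inhouse_monthly:,.0f}/month")  # ~$282
```

Flip the workload to something spiky or short-lived and the arithmetic reverses, which is exactly the cost, time, risk and benefit weighing described above.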

One byproduct we are seeing from the incorporation of one or more cloud-based applications within the business is that CIOs need to reevaluate their network structures. Public Internet access to the cloud (SaaS, public cloud and bare metal cloud) in many instances just does not provide the bandwidth necessary to support these applications. We’ve sat with a number of customers, and when they describe what they want to do and the network structure they currently have, it is immediately apparent not only that the existing network will not support what they are attempting to do, but that it costs them orders of magnitude more, in both actual cost and quality of service, than a direct cloud connection would. In short, they are paying more for inferior service. The network is going to continue to be a huge issue for CIOs as they migrate more applications to the cloud.

From the provider perspective, we’ve noticed a subtle shift in what companies like ours are being asked to provide. The ability to scale quickly is obvious, and the ability to support higher levels of density is becoming a “table stakes” requirement. Interestingly, more and more major cloud providers are looking to move away from an N, or N+1, configuration to full-blown 2N architectures. Weighing reliability against the costs associated with an outage, they’ve found that it is actually more cost effective to give the site the higher degree of reliability that 2N configurations provide. Additionally, 2N is the simplest architecture to operate, and standardizing on that simpler, common configuration goes a long way toward solving the human-error problem that takes sites down. When $80,000,000 of IT and network gear is running in the data center, trying to save $800,000 by value engineering a 2N UPS system down to N+1, with its single point of failure, is chasing pennies in a world of pounds! Especially when one looks at the mismatch between the equipment lifespans of MEP and IT gear.
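
A rough back-of-the-envelope calculation shows why that trade-off is “pennies in a world of pounds.” The $80 million in gear and $800,000 in savings come from the example above; the outage cost, added failure probability and MEP lifespan below are hypothetical assumptions for illustration only.

```python
# Back-of-the-envelope look at value engineering a 2N UPS system down to N+1.
# The $80M of IT/network gear and the $800K capital savings are the figures
# cited above; the outage cost, added outage probability and MEP lifespan are
# hypothetical assumptions, not industry data.

capex_savings = 800_000        # one-time savings from going 2N -> N+1 (from the example)
it_gear_value = 80_000_000     # IT and network gear running in the hall (from the example)

cost_per_outage = 5_000_000    # assumed: lost revenue, SLA credits and remediation
added_outage_prob = 0.02       # assumed: extra annual outage probability from the
                               # single point of failure introduced by N+1
mep_lifespan_years = 15        # assumed: MEP gear outlives several IT refresh cycles

expected_extra_loss = cost_per_outage * added_outage_prob * mep_lifespan_years

print(f"One-time savings from N+1:    ${capex_savings:,}")                      # $800,000
print(f"Expected extra outage losses: ${expected_extra_loss:,.0f}")             # $1,500,000
print(f"Savings vs. gear at risk:     {capex_savings / it_gear_value:.1%}")     # 1.0%
```

Even under modest assumptions, the expected losses over the life of the MEP plant swamp the one-time savings.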

A Compass Datacenters facility as it will look with its new design, which seeks to offer more flexible sizing options for fast-growing cloud computing providers. (Photo: Compass Datacenters)

Data Center Frontier: The hybrid cloud deployment model appears to be gaining traction with enterprises. How is the emergence of hybrid cloud impacting data center providers’ ability to attract enterprise business, which has historically focused on the need to shift IT workloads out of on-premise facilities?

Chris Crosby: Let’s start with this fact: for many organizations, private cloud actually runs on an in-house platform. Secondly, private cloud in many cases also includes a bare metal approach of dedicated hardware. Given all the definitions of hybrid cloud out there, it is clear that on-premise makes up a big part of the private side of the hybrid cloud. In fact, in many respects I think it actually helps organizations in their search for a data center, since the availability of alternative cloud-based platforms enables them to be more specific in defining their requirements.

Since cloud offerings enable them to “outsource” applications with less “restrictive” parameters (security, for example), things like capacity planning, determining power requirements and the like have become a bit more manageable than in the past. Security remains a key issue as it relates to the cloud, both in the minds of CIOs and in the world of regulations, and no one is ready to risk the company’s crown jewels via an all-cloud solution. As a result, the applications deemed “too sensitive” for the cloud are clearly identified as those that need to remain in the corporate facility. This reduces some of the variables (how do I plan for applications that are unknown at this point?) that were previously part of estimating future capacity needs.

We find that the companies we talk to have a better grasp of what their requirements really are and don’t want to “overbuy” as a hedge against future capacity needs. Correspondingly, they want to be able to respond quickly when they do determine they need more space and power, and we are seeing them place a larger premium on speed of delivery and the ability to pay as they go for expansion, instead of taking on more capacity than they need just to ensure it is available, as the acquisition process has traditionally worked.

Data Center Frontier: It’s been an active period in the market for data center infrastructure management (DCIM) software. What’s your take on the state of the DCIM market, and pace of adoption of these tools by end users?

Chris Crosby: Even though it seems like we’ve been talking about it forever, I think DCIM is still very much in its infancy. Despite that, an increasing number of data center operators are investing in DCIM solutions. I don’t think it is as effective at helping them manage and control their operations as they’d hoped. We see more home-grown solutions than true deployments of a full third-party system. As an analogy, a DCIM implementation is a lot like an SAP implementation: it is NOT off-the-shelf software that you just download and use.

I think this lack of effectiveness is due to both deficiencies in the products themselves as well as how they are being used by data center operations personnel. From a product perspective, the majority of the offerings are still too proprietary. If you are buying a DCIM solution as a precursor to ultimately implementing SDN in your facility, you want to have the flexibility to integrate the solution of your choice. In many instances, capabilities like this aren’t available or the end user’s options are limited.

This is similar to the problem network hardware vendors went through in the 1990s, when each vendor had its own proprietary management scheme; it wasn’t until an open platform (HP OpenView) emerged that the degree of interoperability end users required could be built into these solutions. Until such a standard becomes available, this is going to be a limitation of DCIM solutions.

The other product-related issue is that many of these products offer a wide range of functionality, but none of them do the basics very well. The products focus on the wants, not the needs, of the business. When software is developed by people focusing on wants, you end up with a camel when what you really needed was a horse. This is analogous to most business phone systems: while you can do 30 or so different things from the set on your desk, you only use three or four of them on a regular basis, so you want those to be easy to use. A number of DCIM offerings are similar in that their list of features and capabilities can fill an entire spec sheet, but they don’t do the basics well enough for the end user to successfully use the higher-level functionality.

The interior of a Compass Datacenters Mpower modular power room. (Photo: Compass Datacenters)

Unfortunately, I think the other inhibitor to successful DCIM implementation is the end users themselves. In some respects DCIM is viewed as an immediate panacea that will enable them to “get control” of their facilities, so the entire project begins with an incorrect assumption. If you don’t already have an accurate inventory of what’s in your data center, implementing DCIM isn’t going to magically fix that. Before they even initiate a conversation with a DCIM provider, end users need to have a very clear understanding of the immediate issues they want to address, coupled with a clear plan for how they will move forward from there. I don’t think a DCIM implementation is any different from any other data center-related project: if you don’t really understand what you want to accomplish, even the best product is not going to do what you need.

The real hot button for the industry will be whether Intel can get chip-based DCIM ready for primetime. With server-level feedback from a power and cooling perspective, that data will highlight in big, bright lights how many groups have a disconnect between IT and the data center it resides in. I think we can trace this back to the late-2000s focus on energy: the MEP guys went one way and the IT guys went another, and neither seemed to recognize that the work one was doing for efficiency created more issues for the other.

For example, a focus on PUE from the MEP side, coupled with variability in the power draw and fan speeds of the IT gear, means that at most data centers the user may feel they are more efficient when in fact they are NOT! How can that be? If I lower the mechanical cooling to the point that the inlet temperature is higher or the total cooling capacity is lower than before, my PUE will drop. The problem is that the fans on the servers have now ramped up, so I’ve increased the IT load itself, the “1” to the left of the PUE decimal point. Let’s say total load was 1,000 kW of IT load with a PUE of 1.40. The MEP team’s efficiency project dropped the PUE by 0.05, from 1.40 to 1.35. But to cope with the increase in inlet temperature, my server fan speeds crank up, and my IT load increases by 37 kW to 1,037 kW. The irony is that I am at essentially the same total load as I was before the project (1,400 kW old versus 1,399.9 kW new), but I have now decreased the useful life of my servers by pushing them to run at higher fan speeds!
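
Running the numbers from that example makes the effect easy to see. The sketch below is just the arithmetic from the paragraph above (1,000 kW of IT load, a starting PUE of 1.40, and a 37 kW increase in fan load); PUE is total facility power divided by IT power.

```python
# The PUE arithmetic from the example above: fan speeds ramp up, IT load
# (the denominator of PUE) grows by ~37 kW, and PUE "improves" even though
# total facility power is essentially unchanged.

def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_kw

# Before the efficiency project.
it_before = 1_000.0                       # kW of IT load
pue_before = 1.40
total_before = it_before * pue_before     # 1,400 kW of total facility power

# After: warmer inlet air pushes server fans harder, adding ~37 kW of IT load,
# while total facility power stays essentially flat.
it_after = it_before + 37.0               # 1,037 kW
total_after = 1_399.9                     # kW, per the example

print(f"PUE before: {pue(total_before, it_before):.2f}")                     # 1.40
print(f"PUE after:  {pue(total_after, it_after):.2f}")                       # 1.35
print(f"Total facility power change: {total_after - total_before:+.1f} kW")  # -0.1 kW
```

Same power bill, a lower PUE on paper, and more wear on the servers.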

Data Center Frontier: There’s been lots of mergers and acquisitions in the data center industry. Is this likely to continue? If so, how might M&A activity impact the competitive landscape in the data center and cloud hosting industry in 2016 and beyond?

Chris Crosby: Consolidation is a natural evolutionary process within any industry, so I think the fact that we will see more is undeniable. What will be more interesting is the reason behind the acquisitions. For example, Equinix has just finished the Telecity acquisition. Equinix offers customers direct connection to multiple cloud offerings, and that is helping to fuel its growth. For them, and others like them, the ability to continue to add capacity is going to be a key issue. You don’t have to look any further than Northern Virginia to see that companies like Microsoft, Facebook, Amazon, et al. can’t build new facilities fast enough to keep up with demand, so in some instances capacity will drive some of the M&A activity.

I think the other driver will be expertise. If I want to enhance my cloud capabilities or add them to my repertoire, what is the easiest and fastest way to achieve that goal? In some instances, it will be to buy someone who is already doing it. That can be more efficient than trying to build from scratch or searching for people with the experience you’re looking for. This would be especially important for companies that want to compete, or at least carve out a niche, against the AWSes and Azures of the world. These instances will be rarer than capacity-driven acquisitions, but as a company like Cisco has demonstrated, it’s sometimes easier to enter a market, or increase your share of it, by buying your way in.