Interview: Cyxtera Field CTO Holland Barry On Cloud's Balancing Act In Colo: AI and Enterprise Workloads

Sept. 15, 2023
Cyxtera Field CTO Holland Barry recently spoke with Data Center Frontier's editors for a podcast discussion probing the ongoing "cloudification of colocation." An edited transcript of that talk follows.

Data Center Frontier editors Matt Vincent and David Chernicoff recently sat down with Barry to probe that trend as DCF reckons it, and to hear his expert perspective on handling AI and enterprise workloads from inside the walls of a Cyxtera data center.

The discussion took the company's recent announcement of its partnership with HPE as a jumping-off point for Barry's observations on how colocation data centers' cloud and IT imperatives increasingly revolve around AI, traditional enterprise workloads, and "recalibration" opportunities.

Lightly edited for length and continuity, here is the marrow of our recent podcast interview with Cyxtera Field CTO Holland Barry.

Data Center Frontier:  Could you begin by giving us some information about your background, and your role with Cyxtera?

Holland Barry:  I've been in technology for a few decades now out here in Silicon Valley. Everything from IT management to infrastructure automation to cybersecurity, and now kind of back in infrastructure-land on the data center side. My function here at Cyxtera largely surrounds the technologies we implement internally and the technologies we deploy within the data centers themselves, to make sure that the experience of being in the data center colo facility is seamless and feels as much like cloud as it can - in terms of the provisioning of services, how we bill for things, things like that. There are a lot of swim lanes underneath those. But generally speaking, I'm a technologist at heart, and I just want to make sure that what we're building is useful for the market to consume.

DCF:  We all read what Nvidia's Jensen Huang said recently about what AI is going to do inside the footprint of existing data centers. Could you shed some light on what Cyxtera's overall vision is for supporting the rise of AI workloads in data centers?

HB:  I think generally speaking, AI at scale is very happy in data centers like ours and our peer facilities. There's a lot of benefits from a cost perspective and from a performance perspective to running AI in a colo and in a data center. Some of the things that we're working on as a company to help make that experience even better revolve around how we're implementing next generation cooling technologies, how we're augmenting our network to facilitate higher speeds, how we're being more efficient with power designs, and how we can accommodate higher density power designs for racks that are filled up with GPUs that don't look like your typical workload stack for the enterprise.

How do we contemplate adjacency as it relates to other venues where parts of the AI process are happening, whether that be edge or in the public cloud? How do we make sure that we have the proper wide area networking and the network choices so people can accommodate whatever use case and business outcome they're trying to achieve? Because we realize: AI applications aren't residing solely in our facilities. There's other components to the stack. So how do we make sure that from a network plumbing perspective and a network choice perspective, we can help tie in those other pieces of the puzzle?

DCF:  Regarding the announcement two weeks ago from Digital Realty that said they were going to be adding 70 kilowatt racks in 28 of their data centers, is Cyxtera going to make a definitive statement saying you're going to offer this type of arrangement in your facilities to your customers?

HB: There's a spectrum, right, in terms of power densities and the different ways that you can achieve them. I believe that Digital announcement was largely based on a rear door heat exchanger design that did include water coming in to achieve that. We can certainly do that today. It's something that we have been able to support for a while. I think the demand has increased quite a bit, even over the last year. Even just from an RFP perspective, we're seeing more requests for this level of design. They mentioned that they can do that in 28 markets; I think we can support that in around 18 of our markets right now. Whether or not we'll make a big public announcement, I don't know, but I think we've already been talking with our customers and our partners, and even through other media outlets, about the fact that this is a capability we're certainly on track with.

DCF:  Cloud costs are rising. How are collaborations such as the new one between Cyxtera and HPE addressing that reality in the cloud?

HB:  The 'why' behind us working with partners like HPE, building out our digital exchange platform, and having a home for at least a leg of a multi-hybrid cloud strategy largely surrounds cost mitigation and performance enhancements. There's a certain workload profile in the public cloud, once you reach a certain scale and performance predictability, where there's a lot of benefit to doing an extraction and recalibrating that app to run in a different venue like ours. The costs become extremely predictable with us. There's no such thing as an egress fee, for example, with us.

Especially for applications that have a large center of data gravity, there's just a lot of benefit to running them with us. And we're big proponents of leveraging public cloud for seasonal workloads, experimentation, maybe DevOps, things like that. There's a lot of benefit to the public cloud for a certain workload profile, and we certainly leverage those venues for that. There's some really nice use cases. We personally use several public cloud providers and SaaS providers to do things that we're more than capable of utilizing our platform to do.

HPE is one of the foundations of our bare metal platform. We call it enterprise bare metal. You can rent servers from us by the month in a flat monthly opex model and really get control of those costs and have dedicated single tenant performance. That's another key, too. This isn't the multi-tenant cloud that we're offering - it very much feels like you built your own little data center. What we save you is having to outlay the capital to procure the equipment, both the servers and the core networking, and you get to pay for it with opex dollars instead of capex. If those things are important to you, we're a great place to reach out to.

DCF:  Do you see customers moving to the HPE GreenLake solution on Cyxtera due to a need to repatriate cloud workloads now to a hybrid situation?

HB: That's one use case. I think there's another big use case too, where people are at an inflection point on doing a tech refresh. That could be in their own data center at their campus HQ that they might be thinking about exiting. That could be in a colo environment with us, where they're thinking about spending those dollars in an opex model that's more beneficial to the business and is going to make the CFO happy. So it's our job as a data center operator to present these choices and have optionality, both from a financing model perspective and a technology perspective. And so our partnership with HPE goes a long way to scratching a lot of itches in the industry.

DCF:  Will you be a drop-in into the GreenLake console for HPE customers?

HB:  The way we've partnered with HPE right now is as a foundational financial vehicle for us to deploy and provision our bare metal compute nodes on our platform. We certainly also have instances of GreenLake running in our data centers that are just leveraging colo, and we're super happy to operate in either one of those models - again, back to that theme of optionality and choice.

DCF:  We wanted to ask you about cooling in the Cyxtera data centers, particularly in anticipation of AI developments, but also just what the cooling footprint looks like for your traditional computing. What's the ratio of liquid cooling to any other kind of cooling that Cyxtera might be employing?

HB:  We are definitely seeing an increase in liquid cooling at large. I'm just going to put liquid cooling in one category, even though there are subdivisions - there's direct to chip, there's many different ways you can achieve it. We're hearing more about immersion and surgical immersion, but I think in general there's a theme of curiosity, and then there are actual deployments we can reference that are leveraging water in some way, shape or form as a cooling mechanism, as opposed to air.

I kind of look at it in power range chunks, in terms of where the inflection points are for a decision on how to cool these servers. It really depends on the composition of the data center - how the air is being blown, whether it's contained or not. Generally speaking, you can probably work your way up to blowing enough cold air on an environment, especially utilizing concentrated containment, to keep the servers happy and cold. Sometimes that ceiling is lower in certain data centers. Again, there are all these different variables that can dictate how effective air can be.

But you certainly reach a ceiling with air, and then you have to start thinking about liquid - maybe popping off those heat sinks and running water loops on those chips. We talked about the rear door heat exchangers. Those can certainly get you way up into that 70, 80, 90 kilowatt range per rack in terms of the power envelope. And then you start running into some challenges with utilizing water. Again, there's tons of variables, so I won't even cite a range. But this is where people are starting to think about: when does immersion start to take over?

It's certainly these super high power envelope designs where submerging the entire server, or a part of it, into liquid becomes one of your only options to properly dissipate the heat. It wasn't too long ago that taking the immersion approach risked warranty issues from the OEMs. They wouldn't support it. But it seems like more and more the OEMs are coming around to supporting these things and not having it void the warranty. Or the immersion vendor will say, hey, we'll warranty it if the OEM won't. So I think there's a lot more comfort in the exploration of these types of next gen cooling technologies. We're not yet seeing them deployed at scale, especially on the upper end of the enterprise.
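Ed. note: As a rough sketch of the decision bands Barry describes - air at the low end, rear door heat exchangers or direct-to-chip loops into the 70 to 90 kilowatt per rack range, and immersion above that - the logic might look like the following. The air-cooling ceiling below is an assumed placeholder, since Barry deliberately avoids citing one.

```python
# Rough decision sketch mapping a rack's power envelope to a cooling
# approach, following the bands described in the interview. The
# air-cooling ceiling varies widely by facility (containment, airflow),
# so the threshold below is an assumed placeholder, not a cited figure.
AIR_CEILING_KW = 40          # assumed placeholder, varies by facility
REAR_DOOR_CEILING_KW = 90    # interview cites a 70-90 kW per rack range

def cooling_approach(rack_kw: float) -> str:
    """Pick a cooling approach for a given per-rack power envelope."""
    if rack_kw <= AIR_CEILING_KW:
        return "air, extended with cold-air containment"
    if rack_kw <= REAR_DOOR_CEILING_KW:
        return "liquid: rear door heat exchanger or direct-to-chip loops"
    return "immersion (full or partial) to properly dissipate the heat"

for kw in (15, 60, 120):
    print(f"{kw:>3} kW/rack -> {cooling_approach(kw)}")
```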

Certainly the curiosity is there, and people wanting to do some tire kicking is there, more so than we've ever seen, so I anticipate we're going to start seeing it in production. If I were to guess on a timeline, it's still probably in a three to four year range. But we're starting to see some very promising early adopters come into our data centers leveraging this technology - which has really been around for decades, and which I think people are now just getting the comfort to say, yeah, let's go ahead and implement.

DCF:  What kind of impact are you seeing on your planning stages for your future data centers relative to AI, higher power densities, that sort of thing? Not just AI, but any higher power density solution. And regarding your power purchase agreements, are you looking at trying to get ahead of the curve for buying the next generation power needs?

HB:  I'm not sure what we disclose publicly in terms of the power purchase agreements. I can tell you there is a mix, though, for sure. We do think about that in certain markets, especially ones that are volatile, or ones that we perceive as volatile. In terms of the physical outfitting of data centers - especially if we were tasked to build a net new one today - certainly there would be a desire to have water pre-plumbed into them. We might have areas of slab where we can comfortably deal with the weight loads of some of these immersion tanks. Some of them can weigh many tons; putting those on a raised floor without reinforcements can be an issue, and they need to be tied down. Those are all things we're considering, whether we're building out a new module, retrofitting a module in an existing data center, or building out a net new one. But to answer your question and not take the scenic route: it is absolutely a consideration. High performance computing and the provisions we need to have as a data center provider are top of mind as we think about these designs.

DCF:  Regarding the convergence of cloud and hybrid IT versus on-premises needs and trends, how does Cyxtera see itself fitting in and addressing all the competitive demands that go along with that?

HB:  I think in general terms, we view ourselves as a piece of a multi-hybrid cloud, multivenue strategy. We don't think we're the best thing for everything. We're very upfront with our customers and prospects as we're uncovering the types of workloads that they might be contemplating putting with us. I think we have a pretty big strike zone in terms of workloads that are happy with us, but we're also happy to say, hey, we're leveraging AWS for this, we're leveraging Azure for that, we're leveraging SaaS services for CRM and other platforms, because they're really good at these very specific things.

I know people are calling it repatriation; I like to call it recalibration, because a lot of us were forced to overcorrect into public cloud, especially during the pandemic. And now the costs often aren't in line with what the forecasts were, and the CFOs have questions for the infrastructure operators: why are these bills so high? So we're having a lot of cost-centric conversations about how we can surgically pull back those workloads and data sets that are going to have a better performance profile, and certainly a more predictable cost profile, in the data center.

I think just in general, with how people have modularized their apps, and the different ways people have made the act of running workloads in different venues easier - from a networking perspective, from a VM or container or compute management perspective - people are getting a lot more comfortable saying, let's leverage the strengths of these different venues, and not have it be one or the other, or just all one.

DCF:  Can you expand on how you see your customers running into issues with egress fees and that type of expense as they, as you put it, "recalibrate" their workloads?

HB:  Egress is one, and I think it's an easy one to pick on just because it's an easy one to understand. Also, I think there have been some high profile cases of people getting surprised; egress can manifest in ways that people just didn't anticipate. There's the act of bringing the data out of a public cloud instance or storage volume, but then there's also the traversal of data between different instance types. Some of those motions of data can also count as egress. People didn't know it; they didn't anticipate it.

Then I think, in general, especially if people had to make a quick decision to go up to public cloud, they might not have contemplated all the supporting PaaS services it takes to truly bring that application to life - load balancer fees, API fees, all these things. So it gets to be a death by a thousand cuts situation when that final bill comes in.

Sometimes those final bills are hard to understand too, to see where the real problem children are. So I think it's a collection of all those things that are driving people to have, again, a very cost-centric conversation about how might we replicate the core functionality that we need back in a private cloud setting, hopefully in one of our data centers. That's really the impetus behind that recalibration exercise.
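Ed. note: To illustrate how these line items compound into the "death by a thousand cuts" bill Barry describes, here is a hedged back-of-envelope tally. Every rate and volume below is a hypothetical placeholder, not a figure from Cyxtera or any cloud provider's price sheet.

```python
# Back-of-envelope monthly cloud bill tally. All rates and volumes are
# hypothetical placeholders chosen for illustration only.
EGRESS_PER_GB = 0.09       # assumed internet egress rate, $/GB
CROSS_ZONE_PER_GB = 0.01   # assumed inter-instance/zone transfer, $/GB
LB_PER_HOUR = 0.025        # assumed load balancer rate, $/hour
API_PER_MILLION = 3.50     # assumed API gateway rate, $/1M calls

def monthly_bill(egress_gb, cross_zone_gb, lb_count, api_millions):
    """Sum the easily overlooked line items for one month."""
    hours = 730  # average hours in a month
    return (egress_gb * EGRESS_PER_GB
            # data moving between instance types can count as egress too
            + cross_zone_gb * CROSS_ZONE_PER_GB
            + lb_count * LB_PER_HOUR * hours
            + api_millions * API_PER_MILLION)

# 20 TB out to the internet, 50 TB shuffled between zones,
# 4 load balancers, 500 million API calls:
print(f"${monthly_bill(20_000, 50_000, 4, 500):,.2f}")  # -> $4,123.00
```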

DCF:  Are you seeing a fair percentage of customers who feel they should have done this yesterday, in terms of suddenly realizing what the costs were like and the impact on their business, in terms of moving to a colocation facility?

HB:  Absolutely. Part of it is an awareness thing, understanding that we have a platform, that you can build a private cloud and not have to go spend millions of dollars of capex getting the equipment. You can delegate the management of the core network to us. So they don't have to do a lot of retraining of their teams. That's one big aspect of running in public cloud. You're largely abstracted from at least the underlay network. And when you come back to a private cloud setting, all of a sudden you need to have people who are familiar with programming switches, and thinking about network topologies and spanning tree, all these concepts you don't really have to worry about in public cloud. So how can you delegate to us, or one of our ecosystem partners, to abstract that level of complexity so you can get back to the part that matters, which is the application layer? I think there's a lot of technology choices that we're having to implement to make that act of coming back or starting with us easier, and not have it be building a whole data center from scratch.

DCF:  In the announcement of its new partnership with HPE, Cyxtera said it anticipates more of such technology collaborations. What might that portend for your company strategy going forward?

HB:  Without getting into specifics and roadmaps, it's incumbent upon us to provide choices and solutions that make coming to a venue like ours and moving workloads to a venue like ours as easy as possible. That means having choices across compute, networking, storage, and security that can help you achieve the end state of eventually getting your application live. The announcement with HPE goes a long way toward crafting that end state. If you just look at their portfolio of capabilities, everything from the physical infrastructure to the software layer to the services they can bring to the party, it allows us to solution alongside our friends at HPE with the end customer, or if there's a partner involved, to get to that end state and achieve those business outcomes that the ultimate end customer is looking to achieve. So you can think about this as kind of like a foundational relationship where there's a lot of different digital building blocks that we can snap together in creative ways to eventually build that multi-hybrid cloud reality that I think most of us are going to be living in going forward.

Ed. Note - See our original podcast article for a timeline of discussion points plus DCF's latest notes on Cyxtera's larger data center business.

About the Author

DCF Staff

Data Center Frontier charts the future of data centers and cloud computing. We write about what’s next for the Internet, and the innovations that will take us there.
