Infrastructure Availability Zones: Increase Resiliency as You Scale

Feb. 11, 2019
The cloud has brought a new way of looking at scaling capacity and resiliency through availability zones (AZ). Cyxtera’s Randy Rowland explores why, when and how to apply the concept of availability zones to dedicated infrastructure in colocation.

The cloud has brought a new way of looking at scaling capacity and resiliency through availability zones (AZ). Cloud AZs are created when cloud providers deploy separate hyperscale node instances in multiple locations within a metro area. Customers of the cloud provider are then able to deploy their workloads across these multiple node instances, mitigating single points of failure.
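The placement idea behind availability zones can be sketched in a few lines: spread each workload's replicas across separate zones so no single facility holds every copy. This is a minimal illustration, not any provider's actual scheduler; the zone names and workloads are hypothetical.

```python
from itertools import cycle

def place_replicas(workloads, zones):
    """Round-robin each workload's replicas across availability zones
    so that no single facility holds every copy of a workload."""
    placements = {}
    for name, replica_count in workloads.items():
        zone_cycle = cycle(zones)
        placements[name] = [next(zone_cycle) for _ in range(replica_count)]
    return placements

# Example: two workloads placed across three metro-area facilities
# (hypothetical names).
zones = ["metro-az1", "metro-az2", "metro-az3"]
workloads = {"point-of-sale": 3, "trading": 2}
print(place_replicas(workloads, zones))
```

With three zones, the three point-of-sale replicas each land in a different facility, so the loss of any one facility leaves two copies running.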

Dedicated infrastructure, on the other hand, has traditionally been deployed in a single location to help reduce complexity and maximize operational control. For example, infrastructure housed in one physical data center location increases operational staff efficiency and makes shipping and receiving more convenient. This approach, while delivering some benefits, lacks the scale and resiliency of cloud availability zones that many workloads require. The natural next step would be cloud migration, but as more businesses realize that cloud is not a fit for every workload, they’re turning to dedicated infrastructure to support increased demand. To provide comparable resiliency and scale, organizations should apply a similar availability zone approach to their dedicated infrastructure deployments.

Implementing Dedicated Infrastructure Availability Zones

The first step is to decide which workloads would benefit most from dedicated infrastructure AZs. Workloads such as mission-critical internal or external applications with high availability requirements are often a good place to start. These could include point-of-sale, trading or back-office manufacturing operations systems. Applications experiencing continuous growth in capacity demand or user/transaction volume are also a logical choice. Selected workloads should also use a hypervisor, architecture or software that can support multi-site high-availability configurations in order to benefit from dedicated infrastructure AZs.

Next, consider when the right time is to implement AZs for these workloads. Migrations, tech refreshes or new workload deployments offer the best opportunity. For existing workloads, organizations should incorporate AZs as they plan migrations from an on-premises data center to a new colocation footprint. Tech refreshes or deployments of significant capacity for existing workloads already in colocation present an ideal opportunity to make AZs part of the application architecture. For new workloads, planning well in advance of initial deployments enables businesses to achieve the benefits of cloud AZs for their dedicated infrastructure footprint.

The Role of the Colo Provider

To enable dedicated infrastructure AZs, organizations should look for providers that offer at least three physically separated data center facilities in their desired metro region. In addition to metro area facility breadth and density, the provider should also offer low latency network connectivity between facilities in the AZ. While industry average latency between facilities is typically 10-15 milliseconds, 5 milliseconds or less is needed to ensure resources distributed across different facilities in the AZ operate as if they were in a single cabinet.
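A quick way to sanity-check that inter-facility latency meets a target like the 5 ms figure above is to time TCP connection handshakes between endpoints in each facility. This is a rough sketch, not a substitute for proper network measurement tooling; the endpoint hostnames are hypothetical placeholders.

```python
import socket
import time

def tcp_rtt_ms(host, port=443, samples=5):
    """Estimate round-trip latency to a host by timing TCP handshakes.
    A rough proxy for network RTT; takes the minimum of several samples
    to filter out scheduling jitter."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # connection established; close immediately
        rtts.append((time.perf_counter() - start) * 1000)
    return min(rtts)

# Flag inter-facility links that exceed a 5 ms target
# (hypothetical facility endpoints):
# for host in ["az1.example.net", "az2.example.net"]:
#     rtt = tcp_rtt_ms(host)
#     print(host, f"{rtt:.2f} ms", "OK" if rtt <= 5.0 else "too slow")
```

Because a TCP handshake includes one full round trip plus connection setup overhead, this slightly overestimates raw network RTT, which makes it a conservative check against a latency budget.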

It’s also essential that the colocation provider delivers the connectivity ecosystem needed to access both local and long-haul network service providers. Without this, the AZ’s benefits can be limited, just as they would be for cloud.

The time it takes the provider to provision colocation is another key consideration. Typical deployments can take three to six months, simply too long when users expect agility. Colocation compute and connectivity should be cloud-like: easy to provision and scale. Look for providers that offer on-demand provisioning of connectivity and pre-configured dedicated compute infrastructure to shorten deployment to as little as a single business day.

Finally, make sure that the provider’s operations staff is seasoned. Experienced ‘remote hands’ staff can mitigate onsite efficiency concerns raised when colocation deployments are distributed across multiple facilities.

More organizations are turning to data center providers to support mission-critical workloads not fit for cloud. But they shouldn’t have to compromise reliability. Businesses can still leverage cloud-like practices such as AZs to increase resiliency and scale. Innovations such as on-demand provisioning, low latency metro region connections and the right connectivity ecosystem make it even easier to mitigate single points of failure for dedicated infrastructure in colocation.

Randy Rowland is the President of Data Center Services at Cyxtera

About the Author

Voices of the Industry

Our Voice of the Industry feature showcases guest articles on thought leadership from sponsors of Data Center Frontier. For more information, see our Voices of the Industry description and guidelines.
