How to Calculate Data Center Downtime

Oct. 26, 2015
A range of variables can contribute to the cost of downtime, and the actual price will change depending on the business and what factors impact its operations. Rob McClary of FORTRUST provides insights into the downtime equation.

In this week’s edition of Voices of the Industry, Robert McClary, SVP and General Manager of FORTRUST, explains how a range of variables can contribute to the cost of data center downtime, and how the actual price may depend on the business.


Currently, there are several statistics floating around the industry about the cost of downtime. For instance, a frequently cited 2013 study from the Ponemon Institute puts the cost of unplanned downtime in the U.S. at $7,900 for each minute that ticks by. Another survey, from CA Technologies, found that downtime costs small businesses upwards of $55,000 per year, and large enterprises more than $1 million per year. Gartner analyst Andrew Lerner pegged network downtime at around $5,600 per minute in 2014.

However, downtime isn’t a one-size-fits-all figure. A range of variables can contribute to its cost, meaning the actual price will change depending on the business and what factors impact its operations.

But what exactly are these variables and how do they add up when it comes to the cost of downtime? Let’s take a look:

Most Common Causes of Unplanned Downtime

There are several root causes that can lead to downtime and its associated costs. The Uptime Institute reported that overall, 88 percent of unplanned downtime events were caused by human or mechanical errors and 12 percent were caused by weather.


These rank differently when it comes to cost. An Emerson white paper found that UPS failures are perhaps the most costly, with the average event costing more than $600,000. Generator failures also proved expensive, with a price tag of over $400,000 per event.
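To see how per-event figures like these roll up into a budget number, consider a minimal sketch that multiplies each cause’s per-event cost by how often it occurs. The per-event costs below are the rounded Emerson figures cited above; the events-per-year frequencies are hypothetical placeholders a business would replace with its own history.

```python
# Minimal sketch: expected annual cost from per-event costs and frequencies.
# Per-event costs are the rounded Emerson figures cited above;
# the events-per-year values are hypothetical placeholders.
causes = {
    "UPS failure":       {"cost_per_event": 600_000, "events_per_year": 0.25},
    "generator failure": {"cost_per_event": 400_000, "events_per_year": 0.10},
}

expected_annual_cost = sum(
    c["cost_per_event"] * c["events_per_year"] for c in causes.values()
)
print(f"Expected annual downtime cost: ${expected_annual_cost:,.0f}")
# -> Expected annual downtime cost: $190,000
```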

Environmental Factors

However, this is by no means an exhaustive list of downtime causes. An outage can also be caused by environmental factors that impact equipment performance. Humidity, for example, is a very important factor to keep in mind: if it climbs too high, condensation can form and corrode equipment, and if it drops too low, the risk of electrostatic discharge rises. Either extreme can lead to an outage.

Temperature is also key. Mechanical failures in heating and cooling systems are among the leading causes of downtime, and cooling systems in particular are especially important – if these fail and the considerable heat produced by critical computing equipment isn’t removed, those machines can easily overheat.

Natural disasters are another top cause of downtime. This variable is commonly forgotten, but it can have a significant impact on downtime and its cost. The risk of disasters like earthquakes depends on the location of the data center – certain areas of the nation and world are more prone to events like hurricanes, tornadoes or severe storms that can damage outside cabling, transformers or the data center building itself. Extreme heat can also demand more of cooling systems, creating a higher chance of failure. A location like Denver, Colorado, on the other hand, can offer an ideal site for a data center, as it features one of the lowest-risk climates according to the Federal Emergency Management Agency’s ratings. Denver sits in FEMA Region 8, one of the lowest-risk areas in the nation.

Power Outage

Downtime events related to power failure can be among the most costly.

“One of the primary reasons power vulnerabilities are so costly for data centers is that a failure in the power infrastructure will likely result in a catastrophic, total unplanned outage,” stated an Emerson Network Power white paper. “This means that in addition to any direct costs incurred to remedy the cause of the outage, indirect and opportunity costs also will be significant due to the fact that all stakeholders will be affected by the outage.”

Emerson noted further that although the vast majority of data center facilities have redundant power systems in place, these can fail as well, especially after long periods in service. Costs climb considerably when both sources of power fail.
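A rough way to see why dual-path power failures still happen is a simple independence model, sketched below. Both per-path failure probabilities are hypothetical, and real power paths often share failure modes (a common switchgear fault, for instance), so treat the result as an optimistic lower bound.

```python
# Rough sketch: chance that both power paths fail at once, assuming
# the paths fail independently. Both probabilities are hypothetical,
# and shared failure modes make the real risk higher than this.
p_primary_fails = 0.02  # primary utility feed unavailable in a given window
p_backup_fails = 0.05   # UPS/generator path fails when called upon

p_total_outage = p_primary_fails * p_backup_fails
print(f"Chance of a total power outage: {p_total_outage:.2%}")
# -> Chance of a total power outage: 0.10%
```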


Human Error

A number of studies cite human error as a leading cause of downtime – Robert McFarlane, data center design expert, told TechTarget that many surveys find 75 percent of downtime is rooted in mistakes made by facility staff. Human error is by no means a monolithic cause either – there is a range of variables to keep in mind here as well.

Human error can run the gamut from a small issue needing a quick fix to a large problem that leads to extended downtime. Depending on the error that took place and which systems were affected, the cost of human error can vary widely.

Duration and Frequency

Cost can also change depending on the duration and frequency of data center outages. If downtime costs a provider several thousand dollars for each minute, it goes without saying that an hour, a day or a week of downtime will cost proportionally more. In addition, a data center that is prone to frequent outages will have higher downtime costs than one with more reliable uptime.
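Putting these two variables together gives a first-order estimate: per-minute cost times outage duration times outage frequency. The sketch below reuses the Ponemon per-minute figure cited earlier; the duration and frequency values are hypothetical placeholders that a business would replace with its own numbers.

```python
# First-order estimate: downtime cost = rate x duration x frequency.
# The per-minute rate is the Ponemon figure cited earlier; the outage
# duration and frequency are hypothetical placeholders.
cost_per_minute = 7_900    # USD per minute (Ponemon Institute, 2013)
minutes_per_outage = 90    # hypothetical average outage length
outages_per_year = 2       # hypothetical outage frequency

annual_cost = cost_per_minute * minutes_per_outage * outages_per_year
print(f"Estimated annual downtime cost: ${annual_cost:,}")
# -> Estimated annual downtime cost: $1,422,000
```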

Intangible Costs: Brand Reputation

McFarlane noted that one of the most overlooked downtime costs is the one associated with brand reputation and corporate image. Customers may not look on a provider as favorably if it has a poor track record of supporting their technology needs. Frequent or widespread outages may also cost a business when it comes to prospective clients.

Planned Downtime

It is also important to remember that not all downtime that takes place at a facility is unplanned. If data center managers need to perform maintenance or add new machines to their infrastructure, they may need to take certain operations offline. This is known as planned downtime, and it is typically built into the facility’s schedule. Often with planned downtime, redundant systems take over so that clients still have access to their critical resources – though not always. When necessary and done properly, this allows the provider to carry out maintenance without interrupting critical services. Still, a premium data center provider should be able to perform upkeep without any downtime being necessary, planned or unplanned.
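When weighing planned downtime against uptime commitments, it helps to translate an availability percentage into minutes of allowable downtime per year. The sketch below does that conversion for three common availability tiers; the tiers are industry shorthand rather than figures from this article.

```python
# Sketch: converting an availability percentage into allowable downtime
# per year. The tiers shown are common industry shorthand, not figures
# from this article.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for availability in (0.999, 0.9999, 0.99999):
    downtime_minutes = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} uptime -> {downtime_minutes:,.1f} minutes of downtime per year")
```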

There is a range of variables to keep in mind when calculating downtime cost. The price of an outage will be different for every business, but mitigating these factors can help boost reliability and improve access for clients.

Robert D. McClary is Senior Vice President and General Manager, responsible for the overall supervision of business operations, high-profile construction and strategic technical direction at FORTRUST.

