Average Cost of a Data Center Outage

Jan. 25, 2016

In this week’s Voice of the Industry, Daniel Draper of Emerson Network Power explains the latest figures on the cost of data center downtime, as quantified by the Ponemon Institute.

Time is money. And data center downtime is A LOT of money. That’s what the latest findings from the Ponemon Institute indicate in the most recent edition of its Cost of Data Center Outages report.

The average cost of a data center outage in 2016 now stands at $740,357, up 38% from when the report was first published in 2010. That’s $8,851 per minute in lost revenue and unproductive employees (“email’s down, time for some Minesweeper!”).
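A quick back-of-the-envelope check shows what those two headline numbers imply together. The implied average outage duration below is our inference from the article’s figures, not a number stated in the report:

```python
# Figures quoted in the article (Ponemon Cost of Data Center Outages, 2016)
avg_outage_cost = 740_357   # USD per unplanned outage
cost_per_minute = 8_851     # USD per minute of downtime

# Dividing the two implies the average outage length behind these numbers
implied_duration_min = avg_outage_cost / cost_per_minute
print(f"Implied average outage length: {implied_duration_min:.0f} minutes")
# → Implied average outage length: 84 minutes
```

In other words, the headline figures are consistent with a typical outage lasting a little under an hour and a half.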

So how did the Ponemon Institute arrive at an average cost of $740,357 per unplanned outage? To get that figure, the institute audited 63 data centers in North America that had experienced an outage. Using an activity-based costing model, it captured information about both direct and indirect costs, including:

  • Damage to mission-critical data
  • Impact of downtime on organizational productivity
  • Damages to equipment and other assets
  • Cost to detect and remediate systems and core business processes
  • Legal and regulatory impact, including litigation defense cost
  • Lost confidence and trust among key stakeholders
  • Diminishment of marketplace brand and reputation

Now back to the cost of downtime. Back in 2010, the average cost of an outage was calculated at $505,502. So what explains the nearly quarter-of-a-million-dollar increase? Think back to 2010 and how much internet-based technology we used (or didn’t use, as the case may be). In 2010, I had a Facebook account, as did 500 million others around the world; now Facebook has 1.5 billion profiles. 2010 was the year the first iPad came out. Cyber Monday accounted for less than a billion dollars in sales; today, over $2 billion of commerce happens online on just that one day. Cable cord-cutting is growing, and streaming media is quickly becoming mainstream in households all across the country.

More and more commerce and communication happens through the web each day, and networks and data centers matter more than ever before. So what can we do to make sure data center owners and operators aren’t losing money (and, more importantly, creating unhappy customers)? Let’s take a look at the root causes of the outages at the audited facilities:

UPS system failure (including batteries), cyber attacks and the dreaded “human error” account for 70% of the outages. Almost all of these outages were completely preventable, and in many cases the cost of prevention was insignificant compared to the direct and indirect costs of the outage.

Generally speaking, here are some of the most basic tips to keep downtime from bringing you down:

  1. Monitor UPS Batteries – Batteries are the weak link in the UPS system. Use remote battery monitoring to identify battery problems before they impact operations.
  2. Use Intelligent Thermal controls with Cooling Units – These controls improve protection by monitoring component data points, providing unit-to-unit communications, matching airflow and capacity to room loads, automating self-healing routines, providing faster restarts and preventing hot/cold air mixing during low load conditions.
  3. Perform Preventive Maintenance – An increase in the number of annual preventive maintenance visits correlates directly with an increase in UPS MTBF. Going from zero to one preventive maintenance visit a year creates a 10x improvement; going from zero to two visits a year creates a 23x improvement.
  4. Strengthen Policies and Training – Make sure the EPO button is clearly labeled and shielded against accidental activation. Document and communicate policies and conduct regular training.
  5. Standardize and Automate Security Management – Use console servers to provide secure, remote access to servers to simplify patch management and provide early detection of attacks.
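To see why tip 3 matters financially, the MTBF multipliers quoted above can be plugged into a simple expected-cost model. This is a hypothetical sketch: the baseline MTBF and repair time are illustrative assumptions, not figures from the report, and the function name is our own:

```python
HOURS_PER_YEAR = 8_760
COST_PER_MINUTE = 8_851          # USD per minute, from the Ponemon figures above

def expected_annual_downtime_cost(mtbf_hours, mttr_hours=1.5):
    """Expected yearly downtime cost for one UPS, assuming failures
    arrive at a rate of 1/MTBF and each takes mttr_hours to resolve."""
    failures_per_year = HOURS_PER_YEAR / mtbf_hours
    downtime_minutes = failures_per_year * mttr_hours * 60
    return downtime_minutes * COST_PER_MINUTE

# MTBF multipliers quoted above: 0 -> 1 PM visit/yr is 10x, 0 -> 2 is 23x.
# Baseline MTBF of 50,000 hours is an illustrative assumption.
baseline_mtbf = 50_000
for visits, multiplier in [(0, 1), (1, 10), (2, 23)]:
    cost = expected_annual_downtime_cost(baseline_mtbf * multiplier)
    print(f"{visits} PM visits/yr: ~${cost:,.0f} expected annual downtime cost")
```

Whatever baseline you assume, the point survives: a 10x or 23x MTBF improvement divides the expected downtime cost by the same factor, which dwarfs the price of a maintenance visit.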

Cost of downtime is a popular number and a useful metric when making the case for additional resources (human and equipment) to keep your facility always on, but it’s not the only metric IT and facility professionals should think about.

In the coming months, the Ponemon Institute will be releasing four additional reports as part of the Data Center Performance Benchmark Series, covering the issues of security, productivity, speed-of-deployment and cost-to-support compute capacity.

Submitted by Daniel Draper, Director of Marketing Programs for Emerson Network Power.

About the Author

Voices of the Industry

Our Voice of the Industry feature showcases guest articles on thought leadership from sponsors of Data Center Frontier. For more information, see our Voices of the Industry description and guidelines.
