Evolving Data Center Cooling Environmental Ranges and Standards

March 8, 2016
Are you prepared for the upcoming revisions to the ASHRAE data center cooling standards? This new report from Data Center Frontier will help you get ready.

The demand for more efficient and cost-effective computing has driven organizations large and small to reevaluate their data center cooling strategies. This examination can span many aspects, including system architecture and software platforms, the IT hardware, and of course the data center facility itself. Moreover, there are many strategic options for the enterprise CIO and CTO to consider, such as direct or indirect ownership and operation of their own data center facility, as well as colocation, cloud, or hybrid combinations thereof. Nonetheless, the IT hardware must ultimately reside in a physical data center that provides conditioned power and safe environmental conditions for the IT equipment.

Data Center Frontier Special Report on Data Center Cooling – Download it Now

This article is the first in a series taken from the Data Center Frontier Special Report on Data Center Cooling Standards (Getting Ready for Revisions to the ASHRAE Standard)

For the majority of traditional data centers, the environmental operating conditions have long been based on ASHRAE recommendations defined in the “Thermal Guidelines for Data Processing Environments”, which was first published in 2004 and has since been updated twice. With each succeeding edition, the facility environmental envelope ranges have broadened in response to the increased environmental ruggedness of newer generations of IT hardware. These broader ranges have given facility operators the opportunity to improve their cooling energy efficiency. The industry reliance on these ASHRAE guidelines has allowed data center facility managers to consider increasing operating temperatures and adjusting humidity ranges to save energy, while considering any effects on IT equipment reliability. The 4th edition is expected to be finalized and released in 2016.

This special report will examine the underlying relationship of temperature, humidity and energy usage, as well as the operational risk considerations of the expanded environmental ranges on both the facility and the IT equipment. It will also examine the existing issues with the ASHRAE 90.1 standard, which is used by many state and local building departments, as well as discuss the potential impact of the pending ASHRAE 90.4 standard, which is now in its 3rd review for public comment and is also expected to become effective in the fall of 2016.

Overview of Data Center Cooling Practices and ASHRAE Recommendations and Standards

The majority of data center cooling practices have been based on ASHRAE’s Technical Committee 9.9 (TC 9.9), Mission Critical Facilities, Technology Spaces, and Electronic Equipment, which published the first edition of the “Thermal Guidelines for Data Processing Environments” in 2004. TC 9.9 was formed through a cooperative effort between data center operators and IT equipment manufacturers to create a unified environmental specification range. The first edition defined a relatively narrow recommended temperature range of 68°F to 77°F (20-25°C), as well as 40-55% relative humidity (RH). These boundaries were based on the range of temperature and humidity limits provided by each of the different IT equipment manufacturers of that era, as well as the historic operating practices of data centers containing older-generation, highly sensitive, so-called legacy computing equipment. Moreover, at that time energy efficiency was not given much consideration; the primary focus was maintaining very tightly controlled environmental conditions to minimize the risk of IT hardware failure.

In 2008 the second edition of the Thermal Guidelines was released, and the recommended temperature range for Class-1 facilities was broadened to 64.4-80.6°F (18-27°C). However, the humidity limits became a bit more complex, now based on a combination of dew point (DP) temperature and relative humidity (41.9°F {5.5°C} DP to 60% RH and 59°F {15°C} DP). This was in response to newer generations of IT hardware’s increased environmental tolerances and the industry impetus to begin exploring ways to save cooling system energy. However, the historic reliance on the original, more conservative ASHRAE guidelines had already become a de facto practice and part of institutional memory. It is also important to note that while the 2nd edition included a wider “allowable” envelope, it was clearly marked “for reference only”, and the focus was still primarily on the “recommended” envelope. In fact, it warned that “prolonged operation outside of the recommended operating ranges can result in decreased equipment reliability and longevity”. As a result, despite the broadened 2008 recommended environmental envelope, most data center managers continued to keep their temperatures at 68°F (or less) and maintain a tightly controlled humidity of 50% RH.
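Because the second edition’s humidity limits combine dew point and relative humidity, checking whether a given room condition sits inside the recommended envelope takes a small psychrometric step. The sketch below is a rough illustration only: the helper names are hypothetical, it uses one common Magnus-type approximation for dew point, and it is not a substitute for the published psychrometric charts.

```python
import math

def dew_point_c(dry_bulb_c: float, rh_percent: float) -> float:
    """Approximate dew point via a common Magnus-type formula (approximation only)."""
    a, b = 17.62, 243.12
    gamma = math.log(rh_percent / 100.0) + (a * dry_bulb_c) / (b + dry_bulb_c)
    return (b * gamma) / (a - gamma)

def in_2008_recommended_envelope(dry_bulb_c: float, rh_percent: float) -> bool:
    """Check the 2nd-edition recommended envelope described above: 18-27C dry bulb,
    5.5C DP lower moisture limit, 60% RH and 15C DP upper moisture limits."""
    dp = dew_point_c(dry_bulb_c, rh_percent)
    return 18 <= dry_bulb_c <= 27 and dp >= 5.5 and rh_percent <= 60 and dp <= 15

print(in_2008_recommended_envelope(24.0, 50.0))  # True for a typical legacy setpoint
```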

Almost concurrently, The Green Grid introduced the Power Usage Effectiveness (PUE) metric in 2007, defined as the ratio of the total power used by the facility to the power used by the IT equipment. PUE was initially slow to take hold, but by 2010 awareness of the need for energy efficiency was beginning to spread throughout the industry. At first, very low PUEs of 1.2 or less made headlines when they were announced by Internet search and social media giants. This was accomplished by building custom hyperscale data centers which utilized a variety of leading-edge cooling systems designed to minimize cooling energy, such as direct outside air economizers and higher, wider temperature ranges. These designs and broader IT temperature ranges broke with conventional data center cooling practices, and were not readily accepted by traditional enterprise organizations.
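As a minimal sketch of how the metric works (the facility and IT power figures below are hypothetical, chosen only to illustrate the ratio):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT equipment power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical example: 1,500 kW drawn by the whole facility, 1,000 kW of that by IT gear.
print(pue(1500, 1000))  # 1.5 -- every watt of IT work costs another 0.5 W of overhead
```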

Wider Temperature Ranges Offer Opportunity

However, these hyperscale facilities also proved that low-cost commodity IT equipment could operate fairly reliably over a much wider temperature and humidity range than the typical, more conservative industry practice of 68°F at 50% RH. This ultimately helped spur more facility managers to begin using PUE to analyze and improve their energy efficiency. It then became much clearer that in most cases cooling consumed the largest share of the facility overhead power and therefore offered the greatest opportunity for improvement.
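As a rough, hypothetical illustration of why cooling is the usual target: assume a 1,000 kW IT load at a PUE of 1.8, and assume roughly 60% of the non-IT overhead goes to cooling (both the overhead split and the savings figure are assumptions for illustration only).

```python
# Hypothetical breakdown: of each watt of non-IT overhead, assume ~60% goes to cooling.
it_kw = 1000.0
pue_before = 1.8
overhead_kw = it_kw * (pue_before - 1)   # 800 kW of non-IT load
cooling_kw = overhead_kw * 0.60          # assumed cooling share: 480 kW
cooling_savings = cooling_kw * 0.50      # e.g. halving cooling energy via economization
pue_after = (it_kw + overhead_kw - cooling_savings) / it_kw
print(round(pue_after, 2))               # 1.56 -- a large PUE gain from cooling alone
```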

In 2011, ASHRAE provided an early release of the key details of the newly introduced expanded allowable temperature (A1-A4) ranges that would be included in the 3rd edition of the Thermal Guidelines (published in 2012). Moreover, the 3rd edition openly encouraged the more common use of non-mechanical (compressor-less) cooling, or so-called free cooling, using direct outside air economizers to take maximum advantage of ambient air temperatures (within the newly expanded allowable limits) to cool data centers. This would have been considered pure heresy just a few years prior.
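For context, the sketch below encodes the allowable dry-bulb ranges commonly cited for classes A1 through A4 and reports which classes permit a given inlet temperature. The values should be verified against the published 3rd edition, which also defines humidity and altitude limits not shown here.

```python
# Allowable dry-bulb inlet temperature ranges (degrees C) per ASHRAE class,
# as commonly cited from the 2011 TC 9.9 whitepaper; verify against the guideline itself.
ALLOWABLE_C = {
    "A1": (15, 32),
    "A2": (10, 35),
    "A3": (5, 40),
    "A4": (5, 45),
}

def classes_allowing(inlet_c: float) -> list[str]:
    """Return the ASHRAE classes whose allowable range covers the given inlet temperature."""
    return [cls for cls, (low, high) in ALLOWABLE_C.items() if low <= inlet_c <= high]

print(classes_allowing(33.0))  # ['A2', 'A3', 'A4']
```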

This seemingly radical declaration was the result of the IT equipment manufacturers internally sharing their projected failure rates versus temperature over the expanded temperature ranges. From this anonymized data, they created the X-Factor risk projections. The publication of the X-Factor was meant to encourage data center managers to save energy by providing the information needed to consider increasing operating temperatures to increase the use of free cooling, while still maintaining acceptable expectations of IT equipment reliability. Although released in 2011, the X-Factor is still highly debated and sometimes misinterpreted. We will delve into the details of these factors in the Server Reliability section.
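Conceptually, the X-Factor expresses relative failure rate as a function of inlet temperature, and a facility’s net exposure is a time-weighted average across its annual temperature profile. The sketch below shows only that mechanic; the hours and relative failure-rate values are placeholders, not ASHRAE’s published figures.

```python
# Sketch of a time-weighted X-Factor calculation. The relative failure-rate values
# below are illustrative placeholders, NOT ASHRAE's published table; the published
# data is commonly referenced to continuous operation at 68F (20C), where x = 1.0.
hours_per_bin = {20.0: 5000, 25.0: 2500, 30.0: 1260}     # hypothetical inlet-temp profile
x_factor_per_bin = {20.0: 1.00, 25.0: 1.15, 30.0: 1.30}  # placeholder relative failure rates

total_hours = sum(hours_per_bin.values())                # 8,760 hours in this example
net_x = sum(hours * x_factor_per_bin[t] for t, hours in hours_per_bin.items()) / total_hours
print(round(net_x, 3))  # weighted average relative failure rate for the year
```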

Next we will explore data center temperature guidelines. If you prefer, you can download the Data Center Frontier Special Report on Data Center Cooling Standards in PDF format from the Data Center Frontier White Paper Library, courtesy of Compass Data Centers. Click here for a copy of the report.

About the Author

Julius Neudorfer

Julius Neudorfer is the CTO and founder of North American Access Technologies, Inc. (NAAT). NAAT has been designing and implementing Data Center Infrastructure and related technology projects for over 25 years. He also developed and holds a patent for high-density cooling. Julius is a member of AFCOM, ASHRAE, IEEE and The Green Grid. Julius has written numerous articles and whitepapers for various IT and Data Center publications and has delivered seminars and webinars on data center power, cooling and energy efficiency.
