Understanding Data Center Temperature Guidelines

By Julius Neudorfer - March 15, 2016

See Data Center Frontier’s Rich Miller, Compass Datacenters’ Chris Crosby, DLB Associates’ Don Beaty and moderator Yevgeniy Sverdlik of Data Center Knowledge in a lively discussion about data center temperature guidelines and the controversy surrounding the proposed 90.4 standard, at Data Center World on March 17, 2016.


It is important to note that while closely followed by the industry, the TC9.9 data center temperature guidelines are only recommendations for the environmental operating ranges inside the data center; they are not a legal standard. ASHRAE also publishes many standards, such as 90.1 “Energy Standard for Buildings Except Low-Rise Residential Buildings,” which is used as a reference and has been adopted by many state and local building departments. Prior to 2010, the 90.1 standard virtually exempted data centers. The 2010 revision of 90.1 brought data centers into scope and mandated highly prescriptive methodologies for their cooling systems. This concerned many data center designers and operators, especially the Internet and social media companies that utilized a wide variety of leading-edge cooling systems designed to minimize cooling energy. These designs broke with traditional data center cooling practice and could potentially conflict with the prescriptive requirements of 90.1, limiting rapidly developing innovations in the more advanced data center designs. We will examine 90.1 and 90.4 in more detail in the Standards section.

Data Center Frontier Special Report on Data Center Cooling – Download it Now

This article is the second in a series on data center cooling taken from the Data Center Frontier Special Report on Data Center Cooling Standards (Getting Ready for Revisions to the ASHRAE Standard)

Power Usage Effectiveness (PUE)
While the original version of the PUE metric became better known, it was criticized by some because “power” (kW) is an instantaneous measurement at a point in time, and some facilities claimed very low PUEs based on a power measurement made on the coldest day of the year, which minimized cooling energy. In 2011 the metric was updated to PUE version 2, which focuses on annualized energy rather than power.

The revised 2011 version is also recognized by ASHRAE, as well as the US EPA and DOE; it became part of the basis of the Energy Star program and a globally accepted metric. It defined four PUE categories (PUE0–PUE3) and three specific points of measurement. Many data centers do not have energy meters at the specified points of measurement. To address this, PUE0 is still based on power, but it requires the highest power draw, typically during warmer weather (highest PUE), rather than a best-case, cold-weather measurement, in order to negate misleading PUE claims. The other three PUE categories are based on annualized energy (kWh). In particular, PUE Category 1 (PUE1) specifies measurement at the output of the UPS and is the most widely used point of measurement. The points of measurement for PUE2 (PDU output) and PUE3 (at the IT cabinet) represent more accurate measurements of the actual IT loads, but are harder and more expensive to implement.
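To make the annualized calculation concrete, here is a minimal Python sketch of a PUE Category 1 style computation, assuming twelve monthly kWh readings with the IT energy metered at the UPS output; the function and variable names are illustrative, not part of any Green Grid reference implementation.

def annualized_pue(facility_kwh_by_month, it_kwh_by_month):
    """Annualized PUE: total facility energy divided by total IT energy over
    the same period. For a Category 1 (PUE1) figure the IT energy is metered
    at the UPS output; PUE2/PUE3 would meter at the PDU or the IT cabinet."""
    total_facility = sum(facility_kwh_by_month)
    total_it = sum(it_kwh_by_month)
    if total_it <= 0:
        raise ValueError("IT energy must be positive")
    return total_facility / total_it

# Example with hypothetical readings: a result of 1.5 means the facility
# consumed 50% more energy over the year than the IT equipment alone.
print(round(annualized_pue([150_000] * 12, [100_000] * 12), 2))  # -> 1.5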

The Green Grid clearly stated that the PUE metric was not intended to compare data centers; its purpose was only to provide a method to baseline and track changes so that data centers could improve their own efficiency. The use of a mandatory PUE for compliance purposes in the 90.1-2013 building standard, and in the proposed ASHRAE 90.4 Data Center Energy Efficiency standard, conflicts with that intended purpose. This issue is also discussed in more detail in the section on ASHRAE standards.

You can see Data Center Frontier’s Rich Miller, Compass Datacenters’ Chris Crosby, DLB Associates’ Don Beaty and moderator Yevgeniy Sverdlik of Data Center Knowledge in a lively discussion about the “warming of the data center,” including how we got here, the impact of current acceptable and allowable temperature ranges on operations and performance, and the controversy surrounding the proposed 90.4 standard, at Data Center World on March 17, 2016.

Understanding Temperature References
In order to discuss evolving operating temperatures, it is important to examine the differences between dry bulb, wet bulb and dew point temperatures.

Dry Bulb
This is the type of thermometer most commonly referenced in the specification of IT equipment operating ranges. The readings of a “dry bulb” thermometer (analog or digital) are unaffected by the humidity level of the air.

Wet Bulb
In contrast, there is also a “wet bulb” thermometer, wherein the “bulb” (or sensing element) is covered with a water-saturated material such as a cotton wick, and a standardized velocity of air flows past it to cause evaporation, cooling the thermometer bulb (a device known as a sling psychrometer). The rate of evaporation, and the related cooling effect, is directly affected by the moisture content of the air. At 100% RH the air is saturated, the water in the wick will not evaporate, and the wet bulb reading will equal that of a “dry bulb” thermometer. At lower humidity levels, the drier the air, the faster the moisture in the wick evaporates, causing the “wet bulb” thermometer to read lower than a “dry bulb” thermometer. Wet bulb temperatures are commonly used as a reference for calculating a cooling unit’s latent cooling capacity (i.e., condensation; see Dew Point below), while “dry bulb” temperatures are used to specify sensible cooling capacity. Wet bulb temperatures are also used to project the performance of external heat rejection systems, such as evaporative cooling towers or adiabatic cooling systems. For non-evaporative systems, such as fluid coolers or refrigerant condensers, dry bulb temperatures are used instead.
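For readers who want to relate the two readings numerically, the sketch below uses Stull’s 2011 empirical approximation of wet-bulb temperature from dry-bulb temperature and relative humidity. It is only an approximation, valid roughly for 5–99% RH and -20 to 50 °C at sea-level pressure, not a replacement for a psychrometric chart.

import math

def wet_bulb_c(dry_bulb_c, rh_percent):
    """Approximate wet-bulb temperature (deg C) from dry-bulb temperature
    (deg C) and relative humidity (%), using Stull's (2011) empirical fit."""
    t, rh = dry_bulb_c, rh_percent
    return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t + rh) - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

# At 100% RH the wet-bulb reading converges on the dry-bulb reading; drier air
# drives it lower, e.g. roughly 13.7 deg C at 20 deg C dry bulb and 50% RH.
print(round(wet_bulb_c(20.0, 50.0), 1))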

Dew Point
Dew point temperature represents the point at which water vapor reaches saturation (100% RH). This temperature varies, and its effect is commonly seen when condensation forms on an object that is colder than the dew point. This is an obvious concern for IT equipment. When reviewing common IT equipment operating specifications, note that the humidity range is specified as “non-condensing.”

Dew point considerations are also important for addressing and minimizing latent heat loads on cooling systems, such as the typical CRAC/CRAH unit whose cooling coil operates below the dew point and therefore inherently dehumidifies while cooling (latent cooling, which requires energy). The humidification system must then use more energy to replace the moisture removed by the cooling coil. Newer cooling systems can avoid this double-sided waste of energy by implementing dew point control.
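As an illustration of the relationship, the following sketch estimates dew point from dry-bulb temperature and relative humidity using the widely cited Magnus approximation; the coefficients are common textbook values, and the result is an estimate rather than a substitute for psychrometric tables or manufacturer coil data.

import math

def dew_point_c(dry_bulb_c, rh_percent):
    """Approximate dew point (deg C) from dry-bulb temperature (deg C) and
    relative humidity (%), using the Magnus approximation."""
    b, c = 17.62, 243.12  # widely used Magnus coefficients for water vapor
    gamma = math.log(rh_percent / 100.0) + (b * dry_bulb_c) / (c + dry_bulb_c)
    return (c * gamma) / (b - gamma)

# Example: supply air at 24 deg C and 50% RH has a dew point near 13 deg C,
# so any coil surface colder than that will condense moisture (latent load).
print(round(dew_point_c(24.0, 50.0), 1))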

Recommended vs Allowable Temperatures
As of 2011, the “recommended” temperature range remained unchanged at 64.4-80.6°F (18-27°C). While the new A1-A2 “allowable” ranges surprised many IT and facility personnel, it was the upper limits of the A3 and A4 ranges that really shocked the industry.

While meant to provide more information and options, the new expanded “allowable” data center classes significantly complicated the decision process for the data center operator trying to balance the need to optimize efficiency, reduce total cost of ownership, address reliability issues, and improve performance.
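To make the recommended vs. allowable distinction concrete, the sketch below encodes the commonly cited 2011 dry-bulb ranges (recommended 18-27°C; allowable A1 15-32°C, A2 10-35°C, A3 5-40°C, A4 5-45°C) and classifies an IT inlet reading. These values are an assumption based on the published class tables and should be confirmed against the current edition of the Thermal Guidelines.

# Commonly cited 2011 TC9.9 dry-bulb ranges in deg C; confirm against the
# current edition of the Thermal Guidelines before relying on them.
RECOMMENDED = (18.0, 27.0)
ALLOWABLE = {"A1": (15.0, 32.0), "A2": (10.0, 35.0),
             "A3": (5.0, 40.0), "A4": (5.0, 45.0)}

def classify_inlet(temp_c, equipment_class="A1"):
    """Report whether an IT inlet temperature falls in the recommended range,
    only in the allowable range for its class, or out of range entirely."""
    lo, hi = ALLOWABLE[equipment_class]
    if RECOMMENDED[0] <= temp_c <= RECOMMENDED[1]:
        return "recommended"
    if lo <= temp_c <= hi:
        return "allowable ({} only)".format(equipment_class)
    return "out of range"

print(classify_inlet(29.5, "A2"))  # -> "allowable (A2 only)"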

Temperature Measurements – Room vs IT Inlet
As indicated in the summary of the Thermal Guidelines, the temperature of the “room” was originally used as the basis for measurement. However, “room” temperatures were never truly meaningful, since temperatures could vary greatly across different areas of the whitespace. Fortunately, in 2008 there was an important, but often overlooked, change in where the temperature was measured: the 2nd edition referenced the temperature of the “air entering IT equipment.” This highlighted the need to understand and address airflow management issues in response to higher IT equipment power densities, and it reinforced the recommendation of the Cold-Aisle / Hot-Aisle cabinet layout.

The 2012 guidelines also added recommendations for the locations for monitoring temperatures in the cold aisle. These covered placing sensors inside the face of the cabinet, as well as the position and number of sensors per cabinet (depending on the power density of the cabinets and IT equipment). While this provided better guidance on where to monitor temperatures, very few facility managers had temperature monitoring in the cold aisles, much less inside the racks. Moreover, it did not directly address how to control the intake temperatures of the IT hardware.
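As a sketch of how per-cabinet inlet monitoring might be rolled up in practice, the example below summarizes several hypothetical sensors per cabinet against a 27°C threshold; the sensor layout and threshold are illustrative assumptions, not a method prescribed by the guidelines.

def cabinet_inlet_summary(readings_c, limit_c=27.0):
    """Summarize IT inlet temperatures per cabinet. Each cabinet may carry
    several sensors (e.g. bottom/middle/top of the intake face); the hottest
    sensor is what matters for the equipment mounted behind it."""
    summary = {}
    for cabinet, temps in readings_c.items():
        worst = max(temps)
        summary[cabinet] = {
            "max_inlet_c": worst,
            "avg_inlet_c": sum(temps) / len(temps),
            "over_limit": worst > limit_c,
        }
    return summary

# Hypothetical three-sensor reading for one cabinet: cool at the bottom,
# warm near the top where recirculation over the rack is most likely.
print(cabinet_inlet_summary({"R1-C07": [21.5, 23.0, 28.2]}))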

ASHRAE vs NEBS Environmental Specifications
Although the ASHRAE Thermal Guidelines are well known in the data center industry, the telecommunications industry created environmental parameters long before TC9.9 released its first edition in 2004. The NEBS* environmental specifications provide a set of physical, environmental, and electrical requirements for the local exchanges of telephone system carriers. The NEBS specifications have evolved and been revised many times, and their ownership has changed as telecommunications companies reorganized. Nonetheless, NEBS and its predecessors effectively defined the standards that ensured reliable equipment operation of the US telephone system for over a hundred years.

In fact, NEBS is referenced in the ASHRAE Thermal Guidelines. The NEBS “recommended” temperature range of 64.4°F-80.6°F (18-27°C) existed well before the original TC9.9 guidelines, but it was not until 2008, in the 2nd edition, that the Thermal Guidelines were expanded to the same values. More interestingly, in 2011 the new TC9.9 A3 specifications matched the long-standing NEBS allowable temperature range of 41-104°F (5-40°C). However, it is the NEBS allowable humidity range that would shock most data center operators: 5%-85% RH. The related note in the ASHRAE Thermal Guidelines states: “Generally accepted telecom practice; the major regional service providers have shut down almost all humidification based on Telcordia research.”

Revised Low Humidity Ranges and Risk of Static Discharge
In 2015 TC9.9 completed a study of the risk of electrostatic discharge (ESD) and found that lower humidity did not significantly increase the risk of damage from ESD, as long as proper grounding was used when servicing IT equipment. It is expected that the 2016 edition of the Thermal Guidelines will expand the allowable low humidity level down to 8% RH. This will allow substantial energy savings by avoiding the need to run humidification systems to raise humidity unnecessarily.

*NEBS Footnote: NEBS (previously known as Network Equipment-Building System) is currently owned and maintained by Telcordia, which was formerly known as Bell Communications Research, Inc., or Bellcore. It was the telecommunications research and development company created as part of the break-up of the American Telephone and Telegraph Company (AT&T).

Next week we will explore controlling supply and IT air intake temperatures. If you prefer you can download the Data Center Frontier Special Report on Data Center Cooling Standards in PDF format from the Data Center Frontier White Paper Library courtesy of Compass Data Centers. Click here for a copy of the report.




Comments

  1. Biswajit De says

    May 13, 2020 at 2:54 pm

    Julius
    I recently read your article “Understanding Data Center Temperature Guidelines,” published in Data Center Frontier a few years back. It is extremely well authored and very useful for a reader trying to understand the historical background of the changing environmental parameters.
    Personally, I have been struggling to understand the logic behind the current humidity requirement and hope you can shed some light on this.
    Until 2008, the maximum allowable range was specified as 20-80% RH for a non-condensing environment. This, however, changed in 2011 when a dew point (DP) limit was introduced. ASHRAE never seems to have explained clearly why it was needed. All corrosion-related failures, including conductive anodic filament (CAF) growth, seem to be affected by relative humidity. It is not clear how DP impacts reliability. Today, many hyperscale data center operators run 100% direct evaporative cooling in many new regions and save CAPEX. Since evaporative cooling cannot control the dew point condition the way chiller-based cooling can, many of these data centers often see dew point excursions up to a maximum of 81°F but are still able to maintain 80% RH by mixing return air with outdoor air during free-cooling mode. Can you please help me understand the background of having a maximum dew point limit and why it is needed? As I said, many enterprise and hyperscale operators are already operating their data centers at a higher dew point limit without having any reliability issues or failures. Thanks
