Examining the Proposed ASHRAE 90.4 Standard

April 4, 2016
ASHRAE standard 90.4 may increase the cost of preparing documentation during data center design, and could limit innovation by hyperscale pioneers by having local building departments reject or limit their future design options. Learn more in this Data Center Frontier Special Report.

The stated purpose of the proposed ASHRAE 90.4P standard is “to establish the minimum energy efficiency requirements of Data Centers for: the design, construction, and a plan for operation and maintenance, and utilization of on-site or off-site renewable energy resources”. The scope covers “a) new Data Centers or portions thereof and their systems, b) new additions to Data Centers or portions thereof and their systems, and c) modifications to systems and equipment in existing Data Centers or portions thereof”. It also states that the provisions of this standard do not apply to: a. telephone exchange(s), b. essential facility(ies), or c. information technology equipment (ITE).


This article is the fourth in a series on data center cooling taken from the Data Center Frontier Special Report on Data Center Cooling Standards (Getting Ready for Revisions to the ASHRAE Standard).

Mandatory Compliance Through Legislation

The proposed 90.4 standard states that: “Compliance with this standard is voluntary until and unless a legal jurisdiction makes compliance mandatory through legislation”. As previously stated, one of the many data center industry concerns is that the “authorities having jurisdiction” (AHJs), which include many state and local building departments as well as the federal government, already use 90.1 as the basis for many types of new commercial and industrial buildings and for requirements to upgrade existing buildings. Since 90.4 is referenced in the revision to the 90.1 standard, these AHJs will also use 90.4 as their new data center standard. Moreover, 90.4 acts as a more detailed supplement to 90.1 and in fact specifically requires compliance with all other 90.1 building provisions. In addition to any design compromises and increased costs, it seems unlikely that many AHJs’ local building inspectors will fully understand complex data center specific issues, such as redundancy levels, or even be familiar with TC 9.9 or The Green Grid PUE metric. This could delay approvals or force unnecessary design changes, based simply on how the local building inspector interprets the 90.1 and 90.4 standards.

Mandatory Electrical and Mechanical Energy Compliance (90.4)

Electrical Loss Component (ELC):
The designers and builders of new data centers will need to demonstrate how their design will comply with the highly specific and intertwined mandatory compliance paths. This is defined by the “design electrical loss component (design ELC)” sections. These involve multiple tables of minimum electrical system energy efficiency requirements related to IT design load capacity at both 50 and 100 percent. In addition, the standard requires electrical system path losses and UPS efficiencies to be evaluated at 25, 50 and 100 percent load, at various levels of redundancy (N, N+1, 2N and 2N+1).

Moreover, it delineates three distinct segments in the power chain losses: the “incoming service segment, UPS segment and ITE distribution segment” (which extends down to the cable losses to the IT cabinets). Furthermore, it states that the design ELC “shall be calculated using the worst case parts of each segment of the power chain in order to demonstrate a minimum level of electrically efficient design.”
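To make that arithmetic concrete, the sketch below combines a worst-case efficiency for each of the three power-chain segments into a single loss figure. It is only an illustration of the concept: the segment efficiency values are assumptions, not figures from the standard, and the actual 90.4 tables, load points and redundancy cases are considerably more detailed than this.

```python
# Illustrative sketch only: the segment efficiencies below are assumed values,
# not numbers from the 90.4 tables, and the standard's own procedure governs
# how segments, load points and redundancy cases are actually combined.

def design_elc(segment_efficiencies):
    """Combine the worst-case efficiency of each power-chain segment into a
    single loss fraction (share of incoming power lost before the ITE)."""
    overall_efficiency = 1.0
    for effs in segment_efficiencies.values():
        overall_efficiency *= min(effs)  # worst case part of each segment
    return 1.0 - overall_efficiency

# Hypothetical efficiencies evaluated at 25/50/100 percent load for each segment.
segments = {
    "incoming_service": [0.995, 0.997, 0.998],
    "ups":              [0.93, 0.95, 0.96],
    "ite_distribution": [0.97, 0.98, 0.985],
}
print(f"Design ELC ~ {design_elc(segments):.3f}")  # roughly 0.10, i.e. ~10% losses
```

Multiplying segment efficiencies (rather than adding losses) is used here simply because each segment feeds the next in the power chain; the standard’s own tables define the exact calculation.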

Mandatory PUE
The second revision of the proposed 90.4 standard explicitly listed mandated maximum PUE values ranging from 1.30 to 1.61, each tied to one of the 18 ASHRAE climate zones. However, these PUE values seemed prohibitively low; many felt they would substantially increase initial design and build costs, and they generated a great deal of industry concern, comment and protest. Moreover, the standard did not take into account the local cost of energy, its availability and fuel sources, or water usage and any potential restrictions or shortages of water, which recently became an issue during California’s ongoing drought. The PUE reference was removed in the next revision; however, the issues of local resources remained unaddressed.
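For reference, The Green Grid defines PUE as total facility energy divided by IT equipment energy, so a mandated maximum PUE is effectively a cap on how much overhead (cooling, power-chain losses, lighting) a design may add on top of the IT load. The short sketch below shows how such a check would work; the energy figures and the zone limit are hypothetical.

```python
# Hypothetical annual energy figures (kWh), for illustration only.
it_energy = 8_000_000               # energy delivered to the IT equipment
total_facility_energy = 10_400_000  # IT plus cooling, power-chain losses, lighting, etc.

pue = total_facility_energy / it_energy
print(f"PUE = {pue:.2f}")           # 1.30

# Checking against a mandated maximum for a given climate zone (limit assumed here):
max_pue_for_zone = 1.30
print("compliant" if pue <= max_pue_for_zone else "non-compliant")
```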

Mechanical Loss Component (MLC):
The 3rd revision removed all references to The Green Grid PUE requirements; however, it contained highly detailed compliance requirements for the minimum energy efficiency of the mechanical cooling systems, again listed specifically for each of the 18 climate zones. The 3rd revision has another cooling performance table (again for each of the 18 climate zones) called the “design mechanical load component (design MLC)”, defined as the sum of all cooling, fan, pump, and heat rejection design power divided by the data center ITE design power (evaluated at 100% and 50% load).
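The MLC arithmetic itself is simple division; the sketch below applies the quoted definition to a set of assumed design powers. The values are hypothetical, and the allowable maximums come from the 90.4 tables for the applicable climate zone, which are not reproduced here.

```python
# Hypothetical design power values in kW; the MLC limits themselves come from
# the 90.4 tables for the applicable climate zone.
cooling_kw = 220.0        # chillers / DX units
fan_kw = 60.0             # CRAH/CRAC and air-handler fans
pump_kw = 25.0            # chilled-water and condenser-water pumps
heat_rejection_kw = 45.0  # cooling towers / dry coolers

ite_design_kw = 1000.0    # data center ITE design power

design_mlc = (cooling_kw + fan_kw + pump_kw + heat_rejection_kw) / ite_design_kw
print(f"Design MLC = {design_mlc:.2f}")  # 0.35 at 100% ITE load
```

The same calculation would be repeated at 50% ITE design power, since the standard evaluates the MLC at both load points.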

One of the other, and perhaps more significant, issues is that the cooling efficiency calculations would seem to preclude the effective use of the “allowable” ranges to meet the mandatory and prescriptive requirements: “The calculated rack inlet temperature and dew point must be within Thermal Guidelines for Data Processing Environments recommended thermal envelope for more than 8460 of the hours per year.” Since 8,460 of the 8,760 hours in a year leaves only about 300 hours outside the recommended envelope, even if the data center design is intended for newer, higher-temperature IT equipment (such as A2, A3 or A4), it would unnecessarily need to be designed and constructed for the lower recommended range, which could substantially increase cooling system costs.
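Demonstrating that requirement amounts to counting, from an hourly model of rack inlet conditions, how many hours fall inside the recommended envelope. The sketch below shows that check; the 18–27°C dry-bulb range and 15°C maximum dew point used here are the commonly cited recommended limits and should be confirmed against the current Thermal Guidelines.

```python
# Minimal compliance-hours check. The 18-27°C dry-bulb range and 15°C maximum
# dew point are assumed recommended-envelope limits; confirm the exact envelope
# against the current Thermal Guidelines before relying on this.

HOURS_REQUIRED = 8460  # hours/year the rack inlet must stay inside the recommended envelope

def hours_in_recommended_envelope(hourly_conditions):
    """hourly_conditions: iterable of (dry_bulb_c, dew_point_c) tuples, one per hour."""
    return sum(
        1 for dry_bulb, dew_point in hourly_conditions
        if 18.0 <= dry_bulb <= 27.0 and dew_point <= 15.0
    )

# Example with synthetic data: 8,500 compliant hours and 260 excursion hours.
synthetic_year = [(24.0, 10.0)] * 8500 + [(30.0, 16.0)] * 260
compliant_hours = hours_in_recommended_envelope(synthetic_year)
print(compliant_hours, "hours in envelope;",
      "meets" if compliant_hours > HOURS_REQUIRED else "fails", "the 8460-hour threshold")
```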

A3 and A4 Servers

While A3 and A4 servers did not exist in 2011 when the expanded ranges were originally introduced, as of 2015 there were several A4-rated servers on the market whose manufacturers’ specifications state that those models can “run continuously at 113°F (45°C) —with no impact on reliability”. This new generation of A3 and A4 hardware overcomes the early restrictions imposed by some manufacturers limiting the exposure time to higher temperatures.

Concurrently with the release of the 3rd draft of 90.4, ASHRAE also released the proposed Addendum “cz” to 90.1-2013 for public review, which removes all references to mandatory PUE compliance. The addendum provides a clear-cut reference transferring all data center energy efficiency requirements to 90.4, which should reduce potential conflict and confusion (other aspects of the building would still need to comply with local building codes). The goal is to publish the final version of 90.4 in the fall of 2016.

Nonetheless, while this was a significant issue, why should a data center still be limited to the “recommended” temperature and dew point by designing a system to meet the mandatory cooling system energy efficiency requirements? It should be up to the operators whether, and for how long, they intend to operate in the expanded “allowable” ranges. This is especially true now that virtually all commodity IT servers can operate within the A2 range (50-95°F). Moreover, solid state disks (SSDs) now have a much wider temperature range of up to 32-170°F. While originally expensive, SSDs continue to come down in price while matching spinning-disk storage capacity, and they are substantially faster, delivering increased system throughput and improving overall server performance and energy efficiency. They have become more common in high-performance servers and, more recently, a cost-effective option in commodity servers, which will also eventually result in greater thermal tolerance as servers are refreshed.

So with all these additional factors defined in the ASHRAE Thermal Guidelines and the proposed 90.4 standard, many of which overlap and potentially conflict with each other, how should facility designers and managers decide on the “optimum” or “compliant” operating environmental conditions in the data center?

Next week we will wrap up this Special Report and share the bottom line on this evolution of data center cooling. If you prefer, you can download the Data Center Frontier Special Report on Data Center Cooling Standards in PDF format from the Data Center Frontier White Paper Library, courtesy of Compass Data Centers. Click here for a copy of the report.

About the Author

Julius Neudorfer

Julius Neudorfer is the CTO and founder of North American Access Technologies, Inc. (NAAT). NAAT has been designing and implementing Data Center Infrastructure and related technology projects for over 25 years. He also developed and holds a patent for high-density cooling. Julius is a member of AFCOM, ASHRAE, IEEE and The Green Grid. Julius has written numerous articles and whitepapers for various IT and Data Center publications and has delivered seminars and webinars on data center power, cooling and energy efficiency.
