The Data Center Frontier Executive Roundtable features insights from executives with extensive experience in the data center industry. Here’s a look at the insights from Kevin Facinelli of Nortek Air Solutions.
Kevin Facinelli is Group President for Nortek Data Center Cooling, a segment of Nortek, St. Louis. Nortek Data Center Cooling offers efficient, sustainable cooling solutions to fit the needs of any data center – from chip to plant. Facinelli is an operations and technology specialist with experience in IoT, supply chain optimization, industrial technology and other disciplines. He is the former executive vice president of operations for Daikin Applied Americas, Minneapolis, Minn. Prior to Daikin, Facinelli was vice president of manufacturing and technology at Johnson Controls Inc., Milwaukee, Wis.
Here’s the full text of Kevin Facinelli’s insights from our Executive Roundtable:
Data Center Frontier: More providers are targeting the hyperscale computing market, and more customers appear to be “graduating” to super-sized requirements. How is this market changing, and what are the keys to success in serving the hyperscale sector in 2020 and beyond?
Kevin Facinelli: The data center movement toward 30 MW facilities, and more recently toward super hyperscale 100 MW facilities, was once associated with just large social media companies’ captive assets, but now we see colocation operators and other parts of the industry increasing to these sizes, too.
Consequently, these operators are becoming increasingly concerned with the operational costs they typically pass on to tenants. As colocation competitiveness increases, data center tenants are choosing operators that employ the most sustainable initiatives to keep costs down. So, sustainability is as important as uptime and efficiency. Sustainability increasingly comes into play as tenants plan future expansions within the facility, especially when an expansion entails an entire floor or the majority of a building.
For the last few years, colocation providers have been trying to accommodate the industry’s exponential growth and demand with buildouts within their existing facilities. Now, PUE and WUE are serious sustainability issues they want to improve during retrofits and new construction.
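For readers less familiar with those metrics, PUE and WUE are simple ratios of facility-level consumption to IT energy. The short sketch below, using assumed annual meter readings purely for illustration, shows how they are typically calculated.

```python
# Illustrative PUE/WUE calculation with assumed annual meter readings.
# PUE = total facility energy / IT equipment energy  (dimensionless, >= 1.0)
# WUE = annual site water use / IT equipment energy  (liters per kWh)

it_energy_kwh = 8_760_000               # assumed: 1 MW IT load running all year
total_facility_energy_kwh = 11_400_000  # assumed: adds cooling, power losses, lighting
site_water_liters = 15_000_000          # assumed: evaporative cooling make-up water

pue = total_facility_energy_kwh / it_energy_kwh
wue = site_water_liters / it_energy_kwh

print(f"PUE: {pue:.2f}")           # ~1.30 for these assumed readings
print(f"WUE: {wue:.2f} L/kWh")     # ~1.71 for these assumed readings
```

The closer PUE gets to 1.0 and WUE gets to zero, the more of the site’s energy and water is going to the IT load itself rather than to overhead.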
For new facilities, geographical positioning in terms of water and power availability, and climate is critical. Geographic positioning also affects choices for interconnection to intra-company data centers to minimize latency and enable the redistribution of IT loads to nearby locations during peak periods.
Geographical choices and operating costs are key to sustainability. Therefore, many data center operators are looking for liquid cooling equipment that can operate with the lowest PUE and WUE and, if possible, take advantage of the local climate. The range of viable geographic locations becomes significantly broader with liquid cooling equipment that offers a variety of operational modes, such as evaporative, adiabatic, super evaporative and others. New technological advances in liquid cooling equipment can automatically switch to the most sustainable operational mode, depending on ambient temperature and humidity variances throughout the hour, day, week or year.
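As a rough illustration of that kind of automatic mode switching, the sketch below picks an operating mode from outdoor dry-bulb and wet-bulb readings. The mode names, thresholds and setpoint are hypothetical assumptions for illustration, not Nortek’s actual control logic, which reacts to far more programmed scenarios.

```python
# Hypothetical mode-selection logic for an indirect evaporative cooling unit.
# Thresholds and mode names are illustrative assumptions only.

def select_cooling_mode(dry_bulb_c: float, wet_bulb_c: float,
                        supply_setpoint_c: float = 24.0) -> str:
    """Pick the most water- and energy-efficient mode that can meet the setpoint."""
    if dry_bulb_c <= supply_setpoint_c - 4:
        return "dry (air-to-air economizer only)"   # no water use at all
    if wet_bulb_c <= supply_setpoint_c - 6:
        return "adiabatic / evaporative assist"     # some water, no compressors
    if wet_bulb_c <= supply_setpoint_c - 2:
        return "full evaporative"                   # more water, still chiller-free
    return "evaporative + trim chilling"            # worst-case hours only

print(select_cooling_mode(dry_bulb_c=18.0, wet_bulb_c=14.0))  # -> dry mode
print(select_cooling_mode(dry_bulb_c=35.0, wet_bulb_c=27.0))  # -> trim chilling
```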
Data Center Frontier: Artificial intelligence is bringing more powerful chips into the data center. What’s your take on the present and future of rack density, and how it may influence data center equipment and design?
Kevin Facinelli: Industry statistics indicate average rack density has increased from 7.2 kW in 2018 to 8.2 kW in 2019. Averages can be misleading, however, because AI and computationally rigorous applications can range from 15 to 30 kW.
Data center operators have to recognize which technologies offer future relevance. Rack densities have climbed exponentially since we entered the liquid cooling market 25 years ago, and our liquid cooling equipment is now in 10 percent of the top 100 supercomputers listed in the International Supercomputer Conference’s Top500 and Green500. More companies are entering the AI and computational market, but in addition to supercomputers, they’re also using open hardware or equipment with higher computational abilities.
Therefore, colocation providers are now at a crossroads between two growth strategies. Some social media companies continue with 5 kW rack densities, but they’re distributing the increasing loads across a larger infrastructure footprint as they grow. The other strategy is driven from the processor level, where high-density chip clusters provide the computational density required for training AI models. This strategy generates significantly more heat that must be rejected.
Air (hot aisle/cold aisle) and liquid can both provide adequate cooling for a floor of low- and medium-density racks, especially since the trend is toward higher ambient cooling temperatures without affecting uptime. However, a problem arises when high-density 15 to 30 kW server racks are mixed in with mostly low- and medium-density racks in a data hall. Should the data hall’s ambient temperature be lowered to accommodate the higher heat generation, at a significant operational cost penalty? Or would direct liquid cooling, such as coolant distribution units (CDUs) or rear door heat exchangers, efficiently accommodate the high-density racks while leaving the ambient temperature higher for the low- and medium-density equipment?
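To see why direct liquid cooling is attractive for those outlier racks, the back-of-the-envelope sketch below compares the airflow versus the water flow needed to carry away 30 kW at a 10°C temperature rise, using standard fluid properties. The rack load and temperature rise are assumed values for illustration only.

```python
# Back-of-the-envelope comparison: moving 30 kW of rack heat with air vs. water.
# Uses Q = m_dot * cp * dT with standard fluid properties; the 30 kW load and
# 10 C rise are illustrative assumptions.

RACK_HEAT_KW = 30.0
DELTA_T_K = 10.0

# Approximate fluid properties near typical data hall conditions.
AIR_CP_KJ_PER_KG_K = 1.006
AIR_DENSITY_KG_PER_M3 = 1.2
WATER_CP_KJ_PER_KG_K = 4.18
WATER_DENSITY_KG_PER_M3 = 998.0

air_mass_flow = RACK_HEAT_KW / (AIR_CP_KJ_PER_KG_K * DELTA_T_K)      # kg/s
water_mass_flow = RACK_HEAT_KW / (WATER_CP_KJ_PER_KG_K * DELTA_T_K)  # kg/s

air_m3_per_hr = air_mass_flow / AIR_DENSITY_KG_PER_M3 * 3600
water_l_per_min = water_mass_flow / WATER_DENSITY_KG_PER_M3 * 1000 * 60

print(f"Air:   ~{air_m3_per_hr:,.0f} m^3/h through the rack")     # ~8,900 m^3/h
print(f"Water: ~{water_l_per_min:.0f} L/min through a CDU loop")  # ~43 L/min
```

Roughly 9,000 cubic meters of air per hour versus about 43 liters of water per minute for the same heat load is the intuition behind serving high-density racks with rear door heat exchangers or CDU loops while the rest of the hall stays on air.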
While there are plenty of suppliers of equipment such as CDUs, data center operators should instead look for turnkey solution providers that offer the CDU; the plant that supplies the CDU and other distribution options; the additional materials for complete integration into the data hall; and the contracting, servicing, warranties and other services that lead to single-source responsibility.
Data Center Frontier: What do you see as the most important trends in data center connectivity and interconnection, and how have they been impacted by the COVID-19 pandemic?
Kevin Facinelli: COVID-19 has brought drastic business changes almost overnight. A year ago I might have had one team meeting per week via Internet video conferencing. Now it’s several times daily. The increase in connected team meetings, remote work and other effects of the pandemic across the globe has strained network and high-speed fiber optic capacity. As network data rates increase, similar to server data rates, data transmission generates heat density in the server area that also needs to be addressed now. Social media companies that were transmitting 100-gig are now increasing exponentially to 200, 400 and 800-gig through the same size fiber optic line. These network speed increases are generating significantly more heat density in the network area that will dictate alternative methods of heat extraction.
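To put rough numbers on that, the sketch below estimates how much optics heat concentrates on a single switch faceplate as pluggable modules move from 100G to 800G. The per-module wattages and the port count are approximate, assumed ballpark figures, not vendor specifications.

```python
# Rough faceplate heat estimate as pluggable optic speeds increase.
# Per-module power draws are approximate ballpark assumptions, not vendor specs.

PORTS_PER_SWITCH = 32  # assumed ports on one 1RU switch faceplate

approx_module_watts = {
    "100G": 4.5,
    "200G": 6.0,
    "400G": 12.0,
    "800G": 20.0,
}

for speed, watts in approx_module_watts.items():
    faceplate_heat_w = watts * PORTS_PER_SWITCH
    print(f"{speed}: ~{faceplate_heat_w:>5.0f} W of optics heat per 32-port switch")
```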
Furthermore, COVID-19’s unexpected emergence is causing equipment to be driven harder; therefore, sustainability and operational costs have become more important than ever. Equipment is also producing more heat than ever before. So, equally important is the trend of data center waste heat recovery and what type of equipment best reuses that heat with minimal thermal losses. Liquids transfer energy 10 times more efficiently than air; therefore, certain types of liquid cooling equipment are prime candidates for tapping this trend.
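As a simple illustration of the recovery opportunity, the arithmetic below estimates the annual thermal energy a hypothetical 1 MW IT load could make available through a warm-water loop. The load, capture fraction and loop temperature are assumptions for illustration, not measured figures.

```python
# Illustrative waste-heat-recovery estimate. All figures are assumptions.

it_load_mw = 1.0           # assumed continuous IT load; nearly all of it becomes heat
hours_per_year = 8760
capture_fraction = 0.7     # assumed share of heat captured by a warm-water loop
loop_supply_temp_c = 45.0  # assumed return-water temperature usable for reheat

annual_heat_mwh = it_load_mw * hours_per_year
recoverable_mwh = annual_heat_mwh * capture_fraction

print(f"Heat rejected per year: ~{annual_heat_mwh:,.0f} MWh (thermal)")
print(f"Recoverable at ~{loop_supply_temp_c:.0f} C: ~{recoverable_mwh:,.0f} MWh (thermal)")
```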
Data Center Frontier: The COVID-19 pandemic is prompting companies to pursue automation to limit health risks. What are the most promising innovations the data center industry can adopt to meet the challenges posed by the pandemic?
Kevin Facinelli: COVID-19 has also put a new emphasis on remote work and automation. Limited travel and fewer onsite visits by data center administrators have given new value to cooling equipment that’s fully automated. Automatically switching cooling modes, especially with indirect evaporative equipment, can save significant data center operating costs in both water and power. This type of equipment’s built-in control strategies react automatically within minutes to hundreds of programmed outdoor ambient scenarios to provide optimum energy efficiency. These systems also cool the majority of the time without the use of traditional, energy-intensive chillers. However, trim chilling, if available, can automatically be employed for the few days annually when worst-case outdoor conditions occur. This automation can be programmed for optimum sustainability and synergy with building management systems.
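As a rough illustration of how rarely trim chilling needs to engage, the sketch below runs a simple bin analysis over a synthetic year of hourly wet-bulb temperatures. Both the weather profile and the changeover threshold are assumptions, not a product specification or a real site’s data.

```python
# Rough annual "bin analysis": count the hours a synthetic wet-bulb profile
# would let an indirect evaporative unit cool without its trim chiller.
# Both the weather distribution and the 22 C threshold are assumptions.
import random

random.seed(0)
hours_per_year = 8760
# Synthetic stand-in for a site's hourly wet-bulb readings (deg C).
hourly_wet_bulb = [random.gauss(mu=10.0, sigma=5.0) for _ in range(hours_per_year)]

TRIM_THRESHOLD_WET_BULB_C = 22.0  # assumed point where evaporative cooling alone falls short

trim_hours = sum(1 for wb in hourly_wet_bulb if wb > TRIM_THRESHOLD_WET_BULB_C)

print(f"Hours handled by evaporative/adiabatic modes: {hours_per_year - trim_hours}")
print(f"Hours needing trim chilling: {trim_hours} (~{trim_hours / 24:.0f} days)")
```

With this particular synthetic climate the trim chiller runs only a handful of days per year; an actual analysis would use the site’s own hourly weather data and the equipment’s real changeover points.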
Therefore, at a time like today when manual operator intervention may not be as convenient, it’s fortunate that cooling equipment automation is state-of-the-art. An operator might mistakenly believe that the BMS can be programmed to provide this type of pinpoint control, but no two buildings are alike. A data center cooling system with a sophisticated onboard control strategy can successfully navigate the tremendous number of routine environmental variances directed toward the operator, simplifying or filtering them down to just the critical tasks. This helps simplify data center operations that have become more complicated with COVID-19.