Today we continue our Data Center Executive Roundtable, a quarterly feature showcasing the insights of thought leaders on the state of the data center industry, and where it is headed. In today’s discussion, our panel of experienced data center executives weighs in on the state of data center metrics. Our panelists include Jeff Klaus of Intel, Amber Caramella from Infrastructure Masons and Netrality, Steve Dick of Emcor Enclosures and Sean Farney from KOHLER Power Systems.
The conversation is moderated by Rich Miller, the founder and editor of Data Center Frontier. Each day this week we will present a Q&A with these executives on one of our key topics. Here’s today’s discussion:
Data Center Frontier: Our industry is all about the data. But how good are our metrics? What are the strengths and opportunities in how the data center industry can measure and manage effectiveness? What’s going well and what’s missing?
Steve Dick, Emcor Enclosures: Data center metrics used to be relatively straightforward. Key metrics like PUE (Power Usage Effectiveness), CCF (Cooling Capacity Factor), server power efficiency, and ASHRAE temperature and humidity standards dominated in the early years. Data Center Infrastructure Management (DCIM) has since added real-time capacity by resource, energy cost, rack/aisle/floorspace capacity, peak load per rack/cabinet, hot spot identification and duration, power resiliency, physical security, building infrastructure status, and others.
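For readers less familiar with the two headline metrics above, here is a minimal sketch of how they are computed. The sample power figures are invented for illustration, and the CCF formula is shown in a simplified form (cooling capacity relative to IT load); real-world CCF calculations often include additional adjustment factors.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by
    IT equipment power. A PUE of 1.0 would mean every watt entering
    the facility reaches the IT equipment."""
    return total_facility_kw / it_equipment_kw


def ccf(running_cooling_kw: float, it_critical_load_kw: float) -> float:
    """Cooling Capacity Factor (simplified): running cooling capacity
    relative to the IT critical load it serves. Values well above 1.0
    suggest stranded or over-provisioned cooling."""
    return running_cooling_kw / it_critical_load_kw


# Hypothetical facility: 1,500 kW total draw, 1,000 kW of IT load,
# 1,200 kW of running cooling capacity.
print(round(pue(1500.0, 1000.0), 2))  # 1.5
print(round(ccf(1200.0, 1000.0), 2))  # 1.2
```

In this sketch, a PUE of 1.5 means the facility draws 50% more power than the IT equipment alone consumes, with the overhead going to cooling, power distribution losses, lighting, and so on.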
As cloud computing has become commonplace, a new discipline of Hybrid Digital Infrastructure Management (HDIM) has emerged, focusing on workload placement, talent management, geodiversity, and business-relevant insights leveraging big data and artificial intelligence. As more businesses and industries invest in digital business processes, IT is challenged with assembling and operating digital infrastructure based on business need, an ecosystem of diverse edge/cloud/traditional computing providers, and a new IT talent mindset focused on agility and flexibility. Today’s data center management and metrics need to be focused on the quality of customer experience, automating the provisioning and configuration of resources, and analyzing the performance of those resources — wherever they are.
Jeff Klaus, Intel: Getting granular data these days is essential to get in front of issues quickly.
That can still be a challenge because of complexity and the different areas of knowledge each operations group has, but it’s really important to get information on the utilization and health of the assets deployed in the data center, as well as their demand on DC infrastructure.
We are seeing a convergence of thought on that perspective as well as a retooling of monitoring and orchestration tools to mine data from the IT device all the way up to the room and DC level. If you can’t see it, you can’t measure it!
Amber Caramella, Infrastructure Masons: Data acquisition, management, and implementation have a multi-faceted impact on the data center industry. More visibility into customers’ environments enables data center operators to more effectively design solutions that meet end-user needs and to plan for future growth.
Monitoring of customer power utilization at the rack level provides a clear picture of what is happening in their space over time. Air flow and temperature monitoring at the data hall level, as opposed to the individual customer level, is key to understanding how swings in power utilization affect cooling in the space. Insights derived from this data set help operators propose new ways to improve the efficiency of existing customers’ spaces, and design future customer spaces.
Data can also be leveraged to help data center operators enhance their sustainability efforts through reduced energy usage and greenhouse gas (GHG) emissions. Analytics obtained through AI provide visibility that can be used to make real-time, data-driven adjustments to power utilization and cooling delivery – reducing a space’s energy consumption, water waste, and greenhouse gas emissions. In addition, decreasing the load on the cooling equipment prolongs its lifespan and reduces the need to refresh infrastructure that has reached the end of its lifecycle.
The development and maintenance of highly effective, reliable, secure, and innovative systems can facilitate the collection, storage, security, and integrity of electronic data while ensuring appropriate access amongst collaborative, cross-departmental functions.
When organized as a unified source of truth within a technology ecosystem, data can optimize and standardize a service model. Software tools such as ERP, DCIM, and CRM allow providers to gather information on usage, track performance and availability, and provide a platform for interacting with customers, enhancing the customer experience. DCIM solutions can also standardize the monitoring of cooling, customer power utilization, customer inventories, and other contracted services, streamlining processes and procedures.
Sean Farney, Kohler Power: Well, Kermit was right: “it’s not easy being green.” And because we don’t have a unified way to measure multivariate sustainability across the entire data center consumption ecosystem (power, water, and emissions), it’s not easy to manage, measure, and continuously improve on industry sustainability standards. This is an opportunity for us to be proactive before these pressures become acute. We have a great story to tell and need a metric, like Christian Belady’s PUE, to frame it. A more in-depth discussion can be found here: PUE to GUE: Metrics Must Keep Pace With Data Center Sustainability (datacenterfrontier.com).
Keep pace with the fast-moving world of data centers and cloud computing by following us on Twitter and Facebook, connecting with DCF on LinkedIn, and signing up for our weekly newsletter using the form below: