For the Balance Sheet and the Sake of the Planet, It’s Time Data Centers Reduce Power Consumption By Improving Utilization
In this edition of Voices of the Industry, Eric Xie, Software Application Engineer at Intel® Data Center Management Solutions, shares insights on how data centers can reduce power consumption to benefit both the bottom line and the planet.
In large-scale data centers populated with thousands of servers, an enormous amount of power is consumed every minute of every hour, costing hundreds of thousands of dollars every month—and millions every year.
It goes without saying (but still bears mentioning) that this is a major point of concern for IT and infrastructure management teams, as well as for the CFO overseeing the balance sheet. Compounding matters, Gartner estimates that ongoing power costs are rising at least 10 percent per year due to increases in cost per kilowatt-hour (kWh).
Some of the world’s largest data centers can each contain many tens of thousands of IT devices and require more than 100 megawatts (MW) of power capacity. A data center located in Langfang, China, 25 minutes from Beijing, occupies more than six million square feet of space, the equivalent of 110 football fields.
On a global level, data centers consume approximately 200 terawatt-hours (TWh) of electricity, or nearly one percent of global electricity demand, while contributing 0.3 percent of all global CO2 emissions, according to the International Energy Agency. While one percent might not seem like much, data center energy usage in some countries could rise to 15 to 30 percent of total domestic electricity consumption by the end of the decade, according to predictive models by Eric Masanet and Nuoa Lei of Northwestern University.
And yes, with climate change now top-of-mind for most enterprise CEOs, today’s data center managers and IT leaders must think globally while acting locally, at least until Jeff Bezos can successfully colonize the moon, and especially on behalf of those of us who would prefer to remain earthside.
High Density Compute Brings on the Heat
Since 40 percent of a data center’s consumed power is typically spent on cooling while 50 percent is attributable to servers and network devices, increasing the set-point temperature of server rooms is one way to save power. By using a data center management solution’s cooling analysis function, for example, IT staff can lower cooling costs by safely raising the temperature of the room, thereby improving power usage effectiveness (PUE) and energy efficiency, while continuously monitoring hardware for temperature issues.
This capability can significantly lower annual cooling costs across an organization’s entire data center footprint. For example, one global cybersecurity company was able to raise the temperatures in its server rooms by 3°C, based on the historical temperature readings of each of their servers, making possible a 25 percent reduction in cooling costs for the year.
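As a rough illustration of how a cooling reduction flows through to power usage effectiveness, the sketch below computes PUE (total facility power divided by IT equipment power) before and after a hypothetical 25 percent cut in cooling power. The function and the kilowatt figures are illustrative assumptions based on the article's approximate split (50 percent IT load, 40 percent cooling), not Intel DCM functionality or measured data.

```python
# Illustrative sketch, not the Intel DCM API. Figures assume the
# article's rough split: ~50% IT load, ~40% cooling, ~10% other overhead.

def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """PUE = total facility power / IT equipment power (lower is better)."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# Baseline: a hypothetical 1,000 kW facility.
baseline = pue(it_kw=500, cooling_kw=400, other_kw=100)

# After a hypothetical 25% reduction in cooling power from raising
# server-room set-point temperatures.
improved = pue(it_kw=500, cooling_kw=400 * 0.75, other_kw=100)

print(f"PUE before: {baseline:.2f}, after: {improved:.2f}")
# PUE before: 2.00, after: 1.80
```

The point of the arithmetic is simply that cooling savings show up directly in PUE, since the IT load in the denominator stays the same while total facility power falls.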
Today’s high-density computing environments present even greater liabilities because of the heat produced by continuous processing. If a data center manager lacks visibility into actual device power consumption, IT staff may overprovision cooling and drive energy usage far beyond what is needed to maintain safe thermal margins.
It might surprise some, but even the cloud, commonly touted for cost efficiency as its raison d’être, is replete with operational inefficiencies.
Consider a recent Granulate survey of senior IT professionals at companies spending nearly $1 million annually on cloud computing: for more than half of respondents, CPU utilization sits at only 20 to 40 percent, even though they listed improving performance as a top priority.
As Asaf Ezra, Co-Founder and CEO of Granulate has written, “It is these very underutilized, partially idle servers that continue to consume substantial amounts of energy, imposing unnecessary costs on businesses and contributing to tens if not hundreds of millions of tons of CO2 emissions.”
Fortunately, there are data center management solutions that, along with providing visibility on power, thermal consumption, and server health, can improve utilization by furnishing real-time data for better decision-making, which in turn can reduce power consumption and associated costs while mitigating environmental impact.
As Demand Continues to Surge, Good Data Means Everything
Even before the public health crisis threw the digital transformation of business, government and society into overdrive, the demand for data center services had been increasing as the number of internet users continued to grow worldwide. This, again, gives rise to concerns about growing data center energy use, and cost.
In the face of such demand, IT managers often complain that business teams ask them to set up new servers for new projects instead of repurposing existing ones. But to be fair to both parties, IT teams often lack the data to make the case for consolidating new applications onto existing servers: without access to the server OS, they cannot glean any utilization data. Especially when demand for data center compute surges, access to good data means everything.
In large data centers, the lack of sufficient workload performance monitoring typically leads IT administrators to purchase more hardware. But with data center management solutions, data center operators are able to quickly detect and analyze underutilized systems by monitoring their CPU utilization and power consumption over time.
Data center management solutions can offer a way to monitor server utilization without OS-level access by extracting utilization data from each server’s BMC (Baseboard Management Controller). This is referred to as out-of-band management (OOBM). IT management teams can use such solutions to access real-time CPU, memory, and I/O utilization data and easily find underutilized servers. They can also monitor actual power consumption, and power consumption trends, by the hour, day, week, and so on.
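The underutilization analysis described above can be sketched in a few lines: given utilization samples collected out-of-band for each server, flag those whose average CPU utilization over the monitoring window falls below a threshold. The function name, the sample data, and the 20 percent threshold are all hypothetical assumptions for illustration; they are not part of Intel DCM or any BMC interface.

```python
# Illustrative sketch, not the Intel DCM API. Assumes CPU utilization
# samples (%) have already been collected out-of-band via each
# server's BMC; names and thresholds are hypothetical.

from statistics import mean

def underutilized(samples_by_server: dict[str, list[float]],
                  cpu_threshold: float = 20.0) -> list[str]:
    """Return servers whose average CPU utilization (%) over the
    monitoring window is below the threshold."""
    return [name for name, samples in samples_by_server.items()
            if samples and mean(samples) < cpu_threshold]

# Hourly CPU utilization samples over one monitoring window.
samples = {
    "rack01-node03": [5, 8, 12, 6, 9],     # mostly idle -> repurpose candidate
    "rack02-node17": [65, 72, 58, 80, 70], # busy -> leave as-is
}
print(underutilized(samples))  # ['rack01-node03']
```

In practice a real analysis would also weigh power draw and memory or I/O trends over longer windows, but the core idea is the same: sustained low utilization, observed without touching the OS, identifies consolidation candidates.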
This functionality enables IT management teams to give business teams the real data they need to repurpose existing servers instead of deploying new hardware. It also lets the IT team report underutilized systems to the business team, so that servers can be powered off once the projects they served no longer require them.
Especially in large data centers with thousands of servers steadfastly providing the digital services we’ve all come to rely on, every kilowatt-hour saved through improved utilization matters, both for the sake of the corporate balance sheet and for the sake of our planet.
Eric Xie, Software Application Engineer, Intel® Data Center Management Solutions. Intel® Data Center Manager (Intel® DCM) is a software solution that collects and analyzes the real-time health, power, and thermals of a variety of devices in data centers, helping you improve efficiency and uptime.