The Data Center Frontier Executive Roundtable features insights from industry executives with lengthy experience in the data center industry. Here’s a look at the insights from Jeff Klaus of Intel.
As General Manager of Intel Data Center Software Solutions, Jeff Klaus leads a global team that designs, builds, sells and supports Intel DCM, the only software that provides real-time, server-level data on the power and thermal conditions across a wide range of data center servers and other equipment. Provided as an SDK, Intel DCM middleware is integrated into Data Center Infrastructure Management (DCIM) consoles to increase data center power and thermal efficiency.
Since joining Intel in 2000, Klaus’ accomplishments have been recognized by multiple division awards. With a broad background in software solutions for the channel, client and SMB space, he has served as Director of Media Programs within Intel’s Digital Home Group, Entertainment Content Marketing Manager, Business Operations Manager, and Software Marketing Manager.
An accomplished speaker, Klaus has presented at such industry forums as Gartner Data Center Summit, AFCOM’s Data Center World, the Green IT Symposium, and the Green Gov conference. He has authored articles on data center power management in Data Center Post, IT Business Edge, Data Center Knowledge, Information Management and Data Centre Management. Klaus currently serves on the Board of Directors for the Green IT Council. Klaus earned his BS in Finance at Boston College and his MBA in Marketing at Boston University.
Here’s the full text of Jeff Klaus’ insights from our Executive Roundtable:
Data Center Frontier: This year we have seen strong demand for data center space in international markets. What are the biggest opportunities and challenges for data center companies in operating at global scale and working with multi-national clients?
Jeff Klaus: Globally we are still seeing increasing data center growth. As noted in a recent report, “the seven primary U.S. data center markets saw 171 megawatts (MW) of net absorption in H1 2019, nearly 57 percent of 2018’s full-year record. That absorption nearly eclipsed the 200 MW of capacity added in H1. Northern Virginia, the largest data center market in the world, accounted for 74 percent of net absorption in the primary markets.”
That interest has several drivers that I believe will carry forward regardless of potential geopolitical issues: corporate multi-cloud usage, 5G adoption, AI, and blockchain utilization (outside of cryptocurrency). The chief challenge will remain the competitive pressure to meet the megawatt absorption we continue to see.
I think providers will have to innovate in their software solution stacks and partnerships to maintain differentiation and capture more of that demand.
Data Center Frontier: There’s currently huge interest in interconnection and network services. What are the most significant trends in the network features customers are seeking, and how are providers delivering these services?
Jeff Klaus: Software-defined networking, specifically SD-WAN and vCPE, is really coming into its own now. For example, we’re seeing use cases that demonstrate how application-recognition software can apply policy rules to dynamically route traffic through preferred tunnels, either back to the corporate network or straight out to the internet, depending on identified variables (including SLA). That functionality really enables global expansion by using network capabilities effectively.
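To make that concrete, here is a minimal sketch of the kind of policy logic Klaus describes: a flow identified by application-recognition software is steered onto a preferred tunnel, with failover when the measured SLA is breached. All names, thresholds, and the two-path model are illustrative assumptions, not a real SD-WAN product's API.

```python
# Hypothetical SD-WAN-style policy routing sketch. The application label is
# assumed to come from application-recognition software; latency stands in
# for the SLA variables mentioned above. Illustrative only.

from dataclasses import dataclass

@dataclass
class Flow:
    app: str           # application identified by recognition software
    latency_ms: float  # measured latency on the flow's preferred path

# Policy rules: app -> (preferred tunnel, max acceptable latency in ms)
POLICIES = {
    "voip": ("mpls_tunnel", 50),         # latency-sensitive: back to corp
    "saas": ("internet_breakout", 200),  # trusted SaaS: straight to internet
}

def route(flow: Flow) -> str:
    """Pick the tunnel for a flow; fail over if the SLA is being violated."""
    path, sla_ms = POLICIES.get(flow.app, ("mpls_tunnel", 100))
    if flow.latency_ms > sla_ms:
        # Preferred path is out of SLA: switch to the alternate path.
        return "internet_breakout" if path == "mpls_tunnel" else "mpls_tunnel"
    return path

print(route(Flow("voip", 20)))   # within SLA: stays on the MPLS tunnel
print(route(Flow("saas", 300)))  # SLA breached: fails over to MPLS
```

A real controller would weigh more variables (loss, jitter, cost), but the shape of the decision — classify, consult policy, measure, reroute — is the same.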
Of course, 5G is the other side of that networking discussion, and it will continue to be a hot topic because it will enable so many device interactions and innovations in the future.
Data Center Frontier: The speed of data center deployment is accelerating, with innovation in the supply chain and how facilities are built and leased. What do you see as the most important issues to address to keep pace with the rapid growth of digital infrastructure?
Jeff Klaus: Software-defined capabilities become really attractive when you’re looking to efficiently utilize constrained resources. From within the server, there are also some great examples of how to optimize:
- Using newer server out-of-band (OOB) technology to drive better data center utilization of power and space from real-time telemetry.
- Developing optimized memory strategies (for DCPMM, Intel’s Optane DC Persistent Memory Module) in program design, which will benefit cloud and HPC performance.
- Implementing compute utilization strategies (such as Intel Speed Select Technology, SST) to right-size capacity dynamically.
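The first bullet above can be sketched in a few lines: given real-time, per-server power readings gathered out-of-band, flag the machines drawing near-idle power as candidates for consolidation or capping. The server names, wattage samples, and threshold are invented for illustration; this is not Intel DCM's actual interface.

```python
# Illustrative sketch (not a real telemetry API): use per-server power
# samples, as an OOB agent might collect them, to spot underutilized
# machines. Names and numbers are assumptions.

telemetry = {
    "srv-01": [310, 305, 320],  # busy
    "srv-02": [95, 100, 90],    # hovering near idle
    "srv-03": [280, 290, 300],
}

IDLE_THRESHOLD_W = 120  # assumed idle draw for this server model

def underutilized(samples, threshold=IDLE_THRESHOLD_W):
    """Flag a server whose average power draw stays at or below idle."""
    return sum(samples) / len(samples) <= threshold

# Candidates for consolidation or power capping.
candidates = [name for name, watts in telemetry.items() if underutilized(watts)]
print(candidates)  # only srv-02 averages below the 120 W threshold
```

In practice the same telemetry also feeds thermal maps and rack-level space planning, which is where the power-and-space utilization gains Klaus mentions come from.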
Data Center Frontier: What lies ahead for data center automation? What are the key opportunities in using data (and perhaps even AI) to make data centers more efficient and productive?
Jeff Klaus: That is a great question. AI is becoming a consultative source that helps operators zero in on issues, and I think that will continue to be its focus for the near future. The challenge is that the data ingestion and processing required to maintain that capability are substantial, and I think the approach may have to become more distributed to really maximize efficiency.
For example: utilizing OOB analysis and telemetry from individual servers and their subcomponents can help identify areas of concern or underuse without needing to centralize all the data into a data lake, analyze it, and only then build an action plan.
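A minimal sketch of that distributed idea, under assumed thresholds and sample values: each server screens its own telemetry locally and forwards only the outliers upstream, rather than shipping every sample to a central data lake.

```python
# Hedged sketch of server-local analysis: keep the raw stream on the box,
# surface only readings that deviate strongly from this server's own norm.
# The z-score cutoff and temperature values are illustrative assumptions.

import statistics

def local_alerts(samples, z_cutoff=2.0):
    """Return only readings far from this server's own mean (z-score test)."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # perfectly flat signal: nothing anomalous to report
    return [x for x in samples if abs(x - mean) / stdev > z_cutoff]

inlet_temps_c = [24, 25, 24, 25, 24, 38, 25, 24]  # one thermal spike
print(local_alerts(inlet_temps_c))  # only the 38 C spike leaves the server
```

The upstream AI then consumes a small stream of pre-screened anomalies instead of the full firehose, which is exactly the ingestion-cost relief Klaus is pointing at.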