Today we continue our Data Center Executive Roundtable, a quarterly feature showcasing the insights of thought leaders on the state of the data center industry, and where it is headed. In today’s discussion, our panel of experienced data center executives – Jeff Klaus of Intel DCM, David Knapp of Chatsworth Products, Sean Icara from Digital Realty, and Vertiv’s Gary Niederpruem – will discuss significant trends in tools for monitoring and managing data centers.
The conversation is moderated by Rich Miller, the founder and editor of Data Center Frontier.
Data Center Frontier: The market for tools to monitor, manage and automate data centers continues to evolve. What are the significant trends in this ecosystem and how effectively are customers using these tools?
Gary Niederpruem: There is a lot of necessary activity in this space. Traditional management tools at times have been difficult and time-consuming to install and have done a better job of collecting data than they have of analyzing it and making it actionable. To be fair, the early generations of data center management tools were designed with traditional enterprise data centers in mind, and they have been adjusting to a new world order ever since.
Today’s management tools are much more sophisticated, yet they are striving to be less complex and easier to install. Most importantly, they should not only collect data but also analyze and react to it in real time. That’s critical for today’s distributed and software-defined networks, and for enabling and leveraging the Internet of Things.
There has been a fundamental and ongoing shift in the approach to network management. In the past, management tools have been add-ons – hardware and software systems installed across the data center to monitor and manage existing equipment. Today, these capabilities are built into much of the critical infrastructure, and management tools need to continually evolve so that connecting with those systems is easy and one can take advantage of improved machine-to-machine intelligence and communication. The software that unlocks these capabilities needs to be increasingly modular and easy to deploy, and the result is a much more streamlined, efficient – and faster – network management process.
Jeff Klaus: Tools are interesting because they are being commoditized on the low end with SaaS solutions that can blend with existing enterprise infrastructure. On the higher end, especially in the DevOps area, there are still opportunities to differentiate.
Companies diverge from one another in how they package solutions for their customers – many need customized combinations to help keep select business processes running in this scale-out, scale-in world.
David Knapp: Site monitoring has three basic elements: the sensor, network and software. Deploying more sensors closer to equipment is the first trend. This gives data center managers a more granular view of the site. For example, power monitoring at each device outlet and environmental monitoring at every rack. When the software models this additional data, it is possible to see specific rack- and device-level detail on power consumption and environmental conditions. This helps data center managers make better decisions about how to use site capacity.
For the network, there are three trends:
- Consolidation of power, environmental and security monitoring into a single hardware solution;
- The use of IP consolidation to reduce the number of network connections required to support monitoring;
- Improved and less expensive wired and wireless sensors.
Traditionally, power, environmental and security were three separate sub-systems. Addressing consolidation at the hardware level reduces the amount of network infrastructure dedicated to monitoring, and may simplify software setup. The idea of a smart cabinet with a single network connection for all monitoring is now a reality.
For software, there are many trends including integration of access control with power and environmental monitoring; more detailed asset management tools including databases of equipment specifications that speed software setup; better capacity monitoring and predictive modeling to explore the impact of equipment changes; a modular approach to software functionality; and a focus on making systems easier to integrate.
In the future, software-based monitoring directly from the devices, like the solution proposed by Distributed Management Task Force’s Redfish API, may replace external sensors as long as device manufacturers develop a uniform method of metering. Also, software integration may provide a system that overlays facility data, such as that from a DCIM, with compute and network utilization data to help identify the most efficient combinations of software and compute.
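To make the idea of device-level, software-based monitoring concrete, here is a minimal sketch of polling power telemetry over a Redfish-style REST interface. The host address, credentials, and chassis ID are hypothetical placeholders, and actual resource paths and schema support vary by BMC implementation; this is an illustration of the approach, not a reference implementation.

```python
# Sketch: reading chassis power draw from a Redfish service.
# BASE, AUTH, and the chassis ID "1" are assumptions for illustration only;
# real deployments differ in endpoints, auth, and certificate handling.
import requests

BASE = "https://bmc.example.com"    # hypothetical BMC address
AUTH = ("monitor", "password")      # hypothetical read-only account

def read_chassis_power(chassis_id: str):
    """Return the latest PowerConsumedWatts reading for a chassis, if exposed."""
    url = f"{BASE}/redfish/v1/Chassis/{chassis_id}/Power"
    resp = requests.get(url, auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    controls = resp.json().get("PowerControl", [])
    return controls[0].get("PowerConsumedWatts") if controls else None

if __name__ == "__main__":
    watts = read_chassis_power("1")
    print(f"Chassis power draw: {watts} W")
```

A monitoring platform would poll readings like this across many chassis and feed them into the same capacity and environmental models that external sensors populate today.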
NEXT: How the focus on speed to market is affecting the data center supply chain.
Keep pace with the fast-moving world of data centers and cloud computing by following us on Twitter and Facebook, connecting with me on LinkedIn, and signing up for our weekly newsletter.