How DCIM is Different Today

June 21, 2022
In the early days of DCIM software tools, data centers tended to be centralized for a given organization and few in number. Learn how you can apply new DCIM solutions to improve operations and your bottom line in this article featuring Schneider Electric. 

Last week we launched an article series on the evolution of software to measure and manage the data center. This week we’ll take a deeper look at the solutions and systems that manage some of the most critical infrastructure in the world. With DCIM in particular, it’s key to understand what has changed and how you can apply new management solutions to improve operations and your bottom line.


In the early days of DCIM software tools, data centers tended to be centralized for a given organization and few in number. This began to change when cloud computing and data center colocation providers emerged. Enterprises increasingly own and lease physical and virtual IT assets spread across more locations, an arrangement referred to as “hybrid IT” or a “hybrid computing environment.” This growing complexity makes it harder to maintain visibility and control over all assets, which in turn complicates operations and maintenance.

A significant change in controlling today’s most advanced cloud and data center environments revolves around visibility, monitoring, and management. New DCIM solutions, used by some of the most prominent vendors in the industry, are designed to add intelligence to the data center, using dynamic solutions to derive actionable information about equipment locations, connectivity points, power usage, and power capacity. Armed with this information, organizations can identify areas for ongoing optimization of data center operations.
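To make this concrete, here is a minimal Python sketch of the kind of derived insight described above: turning raw rack power readings into per-rack capacity headroom. The rack names, rated capacities, and readings are invented for illustration; a real DCIM suite exposes this through its own data model and APIs.

```python
# Hypothetical example: derive power-capacity headroom per rack from monitored
# readings. Rack names, rated capacities, and readings are invented.
from dataclasses import dataclass

@dataclass
class Rack:
    name: str
    rated_kw: float     # provisioned power capacity for the rack
    measured_kw: float  # latest reading from the rack PDU

def headroom_report(racks):
    """Return (rack, % of capacity used, spare kW), highest utilization first."""
    rows = []
    for r in racks:
        used_pct = 100.0 * r.measured_kw / r.rated_kw
        rows.append((r.name, round(used_pct, 1), round(r.rated_kw - r.measured_kw, 2)))
    return sorted(rows, key=lambda row: row[1], reverse=True)

racks = [
    Rack("ROW-A-R01", rated_kw=8.0, measured_kw=6.9),
    Rack("ROW-A-R02", rated_kw=8.0, measured_kw=3.2),
    Rack("ROW-B-R07", rated_kw=12.0, measured_kw=11.4),
]

for name, pct, spare in headroom_report(racks):
    print(f"{name}: {pct}% of rated power used, {spare} kW headroom")
```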

DCIM solutions serve as a “bookend,” working cohesively at both ends of the physical infrastructure. At one end, DCIM bridges the IT stack, delivering visibility into asset and connectivity management to help streamline capacity management and accelerate work order tasks. At the other end, DCIM bridges the facilities stack, monitoring power usage and cooling efficiency to help drive operational effectiveness and data center uptime.
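On the facilities side, one common efficiency figure such monitoring can feed is Power Usage Effectiveness (PUE), the ratio of total facility power to IT load. The snippet below is a minimal sketch with invented meter readings, not any vendor’s implementation.

```python
# Hypothetical example: compute Power Usage Effectiveness (PUE) from facility
# and IT power readings. Meter values are invented.
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT equipment power (1.0 is the ideal)."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

print(round(pue(total_facility_kw=1450.0, it_load_kw=1000.0), 2))  # prints 1.45
```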

Well-implemented, DCIM improves the availability and resiliency of physical infrastructure systems and the IT workloads they support.

However, much has changed since DCIM suites became commonly used toolsets. Data center owners and managers face new challenges and requirements such as accelerating small, unstaffed IT deployments at the edge, meeting corporate sustainability goals, and defending against cybersecurity threats.

At the same time, recent technology developments offer new possibilities. This article series describes the essential functions of a DCIM platform deployed for hybrid IT environments. Just below, we’ll explain five attributes that make a DCIM software suite more effective at providing those functions and at taking advantage of emerging technology trends. Modern DCIM systems simplify deployment and management regardless of the number of assets and sites, optimize operations and maintenance through analytics and new digital services, and provide the data and tools needed to integrate with existing management apps and services such as third-party monitoring and management services, building management systems (BMS), and electrical power monitoring systems (EPMS). The image below shows a high-level architecture for a modern, cloud-based DCIM system optimized for hybrid IT environments.
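To give a feel for one such integration point, here is a minimal sketch that forwards a DCIM alarm to an external monitoring service as a JSON webhook. The endpoint URL, payload fields, and alarm structure are all hypothetical; real DCIM, BMS, and EPMS products define their own APIs and event schemas.

```python
# Hypothetical example: forward a DCIM alarm to an external monitoring service
# as a JSON webhook. The URL and payload schema are invented.
import json
import urllib.request

def forward_alarm(alarm: dict, webhook_url: str) -> int:
    """POST the alarm as JSON and return the HTTP status code."""
    body = json.dumps(alarm).encode("utf-8")
    req = urllib.request.Request(
        webhook_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status

alarm = {
    "site": "edge-site-17",  # invented identifiers and values
    "device": "UPS-02",
    "severity": "critical",
    "message": "UPS on battery, 12 minutes of runtime remaining",
}
# forward_alarm(alarm, "https://example.com/hooks/dcim")  # supply a real endpoint to send
```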


With this in mind, let’s examine the human factor related to modern data center management.

Outages and human error: How DCIM impacts modern management issues

Even as leaders work to improve operations, advance sustainability, and reduce the strain on their staff, digital infrastructure faces a set of emerging challenges. These challenges include:

  • Distributed infrastructure. This is quickly becoming the new normal, and the way we connect to and leverage resources is far different than just a few years ago. Larger infrastructure portfolios are helping organizations stay ahead of a very diverse, digital market, but they also strain visibility: edge ecosystems and smaller locations may not have the observability you ultimately require to scale your business.
  • Unclear root causes. Taking too long to find the root cause of an issue wastes time and resources. You may be able to resolve a problem quickly, but what happens when you don’t know why it happened in the first place? Fixing a critical issue is essential; ensuring it doesn’t happen again is even more important.
  • Admin fatigue. Data center and IT administrators are already tasked with maintaining critical infrastructure. When too many dashboards, alerts, and sensors are going off, you create the “swivel chair” problem: administrators may miss key issues or simply not know which screen to look at. Without good observability, the human element quickly becomes stressed and fatigued.
  • Lack of standardization. Disparate systems, too many non-integrated components, and siloed operations lead to a lack of standardization, which also creates problems with compliance and regulation. It hurts observability across the entire portfolio as well. As our environments grow, they become more complex, and standardization and visibility tools make managing those ecosystems easier.
  • Challenges with scale. There’s no good way to scale a business effectively without good visibility, especially as our digital footprints become more distributed. Scaling at the pace of a digital market means having visibility into the entire ecosystem; it also means being able to scale predictively, based on specific business requirements.
  • Loss of productivity and efficiency. The human element can be either a savior or a detriment to the business. When people are the slowest part of the process, it is often because they don’t have the tools to succeed. Remember, your data center footprint is changing: whether you’re investing in edge, IoT, or new data center campuses, you need to support people, process, technology, and innovation. Without good tools in place, people quickly become lost and potentially disenfranchised, which leads to a loss of efficiency and productivity.
  • Sustainability issues. There is no way to effectively deliver on ESG promises and green initiatives without observability across the entire data center and infrastructure portfolio. Organizations are making new investments in greener, more sustainable solutions, but without good visibility they won’t know where to invest time and resources, and systems can underperform in ways that hurt sustainability. Leaders in the data center space know that observability in a digital infrastructure requires a new approach, one focused on the business, users, environment, and process.

As mentioned earlier, people are being tasked with doing more, watching more screens, and conducting more manual tasks. IDC estimates that human error costs organizations more than $62.4 million every year, and a significant share of those errors stems from tedious tasks and manual processes. Further, a recent Uptime Institute study points out that more than 70% of all data center outages are caused by human error rather than a fault in the infrastructure design. What does it cost when it all goes down? Quite a bit. Data center outages are expensive.


Disasters and outages occur with and (often) without warning, leaving severe business disruption in their wake. At the same time, increasing dependency on the data center means that outages and downtime grow costlier over time. According to a 2016 Ponemon study, the average cost of a data center outage steadily increased from $505,502 in 2010 to $740,357, which works out to roughly $9,000 per minute (see the back-of-the-envelope sketch after the list below). Across the 63 data center environments they examined, the researchers found that:

  • The cost of downtime has increased 38 percent since the first study in 2010.
  • Downtime costs for most data center-dependent businesses are rising faster than average.
  • Maximum downtime costs have increased 32 percent since 2013 and 81 percent since 2010.
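As a rough illustration of how that per-minute figure compounds, the sketch below multiplies an assumed cost per minute by a few outage durations. The ~$9,000-per-minute value comes from the Ponemon research cited above; the durations are arbitrary examples.

```python
# Back-of-the-envelope outage cost, using the ~$9,000/minute figure from the
# Ponemon research cited above. Outage durations are arbitrary examples.
COST_PER_MINUTE = 9_000  # USD

for minutes in (10, 30, 90):
    print(f"{minutes:>3}-minute outage: roughly ${minutes * COST_PER_MINUTE:,}")
```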

With all of this in mind, what does a modern DCIM architecture look like? What are the five key attributes to examine?

DCIM Attributes: Five considerations

A modern DCIM platform optimized for hybrid IT environments is defined by five key attributes. These attributes differentiate it from traditional, on-premises DCIM systems designed for a single large data center or a small number of them. Adopting a platform based on these attributes will put you on the path to benefiting from newer, evolving technologies such as machine learning and predictive analytics (illustrated in the sketch after the list below). Note that cloud computing technologies (attribute #1) enable the other attributes and are fundamentally what make these suites most effective at delivering the functions described above, thereby solving today’s hybrid IT management challenges.

  1. Uses cloud technologies for ease of implementation, scalability, analytics, and maintenance
  2. Connects to a data lake, enabling insight and event prediction with artificial intelligence (AI)
  3. Uses mobile and web technologies and integrates with 3rd party platforms
  4. Prioritizes simplicity and intuitive user experiences in its design
  5. Serves as a security compliance tool to identify and eliminate potential cybersecurity threats
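As a simplified illustration of attribute #2, the sketch below scans a made-up series of UPS battery-temperature readings, of the kind a data lake would hold, and projects when an alarm threshold will be crossed. A simple linear trend stands in for the machine-learning models a real DCIM analytics service would apply; it is not any vendor’s algorithm.

```python
# Hypothetical example: flag a rising trend in telemetry pulled from a data
# lake before it crosses an alarm threshold. A simple least-squares slope
# stands in for the ML models a real DCIM analytics service would use.
from statistics import mean

def minutes_to_threshold(samples, threshold, interval_min=5):
    """Fit a linear trend to evenly spaced samples; return the estimated minutes
    until the threshold is reached, or None if the trend is flat or falling."""
    xs = range(len(samples))
    x_bar, y_bar = mean(xs), mean(samples)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples)) / \
            sum((x - x_bar) ** 2 for x in xs)
    if slope <= 0:
        return None
    return max((threshold - samples[-1]) / slope, 0) * interval_min

# Invented UPS battery temperatures (degrees C), sampled every 5 minutes.
readings = [24.1, 24.4, 24.9, 25.6, 26.2, 27.0]
eta = minutes_to_threshold(readings, threshold=30.0)
if eta is not None:
    print(f"Projected to reach 30 C in about {eta:.0f} minutes")
```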

Download the entire paper, “The Software Defined Bottom Line,” courtesy of Schneider Electric, to learn more. In our next article, we’ll touch on a few of these considerations. Catch up on the previous article here.

About the Author

Bill Kleyman

Bill Kleyman is a veteran, enthusiastic technologist with experience in data center design, management and deployment. Bill is currently a freelance analyst, speaker, and author for some of our industry's leading publications.
