Data Center Frontier

How DCIM is Different Today

By Bill Kleyman - June 21, 2022

It’s key to understand how DCIM has changed and how you can apply new management solutions to improve operations and your bottom line. (Source: Schneider Electric)

Last week we launched an article series on the evolution of software to measure and manage the data center. This week we’ll take a deeper look at the solutions and systems managing some of the most critical infrastructure in the world. For DCIM specifically, it’s key to understand what has changed and how you can apply new management solutions to improve operations and your bottom line.

In the early days of DCIM software tools, data centers tended to be centralized for a given organization and few in number. This began to change when cloud computing and data center colocation providers emerged. Enterprises increasingly own and lease physical and virtual IT assets spread across more locations, an arrangement referred to as “hybrid IT” or “hybrid computing environments.” This growing sophistication complicates operations and maintenance by making it more challenging to maintain visibility and control over all assets.

A significant change in controlling today’s most advanced cloud and data center environments revolves around visibility, monitoring, and management. New DCIM solutions, used by some of the most prominent vendors in the industry, are designed to add intelligence to the data center, using dynamic solutions to derive actionable information about equipment locations, connectivity points, power usage, and power capacity. Armed with this information, organizations can identify areas for ongoing optimization of data center operations.

DCIM solutions serve as a “bookend,” working cohesively at each end of the physical infrastructure. At one end, DCIM bridges the IT stack by delivering visibility into asset and connectivity management to help streamline capacity management efforts and accelerate work order management tasks. At the other end, DCIM bridges the facilities stack by monitoring power usage and cooling efficiencies to help drive operational effectiveness and data center uptime.
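
As a concrete, deliberately simplified sketch of this “bookend” idea, the snippet below joins IT-stack asset records with facilities-stack power telemetry into a single per-rack view. All class, field, and device names here are hypothetical illustrations, not any vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    # IT-stack side: what is installed where, and how it connects
    name: str
    rack: str
    rack_units: int
    uplink: str          # connectivity point, e.g. a switch port

@dataclass
class RackPower:
    # Facilities-stack side: measured draw vs. provisioned capacity
    rack: str
    draw_kw: float
    capacity_kw: float

def capacity_headroom(assets, power):
    """Join both 'ends': per-rack power utilization plus the assets exposed to it."""
    by_rack = {p.rack: p for p in power}
    report = {}
    for a in assets:
        p = by_rack[a.rack]
        report.setdefault(a.rack, {
            "utilization": p.draw_kw / p.capacity_kw,
            "assets": [],
        })["assets"].append(a.name)
    return report

racks = [RackPower("A1", draw_kw=7.2, capacity_kw=10.0)]
servers = [Asset("web-01", "A1", 2, "sw1:eth12"),
           Asset("db-01", "A1", 4, "sw1:eth13")]
print(capacity_headroom(servers, racks))
```

The point of the join is that neither side alone answers the operational question: facilities data tells you rack A1 is at 72% of capacity, but only the combined view tells you which workloads are at risk if it trips.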

Well-implemented, DCIM improves the availability and resiliency of physical infrastructure systems and the IT workloads they support.

However, much has changed since DCIM suites became commonly used toolsets. Data center owners and managers face new challenges and requirements such as accelerating small, unstaffed IT deployments at the edge, meeting corporate sustainability goals, and defending against cybersecurity threats.

At the same time, recent technology developments offer new possibilities. This article series describes the essential functions of a DCIM platform deployed for hybrid IT environments. Below, we explain five attributes that make a DCIM software suite more effective at providing those functions and at taking advantage of emerging technology trends. Modern DCIM systems simplify deployment and management regardless of the number of assets and sites, optimize operations and maintenance through analytics and new digital services, and provide the data and tools necessary to integrate with existing management apps and services such as third-party monitoring and management services, building management systems (BMS), and electrical power monitoring systems (EPMS). The full report illustrates a high-level, modern, cloud-based DCIM system architecture optimized for hybrid IT environments.

With this in mind, let’s examine the human factor related to modern data center management.

Outages and human error: How DCIM impacts modern management issues

Even as leaders push to improve operations, advance sustainability, and reduce staff fatigue, digital infrastructure faces a set of emerging challenges. These include:

  • Distributed infrastructure. This is quickly becoming the new normal, and the way we connect and leverage resources is far different than just a few years ago. Today, more extensive infrastructure portfolios help organizations stay ahead of a very diverse, digital market. They also strain visibility: edge ecosystems and smaller locations may not have the observability you ultimately require to scale your business.
  • Unclear root causes. Taking too long to find out the root cause of an issue can waste time and resources. Yes, you’re capable of resolving a problem quickly. But what happens when you don’t know what happened in the first place? Fixing a critical issue is essential, and it is even more vital to ensure that it doesn’t happen again.
  • Admin fatigue. Data center and IT administrators are already tasked with maintaining critical infrastructure. When too many dashboards, alerts, and sensors are going off at once, you create the ‘swivel chair’ problem: administrators might miss key issues or simply not know which screen to look at. Without good observability, the human element quickly becomes stressed and fatigued.
  • Lack of standardization. Disparate systems, too many non-integrated components, and siloed operations lead to a lack of standardization, which in turn creates problems with compliance and regulation. It also undermines observability across the entire portfolio: as our environments grow, they become more complex, and working with standardization and visibility tools makes managing those ecosystems easier.
  • Challenges with scale. There’s just no good way to scale a business effectively without good visibility, especially as our digital footprints become more distributed. Scaling at the pace of a digital market means having visibility into the entire ecosystem, and being able to scale predictively, based on specific business requirements.
  • Loss of productivity and efficiency. The human element can either be a savior or a detriment to the business. When people are the slower part of the process, they often don’t have the tools to succeed. Remember, your data center footprint is changing. Whether you’re investing in edge, IoT, or new data center campuses, you need to support people, process, technology, and innovation. Without good tools in place, people quickly become lost and potentially disenfranchised. All of this leads to a lack of efficiency and a loss of productivity.
  • Sustainability issues. There is no way to effectively deliver on ESG promises and green initiatives without observability across the entire data center and infrastructure portfolio. Today, there are new investments in greener and more sustainable solutions. However, organizations without good visibility simply won’t know where to invest time and resources, which can leave some systems performing poorly and sustainability suffering. Leaders in the data center space know that observability in a digital infrastructure requires a new approach focused on the business, user, environment, and process.
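
The visibility and swivel-chair problems above can be made concrete with a toy example: consolidating readings from many distributed sites into a single ranked alert list, so an operator watches one prioritized screen instead of one dashboard per site. This is a hypothetical sketch; the site names, metrics, and thresholds are invented for illustration.

```python
# Consolidate per-site sensor readings into one prioritized alert list,
# instead of one dashboard per site ("swivel chair" monitoring).
SITES = {
    "edge-chicago": {"temp_c": 31.0, "ups_load_pct": 62.0},
    "edge-dallas":  {"temp_c": 24.5, "ups_load_pct": 91.0},
    "core-ashburn": {"temp_c": 22.0, "ups_load_pct": 55.0},
}
THRESHOLDS = {"temp_c": 27.0, "ups_load_pct": 80.0}

def consolidated_alerts(sites, thresholds):
    alerts = []
    for site, metrics in sites.items():
        for metric, value in metrics.items():
            limit = thresholds[metric]
            if value > limit:
                # severity = how far past the threshold, used for ranking
                alerts.append((value / limit, site, metric))
    return sorted(alerts, reverse=True)

for severity, site, metric in consolidated_alerts(SITES, THRESHOLDS):
    print(f"{site}: {metric} at {severity:.0%} of threshold")
```

Real DCIM suites do this across thousands of sensors with deduplication and routing, but the design point is the same: one normalized, ranked view beats many disconnected screens.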

As mentioned earlier, people are being tasked with doing more, watching more screens, and conducting more manual tasks. IDC estimates that human error costs organizations more than $62.4 million every year, and a significant share of those errors stems from tedious tasks and manual processes. Further, a recent Uptime Institute study points out that more than 70% of all data center outages are caused by human error, not by a fault in the infrastructure design. What does it cost when it all goes down? Quite a bit. Data center outages are expensive!

Disasters and outages occur with and (often) without warning, leaving severe business disruption in their wake. At the same time, growing dependency on the data center means that outages and downtime become costlier over time. According to a 2016 Ponemon study, the average cost of a data center outage steadily increased from $505,502 in 2010 to $740,357, which works out to roughly $9,000 per minute. Across the 63 data center environments it examined, the study found that:

  • The cost of downtime has increased 38 percent since the first study in 2010.
  • Downtime costs for most data center-dependent businesses are rising faster than average.
  • Maximum downtime costs have increased 32 percent since 2013 and 81 percent since 2010.

With all of this in mind, what does a modern DCIM architecture look like? What are the five key attributes to examine?

DCIM Attributes: Five considerations

A modern DCIM platform optimized for hybrid IT environments is defined by five key attributes, which differentiate it from standard, on-premises DCIM systems designed for a single large data center or a small number of them. Adopting a platform based on these attributes will put you on the path to benefiting from newer, evolving technologies such as machine learning and predictive analytics. Note that cloud computing technologies (attribute #1) enable the other attributes and are fundamentally what makes these suites most effective at achieving the functions described above, thereby solving today’s hybrid IT management challenges.

  1. Uses cloud technologies for ease of implementation, scalability, analytics, and maintenance
  2. Connects to a data lake, enabling insight and event prediction with artificial intelligence (AI)
  3. Uses mobile and web technologies and integrates with 3rd party platforms
  4. Prioritizes simplicity and intuitive user experiences in its design
  5. Serves as a security compliance tool to identify and eliminate potential cybersecurity threats
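
To make attribute #2 less abstract: the core idea is to learn what “normal” looks like from historical telemetry in the data lake, then flag readings that drift from it before they become outages. The sketch below uses a simple statistical baseline as a deliberately minimal stand-in for the richer ML models a real platform would apply; the readings and thresholds are invented for illustration.

```python
import statistics

def drift_alerts(history, recent, z_limit=3.0):
    """Flag recent readings that deviate from the historical baseline.

    A simple z-score test standing in for the ML-driven event
    prediction a data-lake-backed DCIM platform would provide.
    """
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return [x for x in recent if abs(x - mean) > z_limit * stdev]

# Hypothetical UPS load readings (percent): a stable baseline, then a climb.
baseline = [54, 55, 53, 56, 54, 55, 54, 53, 55, 54]
incoming = [55, 56, 70, 54]

print(drift_alerts(baseline, incoming))   # only the 70% reading stands out
```

The value of the data-lake approach is exactly this: with enough clean history per sensor, even simple baselines surface anomalies early, and more sophisticated models can predict failures rather than merely report them.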

Download the entire paper, “The Software Defined Bottom Line,” courtesy of Schneider Electric, to learn more. In our next article, we’ll touch on a few of these considerations. Catch up on the previous article here. 

About Bill Kleyman

Bill Kleyman is a veteran, enthusiastic technologist with experience in data center design, management and deployment. Currently, Bill works as the Executive Vice President of Digital Solutions at Switch.
