How to Simplify Distributed IT Management? Focus on the Tools to Improve Flexibility and Efficiency

Jan. 6, 2021
Russell Senesac, Director of Data Center Strategy at Schneider Electric, explores how to address distributed IT management and the tools available today.


Today’s businesses depend on data to inform strategies, improve processes and better interact with their customers. But as businesses rely on an increasing amount of data to drive digital initiatives forward, they are also tasked with processing, analyzing and storing that data to meet both operational and customer needs. According to IHS Markit, the focus on digitization and IoT will lead to an estimated 125 billion connected devices by 2030. Over the same period, annual growth in global data transmissions is expected to accelerate from 20-25% today to an average of 50% per year.

Organizations’ ability to generate, analyze and successfully use this data will dictate their competitive advantage, and their ability to innovate, in a connected world. Given our dramatic shift to digital business, data centers have emerged as the backbone of our digital lives, tasked with supporting the processing, transport, storage and security needs of an ever-growing volume of data. Different applications have different requirements for storage, performance and latency. For this reason, IT teams no longer rely on traditional IT environments alone, but on a diverse hybrid IT environment inclusive of cloud and large on-premises facilities as well as distributed edge data centers. What does this shift mean for data center operations teams?

The Evolution of Hybrid IT Demands New Tools to Improve Flexibility, Scalability and Efficiency

Enterprises today are increasingly demanding always-on availability, predictive maintenance capabilities, data security and enhanced standardization from their IT infrastructure to improve flexibility, reduce complexity and enhance scalability. As a result, we’re starting to see a shift in where compute and storage lie relative to where data is actually generated and consumed. Approximately 10% of enterprise-generated data is created and processed outside a centralized data center or cloud today, but Gartner predicts this figure will reach 75% by 2025.

Traditional, centralized compute is shifting toward the local edge, which is revolutionizing how businesses manage their critical IT environments. The data center infrastructure management (DCIM) software we’ve relied on for on-premises IT operations is no longer suitable for today’s hybrid, edge and on-demand data centers. To truly drive business forward, organizations need to meet both the current and future demands of their complex and diverse data center environments.

To keep up with rising customer demands, enterprises need to manage their distributed IT infrastructure with greater flexibility and ease. As edge computing explodes and the number of assets and sites increases, businesses must take a hard look at how they’re monitoring and managing their infrastructure to ensure optimal performance, responsiveness and proactive maintenance.

Key Considerations for Ensuring Always-On Availability

While traditional DCIM tools have been relied upon for years, they can be too complex to maintain and aren’t intuitive enough to give operations teams the right level of insight. Instead, next-gen DCIM tools that offer edge and distributed IT monitoring and services can solve key pain points such as downtime prevention, remote monitoring and the management of distributed equipment and sites. With the right monitoring tools in place, enterprises will be better equipped to handle the needs of today’s complex edge and hybrid IT environments. But when investing in such capabilities, here are a few key considerations operations teams should keep in mind:

  • Predictive analytics must be paramount: Data center management has become simpler through the use of software, hardware and sensors that monitor critical systems from a central dashboard in real time. But operators can become overwhelmed by the sheer number of alerts being fed into their systems. Instead, operators should invest in tools that use analytics to troubleshoot critical alarms and even predict when issues might occur, so they can deploy better remediation tactics to prevent costly downtime (see the first sketch after this list).
  • Remote monitoring is essential for managing distributed sites: IT teams now need to manage critical infrastructure on-premises, in the cloud and at the edge. With data residing in multiple places, it’s imperative for teams to be able to monitor anytime, from anywhere. Tools that rely on cloud connectivity to enable remote monitoring help operations teams access critical data, no matter how remote the site may be (see the second sketch after this list).
  • Partner with a provider with the right expertise: It’s important to prioritize where your teams spend their time. To keep up with demand and manage costs effectively, consider having your operations teams focus their time and resources on monitoring critical environments while relying on partners to troubleshoot and remediate issues when they do occur. That way, operators can focus on growing their core business, not on maintaining or optimizing IT infrastructure.
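
To ground the predictive-analytics point above, here is a minimal sketch of trend-based alerting, assuming a hypothetical telemetry feed: instead of alarming only after a reading crosses a hard limit, it fits a linear trend to a recent window of readings and warns when the metric is projected to breach the limit within the next hour. The Reading type, the temperature metric and the thresholds are illustrative stand-ins, not any particular DCIM product’s API.

```python
# Minimal sketch: predict a threshold breach from a telemetry trend.
# All names and limits here are hypothetical, not a real DCIM API.
from dataclasses import dataclass
from statistics import linear_regression  # requires Python 3.10+

@dataclass
class Reading:
    minute: float   # minutes since the window started
    temp_c: float   # e.g., UPS battery temperature at an edge site

TEMP_LIMIT_C = 40.0   # hard alarm threshold (assumed for illustration)
HORIZON_MIN = 60.0    # how far ahead to project, in minutes

def projected_breach(window: list[Reading]) -> bool:
    """True if the fitted linear trend crosses the limit within the horizon."""
    xs = [r.minute for r in window]
    ys = [r.temp_c for r in window]
    slope, intercept = linear_regression(xs, ys)
    projected = intercept + slope * (xs[-1] + HORIZON_MIN)
    return projected >= TEMP_LIMIT_C

# Example: a temperature that is still below the limit but rising steadily.
window = [Reading(m, 30.0 + 0.12 * m) for m in range(0, 60, 5)]
if projected_breach(window):
    print("Predictive alert: temperature trending toward limit; schedule service")
```

A real deployment would lean on a vendor’s analytics engine rather than a hand-rolled regression, but the principle is the same: act on the trajectory of a metric, not just its latest value.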
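
Similarly, a small cloud-side loop makes the remote-monitoring point concrete: one scheduler polls health endpoints across many distributed sites in parallel and escalates the unreachable ones. The site names and the check_site() stub below are hypothetical; in practice the check would be an authenticated HTTPS call to each site’s edge gateway.

```python
# Minimal sketch: poll many distributed sites in parallel from the cloud.
# Site names and check_site() are hypothetical stand-ins.
from concurrent.futures import ThreadPoolExecutor
import random
import time

SITES = ["branch-nyc", "branch-chi", "branch-dal", "edge-closet-07"]

def check_site(site: str) -> tuple[str, bool]:
    """Stand-in for an HTTPS health check against a site's edge gateway."""
    time.sleep(0.1)                      # simulate the network round trip
    return site, random.random() > 0.2   # ~80% of checks succeed here

with ThreadPoolExecutor(max_workers=8) as pool:
    for site, healthy in pool.map(check_site, SITES):
        print(f"{site}: {'OK' if healthy else 'UNREACHABLE -> escalate'}")
```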

Actionable intelligence has become critical for helping operators monitor, manage and maintain evolving IT portfolios, especially as those portfolios become more distributed and complex. By simplifying the management of edge computing sites, businesses can improve how they manage their environments, avoid costly business interruptions and ensure the always-on availability that powers digital business.

Russell Senesac is the Director of Data Center Strategy at Schneider Electric.

About the Author

Voices of the Industry

Our Voice of the Industry feature showcases guest articles on thought leadership from sponsors of Data Center Frontier. For more information, see our Voices of the Industry description and guidelines.
