Streamlining Data Center Tasks with Machine Learning

May 7, 2021
In this edition of Voices of the Industry, Service Express Chief Technology Officer Jake Blough shares how machine learning offers new opportunities for IT leaders to accomplish routine tasks in the data center.

Jake Blough, Chief Technology Officer, Service Express

Digital transformation in IT departments enables businesses to take advantage of artificial intelligence (AI) and machine learning (ML) to streamline tasks and improve operations in the data center. The key to understanding the distinction between AI and ML is to view automation as an umbrella, with artificial intelligence, machine learning and deep learning as subsets beneath it. AI is broadly defined as a technique that mimics human behavior. Machine learning uses data and algorithms to understand and improve from experience over time. Deep learning is a subset of machine learning in which software trains itself to recognize speech, images and more.

According to Gartner, 37% of CIOs and technology leaders have adopted an AI strategy in the past four years. Data Center Knowledge states that IT leaders can use machine learning in the data center for capacity planning, risk analysis, efficiency studies, customer retention, budget impact and more. All IT leaders should consider automating their business processes using AI and ML techniques to redirect resources to vital areas of the organization.

Before outlining your machine learning project, verify that ML will actually achieve the results you need to meet your goals. To do so, you must understand how you’re going to use the data once the task is complete. Below you will find several tips to stay organized as you begin the planning process.

Avoid Future Headaches by Investing in the Pre-Planning Process
One of the critical lessons to learn is that, without careful pre-planning, a program meant to save time can create the opposite effect. Before working on any large-scale project, begin by:

  1. Clearly defining the problem that needs to be solved and why
  2. Identifying which data is required and assessing how that data is organized
  3. Determining the stakeholders early on to eliminate confusion
  4. Estimating the potential cost and time savings of the project (see the sketch after this list)

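For step 4, even a back-of-the-envelope script can make the savings estimate concrete before any modeling begins. A minimal sketch follows; the 400 hours a month figure is borrowed from the OnDeck result described later in this article, while the cost figures are invented placeholders, not Service Express numbers.

```python
# Hypothetical back-of-the-envelope ROI estimate for an ML project.
# All cost figures are illustrative placeholders, not real Service Express data.

hours_saved_per_month = 400      # manual hours the automation would replace
hourly_cost = 45.0               # fully loaded cost per manual hour (USD)
build_cost = 120_000.0           # one-time cost to develop the model (USD)
monthly_run_cost = 2_000.0       # hosting, retraining and monitoring (USD)

monthly_savings = hours_saved_per_month * hourly_cost - monthly_run_cost
payback_months = build_cost / monthly_savings

print(f"Net monthly savings: ${monthly_savings:,.0f}")
print(f"Payback period: {payback_months:.1f} months")
```

With these placeholder numbers, the project nets $16,000 a month and pays for itself in 7.5 months; if the payback period stretches past the useful life of the process, the project may not be worth automating.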
Once you’ve designed the framework for your project and earned buy-in from leadership, it’s time to determine if it will be completed in-house or outsourced. Time, patience, communication and organization will be vital to ensure a successful AI and machine learning process.

Five Ground Rules for a Successful AI & Machine Learning Process

#1: Your existing business process must be sound. Automating a flawed process only leads to making errors faster.

#2: Organize and define your data. AI and ML programs learn from historical data and perform actions based on the information you use. A lack of standardization will cause your model to skew, compounding inaccurate results over time.
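One way to catch standardization problems before they skew a model is a quick validation pass over the training data. The sketch below assumes a pandas DataFrame of service records with hypothetical column names; it is an illustration of the idea, not Service Express’s actual pipeline.

```python
import pandas as pd

# Minimal pre-training standardization check.
# Column names are hypothetical examples, not a real Service Express schema.
REQUIRED_COLUMNS = {"serial_number", "part_sku", "failure_date", "site_id"}

def validate_training_data(df: pd.DataFrame) -> list[str]:
    """Return a list of standardization problems found in the data."""
    problems = []
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        problems.append(f"missing columns: {sorted(missing)}")
    if "part_sku" in df.columns:
        # Inconsistent casing in categorical keys skews grouped statistics.
        skus = df["part_sku"].astype(str)
        nonstandard = int((skus != skus.str.upper()).sum())
        if nonstandard:
            problems.append(f"{nonstandard} part_sku values are not upper-case")
    # Duplicate rows let the same event influence the model twice.
    dupes = int(df.duplicated().sum())
    if dupes:
        problems.append(f"{dupes} duplicate rows")
    return problems
```

Running a check like this on every refresh, rather than once at project kickoff, keeps new data from quietly drifting away from the standard the model was trained on.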

#3: Rethink your end goal before automating. A common mistake when implementing an AI or ML process is that the end goal is not clearly defined, making it difficult to measure the initiative’s success.

#4: Determine whether your systems are well integrated and where the results will live once the program completes its process.

#5: Beware of snake oil cures. AI and ML are hot topics, and hundreds of companies offer “AI” solutions that claim to work with nonstandardized data. When building a new AI or ML model, the program uses data to solve a problem; without the proper framework, the system will generate inaccurate results and provide no real value. Always question outside vendors to confirm they are providing true machine learning solutions.

The following is an example of a large-scale ML project we successfully planned, created and executed at Service Express.

The Right Parts at the Right Time

As a third-party data center maintenance provider, we must have the correct parts accessible when equipment fails. The reality is that most equipment will eventually experience issues, which is why we created OnDeck Predictive Sparing. To support our customers’ data centers, we operate a complex international supply chain housing thousands of SKUs worldwide. Pinpointing the exact whereabouts of each spare data center part is a difficult task, making it a strong candidate for machine learning. In 2016, we developed the OnDeck algorithm using 15 years of historical service ticket and customer equipment profile data.

The OnDeck algorithm analyzes usage trends to predict a potential hardware failure before it occurs. Once identified, OnDeck selects the parts needed to support equipment before a system goes down. The correct parts are then shipped to the local Service Express office or stocking location – increasing the efficiency and speed of the local engineer by having the necessary parts on hand. Every 30 minutes, the database is updated, and ML analyzes the current data for new parts needs.
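The shape of such a loop can be sketched in a few lines. The snippet below is a simplified illustration, assuming a pre-trained scikit-learn classifier and hypothetical feature, function and threshold names; Service Express has not published OnDeck’s actual implementation.

```python
import time
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Simplified sketch of a periodic failure-prediction scoring loop.
# The model, feature names and load/ship helpers are hypothetical;
# this is not OnDeck's published implementation.

FEATURES = ["device_age_months", "ticket_count_90d", "error_rate", "utilization"]

def score_fleet(model: RandomForestClassifier, fleet: pd.DataFrame,
                threshold: float = 0.8) -> pd.DataFrame:
    """Flag devices whose predicted failure probability exceeds the threshold."""
    probs = model.predict_proba(fleet[FEATURES])[:, 1]
    at_risk = fleet.assign(failure_prob=probs)
    return at_risk[at_risk["failure_prob"] >= threshold]

def run_every_30_minutes(model, load_current_fleet, stage_parts_for):
    """Re-score the fleet on each database refresh and stage parts for at-risk gear."""
    while True:
        fleet = load_current_fleet()          # pull the freshly updated data
        for _, device in score_fleet(model, fleet).iterrows():
            stage_parts_for(device)           # ship spares toward the local office
        time.sleep(30 * 60)                   # wait for the next refresh
```

The design choice worth noting is that the model only ranks risk; the decision of which parts to stage and where to ship them is a separate step, so the supply chain logic can evolve without retraining the model.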

Transferring this process to ML saved us over 400 hours a month and increased OnDeck’s accuracy by 10%. By sharing insights from this same data, we enable customers to assess the reliability of their infrastructure by seeing which equipment is most and least likely to fail. OnDeck helps us ensure timely and accurate support for customers by automating the back-end parts sparing process.

With the success of OnDeck Predictive Sparing, we continue to look for more opportunities to build AI and ML solutions. The benefits of automated spare parts management in the data center are just the beginning. Over the next ten years, AI will continue making its way into various industries, with the AI market projected to reach $500 billion by 2024. In the next several years, IT departments will begin to see new technologies in the data center, including augmented reality (AR). With the help of AR, IT leaders will have access to a data center “dashboard” or glasses through which they can view real-time information about their data center. In addition to AR, we will see more automated disease identification, predictive supply chains, cashless stores and much more. The use of AI and machine learning extends well beyond the data center and will continue to extend into our everyday lives.

Jake Blough is the Chief Technology Officer of Service Express. Learn more about Service Express’ OnDeck Predictive Sparing, which uses machine learning to ensure that the correct parts are accessible when their data center customers’ equipment fails.

About the Author

Voices of the Industry

Our Voices of the Industry feature showcases guest articles on thought leadership from sponsors of Data Center Frontier. For more information, see our Voices of the Industry description and guidelines.
