How Powerful New AI Hardware Will Impact Data Center Design

June 20, 2019
The rise of AI has brought new hardware to accelerate specific tasks. Our DCF Executive Roundtable discusses whether powerful new AI hardware will bring changes in data center design and operations.

Today we conclude our Data Center Executive Roundtable, a quarterly feature showcasing the insights of thought leaders on the state of the data center industry, and where it is headed. In today’s discussion, our panel of experienced data center executives – Rick Crutchley from Iron Mountain, Intel Software’s Jeff Klaus, David Richards from Chatsworth Products, Randy Rowland of Cyxtera Technologies, EdgeConneX CMO Phillip Marangella, and James Leach of RagingWire Data Centers – discuss whether powerful new AI hardware will bring changes in data center design and operations.

The conversation is moderated by Rich Miller, the founder and editor of Data Center Frontier.

Data Center Frontier: The rise of AI has boosted the use of new compute hardware to accelerate specific tasks, including GPUs, FPGAs, ASICs, TPUs and neural processing hardware as well as more powerful CPUs. How might the emergence of application-specific hardware impact data center design? What are the opportunities and challenges?

Randy Rowland, President of Data Center Services at Cyxtera

Randy Rowland: AI applications are skyrocketing, from navigation and autonomous cars to personalized digital assistants and risk analytics. One thing all AI applications have in common is their demand for exponentially more computational power and data storage. These high-performance compute resources require specialized power and cooling, making it impractical and costly for enterprises to build and maintain them in their own on-premises data centers. As a result, AI is rapidly becoming one of the biggest drivers for colocation deployments.

Enterprises turning to colocation should look for providers that have been certified as AI-ready. For example, Cyxtera has partnered with NVIDIA to assure customers that our data centers meet their unique power, cooling, raised floor and implementation requirements. But physical environment certification alone isn’t enough. In addition to giving enterprises access to the data center resources they need to quickly take advantage of AI, we’re also working with service provider partners who can help our customers plan, configure, deploy and operate these specialized environments and avoid many of the challenges that can delay implementation.

David Richards, Chatsworth Products Inc. (CPI)

David Richards: All of these hardware architectures, as well as additional ones, are being spearheaded by the usual suspects (Microsoft, Google, NVIDIA, etc.). The race is on to be a dominant force in the market, and the rewards are going to be tremendous. There will be winners and losers. One example of application-specific hardware is the equipment used to mine cryptocurrency. These sites are radically different from traditional data centers, occupying two extremes. At one end is any storage shed anywhere, with a large axial fan to ventilate the space and bread racks to hold the equipment. At the other extreme, there’s immersion cooling with 3M Novec in tubs that look nothing like current computer rooms.

But in the mainstream, it may be more practical to continue the practice of adding accelerators to existing compute infrastructure for the benefit of dramatically faster compute cycles. The actual workloads are still basically spread across available resources, but the time to process is significantly reduced. That approach can be racked using current data center designs, but it may push rack power density up, requiring better airflow management, then auxiliary indirect liquid cooling, or even direct liquid cooling in extreme applications.

Rick Crutchley of Iron Mountain

Rick Crutchley: The more powerful the hardware and the more components per chassis, the higher the power density. This affects data center design from a power distribution and cooling standpoint, as traditional solutions are not sufficient when it comes to supporting true high-performance computing. These customers also tend to use every drop of power allocated to their environments.

To address these new requirements, colo providers must develop an agile design methodology that allows for efficient and cost-effective changes to the base design, as well as the implementation of custom cooling solutions for a single customer.

James Leach, RagingWire Data Centers

James Leach: A common thread that ties together these new specialized computing chips is power and cooling. These chips will expand the use of high-density computing, which in turn requires high-density power and cooling. Our latest data center designs support 36 megawatts spread across four or six vaults in a facility, with excellent economics, efficiency, and availability. With these new chips, we may need a 36-megawatt vault with liquid-cooled cabinets or immersion cooling.

The best data center designs will allow for a diverse deployment of specialized computing hardware for new high-performance computing applications, combined with “rack and stack” servers to support established applications.
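To put the figures above in perspective, here is a minimal back-of-the-envelope sketch in Python. The 36 megawatts and the four-or-six vault split come from Leach’s comments; the per-rack density tiers (8, 15 and 30 kW) are illustrative assumptions, not RagingWire design parameters.

```python
# Back-of-the-envelope math for a fixed-capacity facility.
# Density tiers are illustrative assumptions, not design parameters.

FACILITY_MW = 36        # facility capacity cited above
VAULT_COUNTS = (4, 6)   # "four or six vaults"

for vaults in VAULT_COUNTS:
    vault_kw = FACILITY_MW * 1000 / vaults
    print(f"{vaults} vaults -> {vault_kw:,.0f} kW per vault")
    # Assumed densities: traditional air, high-density air, liquid-cooled
    for rack_kw in (8, 15, 30):
        racks = vault_kw // rack_kw
        print(f"  at {rack_kw} kW/rack: ~{int(racks)} racks")
```

The arithmetic illustrates the design pressure: roughly tripling per-rack density cuts the number of racks a vault can feed to about a third, which is why the lever shifts toward liquid cooling rather than more floor space.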

Jeff Klaus, General Manager of Intel Data Center Management Solutions

Jeff Klaus: I would put those more in the category of flexible hardware that can be purposed for a specific application. It’s fantastic stuff that will always be sought for a strategic edge, but it’s going to be continually chased by other technology and software designs that intend to change the use case. For example, edge use cases and 5G could change the way we look at collecting and processing data, which would then require more specialized hardware outside the data center rather than inside it.

So really, the safest thing to say is that regardless, we’re going to have a lot more data in the future, and we’ll need to count on Moore’s Law to give us the tools to address it at a reasonable price point.

Phillip Marangella, EdgeConneX

Phillip Marangella: We are already seeing the results of these trends, as customers are asking not only for more proximate deployments at the Edge, but also for denser deployments with greater power per rack, which can easily exceed 15 kW to 20 kW, and even 30 kW per rack in some instances.

The ability to give customers the assurance and peace of mind that their deployment can scale in power, both per rack and for future capacity growth, will be a compelling differentiator and an opportunity for certain providers.

RECAP: Check out the full series of Data Center Frontier’s Executive Roundtable. 

Keep pace with the fast-moving world of data centers and cloud computing by following us on Twitter and Facebook, connecting with me on LinkedIn, and signing up for our weekly newsletter.

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
