The Data Center of 2030 Will be Faster, Cooler and Automated

April 12, 2019
As we approach 2030, data centers will feature “lots more application-specific silicon being cooled by water,” according to Suvojit Ghosh, the plenary speaker at DCD Enterprise New York.

NEW YORK – What will data centers look like in 2030? That’s the question posed by the plenary session Wednesday at DCD Enterprise New York, a conference that brought together more than 1,500 attendees across two days at the Marriott Marquis.

The researcher gazing into the crystal ball was Suvojit Ghosh, Managing Director of the Computing Infrastructure Research Center (CIRC) at McMaster University in Hamilton, Ontario. The CIRC works with industry partners to develop advanced data center designs and management systems.

“As we move towards 2030, we will see lots more application-specific silicon being cooled by water,” said Ghosh. “As we approach 2030, we will need more data, and we will want it faster. We’ll have to make our data centers cheaper and more efficient and put these data centers closer to data.”

Part of the challenge in predicting future design trends is that there will be more diversity in the types of IT infrastructure facilities being deployed, including new types of deployments to support distributed edge computing workloads. Ghosh used the term “computing facility” rather than “data center” to help the audience grasp the breadth of sites and designs.

Shift to Specialized Chips

One of the biggest trends will be a shift towards specialized computing hardware, featuring custom chips like graphics processing units (GPUs), field programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs) in addition to the general-purpose x86 CPUs that currently dominate the IT landscape.

“What we will probably see is a move to silicon that is optimized for workloads,” said Ghosh, who cited GPUs as being particularly cost-efficient in managing dense loads.

The development of powerful new hardware optimized for specific workloads has been a major trend in the data center sector, boosted by demand for data-crunching for artificial intelligence and other types of high-performance computing (HPC), as well as more powerful and efficient gear for traditional workloads.

This has created opportunities for both startups and existing players to chip away at Intel’s dominance in the data center. Beneficiaries include NVIDIA and AMD, as well as ARM specialist Ampere and a host of startups developing low-power chips to allow smartphones to run AI workloads on the device.

All these efforts are targeting the market position of Intel, whose CPUs power between 90 and 95 percent of corporate data centers, by some estimates. Intel isn’t resting on its laurels, as it demonstrated earlier this month at its Data-Centric Innovation event, where it introduced its new family of Agilex FPGA chips. Intel has also previewed a new line of GPUs, and has been hiring talent away from both NVIDIA and AMD in recent weeks.

Will Liquid Cooling Finally Have Its Moment?

This new generation of advanced hardware will broaden the use of high-density computing, according to Ghosh, who predicted that this will lead to a larger shift to liquid cooling technologies. The vast majority of data centers currently use air cooling, but densities have been gradually rising as more companies adopt AI-related data crunching.

Over the past decade, there have been numerous predictions of the imminent arrival of higher rack power densities, but extreme densities remain limited. Recent surveys suggest that average rack density has risen slightly in recent years, from a typical 3 kW to 6 kW per rack to the current 7.3 kW per rack, according to an AFCOM survey. Ghosh believes the shift to specialty silicon will require new thinking on data center cooling.

“We’re definitely going to be using some form of liquid cooling,” said Ghosh. “There’s a common misperception that liquid cooling is expensive to install. When the density is right, liquid cooling is cheaper, and not by a little.”

Ghosh said the CIRC did an analysis of cooling costs and found that the lowest-cost approach was immersion cooling (their test used Submer technology), followed by liquid-to-the-chip and water-chilled rear-door cooling units, all of which displayed better economics than air cooling for dense workloads.

Software Still Eating the World

But the biggest and most necessary advances will come in software to automate data center management, a particular focus of the CIRC and its industry research, which includes projects on smart monitoring, adaptive thermal management, and autonomous fault prediction and predictive maintenance.

This is particularly important with the rise of edge computing, which extends data processing and storage closer to the growing universe of devices and sensors at the edge of the network. The creation of thousands of distributed compute and storage nodes will also require new approaches, said Ghosh, both in the cost of the facilities and strategies to support them.

“If we really have mass-produced edge computing everywhere, there’s not enough facilities people to maintain them,” said Ghosh, who sees a strong need for software that can make control decisions in real time if edge computing is to work.

That includes the use of sensors and artificial intelligence to detect anomalies, using sound and vibration profiles to identify when equipment is experiencing problems and requires attention. The McMaster lab is developing these technologies, as is LitBit, which recently deployed AI agents to monitor generators at ROOT Data Centers in Montreal.
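Ghosh did not describe the algorithms involved, but as a rough illustration only, an unsupervised approach to flagging unusual vibration readings might look like the Python sketch below. The sensor features, thresholds and model choice are hypothetical and are not the CIRC's or LitBit's actual system.

# Illustrative sketch of vibration-based anomaly detection.
# NOT the CIRC's or LitBit's implementation; the sensor data,
# feature choices, and contamination rate are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "healthy" vibration features from rotating equipment:
# RMS amplitude (mm/s) and dominant frequency (Hz).
healthy = np.column_stack([
    rng.normal(2.0, 0.3, 500),    # RMS vibration amplitude
    rng.normal(120.0, 5.0, 500),  # dominant frequency
])

# Fit an unsupervised anomaly detector on the healthy baseline.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(healthy)

# New readings: one normal sample and one with elevated amplitude
# and a shifted frequency, as a failing bearing might produce.
new_readings = np.array([
    [2.1, 118.0],   # looks normal
    [5.8, 230.0],   # anomalous: high amplitude, frequency shift
])

for reading, label in zip(new_readings, model.predict(new_readings)):
    status = "OK" if label == 1 else "ANOMALY - flag for maintenance"
    print(f"RMS={reading[0]:.1f} mm/s, freq={reading[1]:.0f} Hz -> {status}")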

“In the next 10 years, we will see a situation where orchestration allows data centers to be operated without a maintenance person,” Ghosh predicted. “From a single pane of glass, you should be able to maintain 1,200 to 2,000 data centers.”

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
