NEW YORK – What will data centers look like in 2030? That’s the question posed by the plenary session Wednesday at DCD Enterprise New York, a conference that brought together more than 1,500 attendees across two days at the Marriott Marquis.
The researcher gazing into the crystal ball was Suvojit Ghosh, Managing Director of the Computing Infrastructure Research Center (CIRC) at McMaster University in Hamilton, Ontario. The CIRC works with industry partners to develop advanced data center designs and management systems.
“As we move towards 2030, we will see lots more application-specific silicon being cooled by water,” said Ghosh. “As we approach 2030, we will need more data, and we will want it faster. We’ll have to make our data centers cheaper and more efficient and put these data centers closer to data.”
Part of the challenge in predicting future design trends is that there will be more diversity in the types of IT infrastructure facilities being deployed, including new types of deployments to support distributed edge computing workloads. Ghosh used the term “computing facility” rather than “data center” to help the audience grasp the breadth of sites and designs.
Shift to Specialized Chips
One of the biggest trends will be a shift towards specialized computing hardware, featuring custom chips like graphics processing units (GPUs), field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs) in addition to the general-purpose x86 CPUs that currently dominate the IT landscape.
“What we will probably see is a move to silicon that is optimized for workloads,” said Ghosh, who cited GPUs as being particularly cost-efficient in managing dense loads.
The development of powerful new hardware optimized for specific workloads has been a major trend in the data center sector, boosted by demand for data-crunching for artificial intelligence and other types of high-performance computing (HPC), as well as more powerful and efficient gear for traditional workloads.
This has created opportunities for both startups and existing players to chip away at Intel’s dominance in the data center. Beneficiaries include NVIDIA and AMD, as well as ARM specialist Ampere and a host of startups developing low-power chips to allow smartphones to run AI workloads on the device.
All these efforts are targeting the market position of Intel, whose CPUs power between 90 and 95 percent of corporate data centers, by some estimates. Intel isn’t resting on its laurels, as it demonstrated earlier this month at its Data-Centric Innovation event, where it introduced its new family of Agilex FPGA chips. Intel has also previewed a new line of GPUs, and has been hiring talent away from both NVIDIA and AMD in recent weeks.
Will Liquid Cooling Finally Have Its Moment?
This new generation of advanced hardware will broaden the use of high-density computing, according to Ghosh, who predicted that this will lead to a larger shift to liquid cooling technologies. The vast majority of data centers currently use air cooling, but densities have been gradually rising as more companies adopt AI-related data crunching.
Over the past decade, there have been numerous predictions of the imminent arrival of higher rack power densities, but extreme densities remain limited. Recent surveys suggest that the average rack density has risen only slightly in recent years, from a range of 3 kW to 6 kW per rack to the current 7.3 kW per rack, according to an AFCOM survey. Ghosh believes the shift to specialty silicon will require new thinking on data center cooling.
“We’re definitely going to be using some form of liquid cooling,” said Ghosh. “There’s a common misperception that liquid cooling is expensive to install. When the density is right, liquid cooling is cheaper, and not by a little.”
Ghosh said the CIRC did an analysis of cooling costs and found that the lowest-cost approach was immersion cooling (their test used Submer technology), followed by liquid-to-the-chip and water-chilled rear-door cooling units, all of which displayed better economics than air cooling for dense workloads.
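To make the density argument concrete, here is a minimal break-even sketch of the kind of comparison Ghosh describes. The cost coefficients below are hypothetical placeholders, not CIRC figures; the point is only that air cooling's per-rack cost tends to climb faster with density than liquid cooling's, so a crossover density exists.

```python
# Hypothetical cooling cost model -- illustrative numbers only, not CIRC data.
# Air cooling's variable cost rises steeply with density (more CRAH capacity,
# containment, airflow); liquid cooling carries a higher fixed cost (plumbing,
# CDUs) but scales more gently per kW.

def air_cooling_cost(kw_per_rack: float) -> float:
    """Annualized cooling cost per rack for air cooling (hypothetical $)."""
    return 500 + 220 * kw_per_rack          # low fixed, steep variable cost

def liquid_cooling_cost(kw_per_rack: float) -> float:
    """Annualized cooling cost per rack for liquid cooling (hypothetical $)."""
    return 2000 + 60 * kw_per_rack          # high fixed, gentle variable cost

if __name__ == "__main__":
    for density in (5, 10, 15, 20, 30):     # kW per rack
        air = air_cooling_cost(density)
        liquid = liquid_cooling_cost(density)
        cheaper = "liquid" if liquid < air else "air"
        print(f"{density:>2} kW/rack: air ${air:,.0f}  liquid ${liquid:,.0f}  -> {cheaper}")
```

With these placeholder coefficients the crossover lands just above 9 kW per rack; the real figure depends on the technology chosen (immersion, direct-to-chip, rear-door) and on site specifics.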
Software Still Eating the World
But the biggest and most necessary advances will be in software to automate data center management, a particular focus for the CIRC, whose industry research includes projects on smart monitoring, adaptive thermal management, and autonomous fault prediction and predictive maintenance.
This is particularly important with the rise of edge computing, which extends data processing and storage closer to the growing universe of devices and sensors at the edge of the network. The creation of thousands of distributed compute and storage nodes will also require new approaches, said Ghosh, both in the cost of the facilities and in the strategies to support them.
“If we really have mass-produced edge computing everywhere, there’s not enough facilities people to maintain them,” said Ghosh, who sees a strong need for software that can make control decisions in real time if edge computing is to work.
That includes the use of sensors and artificial intelligence to detect anomalies, using sound and vibration profiles to identify when equipment is experiencing problems and requires attention. The McMaster lab is developing these technologies, as is LitBit, which has recently deployed AI agents to monitor generators at ROOT Data Centers in Montreal.
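As a rough illustration of how a vibration profile can flag failing gear, here is a minimal sketch; it is not LitBit's or CIRC's actual method. It learns a baseline RMS level from known-healthy sensor windows and raises an alert when a new window drifts more than a few standard deviations away.

```python
import math

# Minimal vibration-anomaly sketch: learn a baseline RMS from healthy windows,
# then flag windows whose RMS deviates too far. Illustrative only; production
# systems use far richer acoustic/vibration features and trained models.

def rms(window: list[float]) -> float:
    """Root-mean-square amplitude of one sensor window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def fit_baseline(healthy_windows: list[list[float]]) -> tuple[float, float]:
    """Mean and standard deviation of RMS across known-healthy windows."""
    levels = [rms(w) for w in healthy_windows]
    mean = sum(levels) / len(levels)
    var = sum((v - mean) ** 2 for v in levels) / len(levels)
    return mean, math.sqrt(var)

def is_anomalous(window: list[float], mean: float, std: float,
                 threshold: float = 3.0) -> bool:
    """Flag a window whose RMS sits more than `threshold` sigmas off baseline."""
    return abs(rms(window) - mean) > threshold * std
```

Trained on a few minutes of known-good accelerometer or microphone data, a detector like this surfaces deviations for a human operator, or a higher-level AI agent, to triage.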
“In the next 10 years, we will see a situation where orchestration allows data centers to be operated without a maintenance person,” Ghosh predicted. “From a single pane of glass, you should be able to maintain 1,200 to 2,000 data centers.”
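Operating 1,200-plus sites from one console implies aggressive aggregation: no operator watches every sensor, so the software must roll thousands of telemetry feeds up into a short list of sites that actually need attention. A hedged sketch of that rollup, with invented field names and thresholds:

```python
from dataclasses import dataclass

# Hypothetical fleet rollup for a "single pane of glass" view. Field names and
# thresholds are invented for illustration; a real orchestration layer would
# sit atop telemetry pipelines and automated remediation, not a simple list.

@dataclass
class SiteStatus:
    site_id: str
    healthy: bool          # all subsystems nominal
    open_faults: int       # unresolved fault predictions
    pue: float             # power usage effectiveness

def needs_attention(s: SiteStatus, pue_limit: float = 1.6) -> bool:
    """A site surfaces on the console only if it breaches a rule."""
    return (not s.healthy) or s.open_faults > 0 or s.pue > pue_limit

def single_pane(fleet: list[SiteStatus]) -> list[SiteStatus]:
    """Collapse the whole fleet to the sites an operator must look at."""
    return sorted((s for s in fleet if needs_attention(s)),
                  key=lambda s: s.open_faults, reverse=True)
```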