For the fourth installment of our Fourth Quarter 2024 Executive Roundtable, we asked our panel of distinguished data center industry leaders how the massive build-out of AI infrastructure and its associated power demand is impacting prospects for future data center investment and planning.
On a live panel that touched on this topic at the inaugural DCF Trends Summit (Sept. 4-6), Bill Kleyman, a data center and cloud industry veteran with more than two decades of experience who is currently CEO and co-founder of data center AI acceleration firm Apolo, memorably opined that both "power and bravery" are essential to succeeding in the age of AI.
Emphasizing the need for robust, well-connected networks to support AI deployment, especially in distributed environments, Kleyman at one point noted, “People want to do AI where the data lives, and the way you do that is through really well-connected networks."
Notably, in terms of the risks associated with the rapid outlay of AI infrastructure in data centers, Kleyman identified the inevitably rapid obsolescence of GPUs as a potential challenge. He said: "While we're all excited about AI and generative technologies, the industry is muted about the 18-month cycles of planned obsolescence tied to the largest CAPEX purchases in tech history. For example, a node of eight [Nvidia] H100 GPUs might cost $1.5 million today, but within two years, the newer B200 GPUs will render it less competitive."
This accelerated depreciation cycle, Kleyman said, demands turnover ratios five to six times faster than traditional models, making GPU rentals something of a financially precarious game. Kleyman further highlighted the ripple effect this dynamic could create across the industry's AI ecosystem, from suppliers like Dell and Supermicro to service providers like CoreWeave, which depend on fast capital recovery before the next generation of GPUs hits the market.
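To put rough numbers on the math Kleyman describes, the back-of-the-envelope sketch below annualizes his quoted $1.5 million eight-GPU node cost over an 18-month competitive lifespan and compares it with a traditional depreciation horizon. The 7.5-year horizon used here is our own assumption, chosen only to illustrate the "five to six times faster" turnover he cites, not a figure from the panel.

```python
# Back-of-the-envelope sketch of the GPU depreciation math described above.
# The $1.5M node cost and 18-month cycle come from Kleyman's quote; the
# 7.5-year "traditional" horizon is an assumed comparison point.

NODE_COST = 1_500_000          # eight-GPU H100 node cost, per the quote (USD)
AI_CYCLE_YEARS = 1.5           # ~18-month competitive lifespan
TRADITIONAL_CYCLE_YEARS = 7.5  # assumed traditional depreciation horizon

ai_recovery_per_year = NODE_COST / AI_CYCLE_YEARS
traditional_recovery_per_year = NODE_COST / TRADITIONAL_CYCLE_YEARS
turnover_multiple = TRADITIONAL_CYCLE_YEARS / AI_CYCLE_YEARS

print(f"Capital recovery needed on an 18-month cycle: ${ai_recovery_per_year:,.0f}/year")
print(f"Capital recovery needed on a 7.5-year cycle:  ${traditional_recovery_per_year:,.0f}/year")
print(f"Required turnover is roughly {turnover_multiple:.0f}x faster")
```

Under these assumptions, a $1.5 million node must recover about $1 million per year on an 18-month cycle versus roughly $200,000 per year on the traditional schedule, which is the kind of gap that makes fast capital recovery existential for GPU rental providers.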
Our Executive Roundtable for the Fourth Quarter of 2024 includes the following seasoned data center industry leaders:
- Pat McGinn, Chief Operating Officer, CoolIT Systems
- Steve Zielke, Marketing Manager - Global Channel Distribution & Data Centers, Rehlko
- Phillip Marangella, Chief Marketing and Product Officer, EdgeConneX
- Steven Carlini, Vice President of Innovation and Data Center, Schneider Electric
- Danielle Rossi, Global Director – Mission Critical Cooling, Trane
Now let's look at the fourth installment in the series for our Executive Roundtable for the Fourth Quarter of 2024.
Data Center Frontier: From your perspective, how is the massive build-out of AI infrastructure and its associated power demand impacting prospects for future data center investment and planning?
Pat McGinn, CoolIT: The rapid build-out of AI infrastructure is fundamentally reshaping the data center industry. AI workloads demand unprecedented power densities and advanced cooling solutions, placing immense pressure on existing infrastructure and driving a re-evaluation of investment priorities. From a planning perspective, operators are increasingly focusing on scalable, energy-efficient cooling technologies and renewable energy integration to meet these growing demands. However, the challenges extend beyond infrastructure: AI’s growth necessitates more robust site selection strategies, closer collaboration across supply chains, and future-proofing investments to adapt to the unpredictable pace of technological advancement.
As the industry invests in AI infrastructure, it must also navigate broader considerations—ensuring sustainability, aligning with regulatory requirements, and balancing short-term needs with long-term resilience. This period of transformation presents an opportunity for data centers to redefine their role as enablers of innovation while maintaining a steadfast commitment to environmental responsibility.
Steve Zielke, Rehlko: The onset of AI infrastructure is lengthening the data center investment and planning horizon.
The growth conversations need to start with the source of the industry’s energy, as developers explore the build-out of varied types of infrastructure to modernize our power grid and create more capacity and resiliency within the system.
Aside from grid power, AI’s growth is extending the planning horizon for all data center components, forcing longer-term purchasing commitments and unique purchasing terms.
For example, we are beginning to establish a base of orders into 2028.
Phillip Marangella, EdgeConneX: The impact of AI on the data center space is unprecedented. While the Internet and Cloud eras were long, slow burns in demand and build-out, AI requires massive scaling in capacity almost instantly. Similarly, density demands are growing dramatically in a step-scale fashion, which is also challenging builders and operators of data centers. The capital demands coming from this scale are equally massive, and tens to hundreds of billions of dollars will be required to support AI in the coming years.
Steven Carlini, Schneider Electric: There has never been a time in history when investors have backed new data center projects as aggressively as they are today. Data center developers that have successfully negotiated access to low-carbon grid power and have a resource-efficient data center design and detailed production plan have no trouble securing funding. However, the availability of grid power is limiting new construction and forcing data center developers to seek creative alternatives, including adding primary power on-site. Solutions like natural gas turbines are a proven technology today, but may not be the most environmentally friendly. Fuel cells are a possibility, but they optimally require green hydrogen, which is not yet available at scale or at a reasonable cost. Nuclear SMRs (small modular reactors) are carbon-free, but they are at the very beginning of their development and will need to go through regulatory approval.
Danielle Rossi, Trane: Right now, discussions about data centers and power availability go hand in hand. The increased usage of AI has raised the density of an already expanding power footprint. We are beginning to see the diversification of site selection, as well as a rise in alternative power sources, reflected in several nuclear announcements this year. Cooling design for AI has been a very popular topic this year, with liquid cooling fully entering active designs. The next few years will be a learning period for assessing new chip densities and their compute power versus cooling methodologies and their associated heat rejection. Additionally, the creation of design and testing standards for these new builds will assist with future planning.
Matt Vincent
A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.