DCF Trends Summit: Top 5 Data Center Trends to Watch for 2025

Nov. 27, 2024
This article recounts in full the discussions from the closing panel, titled above, of the inaugural Data Center Frontier Trends Summit, held Sept. 4-6, 2024, in Reston, Virginia.

The closing panel session of this year's inaugural Data Center Frontier Trends Summit (Sept. 4-6), titled "Top 5 Data Center Trends to Watch for 2025," began with an introduction from DCF Editor in Chief Matt Vincent, the voice behind the event's transitions. Data Center Frontier founder Rich Miller then took the stage and thanked attendees for staying through to the end, noting that the final session was a chance to identify the most dedicated DCF Trends participants: those intent on extracting as much knowledge as possible from the event.

Reflecting on his professional journey, Miller recounted how Data Center Frontier started in 2015 as a simple WordPress blog. "Over time, it grew into a prominent platform for discussions on data centers, leading to an event like this with 300 attendees and meaningful conversations," he said. Miller acknowledged the complexity of modern media and how his role as a journalist evolved into that of an accidental entrepreneur, and he credited the strong support of Endeavor Business Media, the organization behind the event, with making such a large production possible.

Miller opened the session by discussing the importance of trends and predictions in the data center industry. He highlighted Data Center Frontier's annual trend forecasting, which starts each January with the brand's traditional "8 Trends" article to shape industry discussions for the coming year. Each year's trends are followed by a year-end scorecard to assess how accurate these predictions were, with an annual assortment of "hits," "misses," and usually a few "too soon" trends that are still developing. Miller teased one of the recurring trends—the rise of liquid cooling—saying that its moment had finally arrived, signaling it as a key topic for discussion in the panel.

The Panel

Along with DCF's Miller, the session panelists included Erica Thomas, Executive Director of the Liquid Cooling Coalition (LCC), who brought a wealth of experience from previous roles at the Green Grid, the Information Technology Council, the U.S. Diplomatic Corps, and environmental agencies. Joining her was Bill Kleyman, a seasoned leader in the data center and cloud industry with over two decades of experience, currently the CEO of Apolo. Chris Downie, CEO of Flexential and previously the head of Telx Holdings, added veteran expertise in data center operations and hybrid IT. Lastly, Dan Crosby, CEO and founder of Legend Energy Advisors, brought over 15 years of experience in North American energy and environmental markets, along with the unique perspective of a founder frustrated by traditional energy brokerage models.

The panel discussion was organized to explore five key trending areas of the data center industry: network infrastructure, energy, artificial intelligence, cooling, and sustainability as it relates to the power grid and the industry at large. In the format presented below, Data Center Frontier asked each panelist a question focused on a particular industry trend, allowing them to provide their insights before opening the floor to responses from co-panelists.

DCF Trends Question 1:  Can you describe the main implications and challenges for data center operators in AI's growing demand for high-capacity networks, which has made dark fiber and broader network capacity into highly valuable assets?

The panel's first question was posed to Flexential CEO Chris Downie, who responded by acknowledging the challenges of predicting future trends in the industry, citing how the past few years had defied his expectations. He explained that, for the first 16 years of his career, things were relatively predictable, but recent years had thrown those assumptions into question, especially with AI driving unprecedented growth.

Downie emphasized that, while land, power, and data center space have long been critical resources, he now believes that the network itself is emerging as the next valuable asset facing potential scarcity. He explained that fiber networks have historically had ample capacity, but over the last five years, the demand for data and network infrastructure has grown exponentially. This demand, fueled by AI and the broader data center ecosystem, has increased fiber utilization significantly.

Highlighting the importance of fiber as an enabling technology for the operation of data centers, Downie pointed out that the network allows data centers to federate and connect across locations. However, he noted that network resources are not limitless, and the capacity available today may not be sufficient to meet future demands. He specifically referred to dark fiber, which is essential for creating these connections and supporting the large-scale operations of hyperscalers and AI systems. As demand for data grows, the network may become a more sought-after resource, potentially leading to increased competition for dark fiber.

Downie further explained that the pricing dynamics for network capacity have changed over time. Historically, the volume of data increased while prices per unit of network capacity dropped. However, if the demand for data continues to rise at the predicted rate, Downie suggested that both the volume and the price of network resources could increase, leading to a reevaluation of network infrastructure's value. This shift could signal a new phase where network resources are treated with a higher degree of appreciation due to their crucial role in supporting AI and data center growth.

DCF Trends Question 1: Panel Responses

In response to Downie's remarks about the growing importance of network capacity, Bill Kleyman underscored the need for robust, well-connected networks to support AI deployment, especially in distributed environments.

“People want to do AI where the data lives, and the way you do that is through really well-connected networks," said Kleyman. In further agreement with Downie, Kleyman noted how the rise of inference at the edge, distributed computing, and the expansion of secondary and tertiary markets all require a solid network architecture.

In rejoinder, Downie mentioned recent developments involving Microsoft and Lumen, suggesting that a deal in which Microsoft prepaid approximately $5 billion and secured prioritized access to Lumen's network resources might be an early indicator of the growing significance of network resources for AI. "That may be just the first step of what I was mentioning may happen here in the near future," he concluded.

DCF Trends Question 2:  Do you see a wave of data center facilities and campuses turning to natural gas generation in wake of the AI tsunami, how fast and far will that wave travel, and to what degree does NG then become an entrenched part of the data center energy equation beyond the time it takes to get to SMRs?

The second question of the session was initially posed to Legend Energy Advisors' Founder and CEO Dan Crosby, asking about the potential wave of data centers turning to natural gas generation in response to the growing demand driven by AI. The conversation shifted towards the impact of this demand on energy infrastructure, specifically regarding the potential role of natural gas in supporting AI-driven data center operations.  

In his response, Crosby shared his experience touring a large industrial facility, noting how many of the challenges facing the data center space today are not new but have become more concentrated. He explained, "The oil field has been waiting on transformers, setting up microgrids, doing distributed power, mobile power... The constraints and challenges that we're seeing are not new." Crosby also emphasized how data centers now require more flexible and scalable energy solutions, particularly given the dynamic demands of AI workloads. He compared the unpredictable nature of AI models to a fracking (hydraulic fracturing) operation, where power requirements can fluctuate rapidly, requiring mobile or distributed generation.

Crosby argued that natural gas is the most viable solution for meeting the current energy needs of data centers, especially given the shortage of alternative power sources. “There's a ton of gas... industrials getting paid to burn gas,” he pointed out, highlighting the abundance of natural gas and the shifting perspectives on its use in the data center sector. He also addressed the ongoing challenges with utilities and grid infrastructure, explaining how many data centers now rely on mobile generation solutions due to long delays in securing power from traditional utility sources. Crosby acknowledged that while small modular reactors (SMRs) are on the horizon, they remain a long-term solution, estimating they are still 10 to 15 years away from widespread deployment.

DCF Trends Question 2: Panel Responses

Reflecting on Crosby's insights, Kleyman emphasized that "power and bravery" are essential to succeed in the age of AI. He echoed Crosby's sentiment that the power sector is ripe for innovation, particularly in secondary markets and microgrids. Kleyman expressed hope for quicker advancements in SMRs, citing a recent conversation about bipartisan government support for nuclear technologies with Dr. Rian Bahran, Assistant Director for Nuclear Technology and Strategy in the White House Office of Science & Technology Policy (OSTP). Kleyman noted that while nuclear energy is a promising future solution, much education is needed to address concerns such as the safety and storage of uranium. "There's a lot of education that needs to happen," he said, emphasizing that the industry needs to be more proactive in tackling these challenges.

Also in response to Crosby, Flexential CEO Downie then shared his perspective on the role of power generation in his company's data center platform, which serves a wide range of customers outside of hyperscalers. He discussed the complexities of educating customers, particularly when it comes to the acceptability of natural gas and nuclear energy. "What do they think about it? Are they comfortable with gas, are they comfortable with nuclear?" Downie asked, underscoring the importance of navigating customer preferences while meeting the growing demand for energy. Downie acknowledged that the broader adoption of AI and the growth of hyperscalers may leave customers with few energy options, but said data centers still need to be mindful of their clients' concerns regarding the sources of their power.

DCF Trends Question 3:  Will the uptake and deployment of the long-awaited new NVIDIA chip give the data center liquid cooling industry the proverbial rocket fuel for a mainstream shift that the technology has had trouble sustaining, making 2025 the definitive LC tipping point?

The session's third question was posed to Liquid Cooling Coalition (LCC) Executive Director Erica Thomas to discuss the potential impact of new Nvidia chips on the liquid cooling sector. Thomas said she saw this development as a key opportunity for the liquid cooling industry to gain momentum. She acknowledged the growing demand for energy-efficient data center cooling solutions, which, combined with the introduction of advanced chips, has indeed led to a tipping point for liquid cooling in the data center sector.

Thomas took time to describe the Liquid Cooling Coalition as a new initiative she helped establish after leaving her role at the Green Grid and joining her new organization, CO2EFFICIENT. She said the idea for the coalition emerged approximately a year ago when several companies she had worked with at the Green Grid approached her with the request to form an advocacy group focused on promoting the adoption of liquid cooling. To gauge interest, she organized an initial meeting, which was attended by 35 companies and over 70 people. This strong turnout signaled significant industry interest, prompting the creation of the coalition. "We got started, and I like to brag that we had eight paying members before we even had a website," Thomas said, illustrating the level of enthusiasm for the initiative.

The mission of the LCC, Thomas explained, is to advocate for, educate about, and coordinate efforts across the industry to promote liquid cooling as a solution for data centers. She said one of the coalition's key messages is that liquid cooling is no longer just a concept discussed in labs but is now commercially available and feasible. She pointed out that liquid cooling can significantly address power constraints, which were a major topic at the conference. "One of the greatest ways to address that is... reduce the amount of power that you're using," Thomas said. Discussing how instead of simply demanding more electricity, data centers can use their existing power more effectively and efficiently, she cited studies showing that liquid cooling can reduce energy usage by about 40% on average.
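For readers who want to sanity-check that figure, here is a minimal back-of-the-envelope sketch of how a facility-level reduction of roughly that magnitude could arise. The PUE values and server-fan saving below are illustrative assumptions, not numbers cited by the LCC or the studies Thomas referenced.

```python
# Back-of-the-envelope sketch (not from the panel): how a facility-level
# saving of roughly 40% could arise. The PUE figures and fan-power fraction
# below are illustrative assumptions, not numbers cited by the LCC.

IT_LOAD_MW = 10.0          # assumed IT load for a hypothetical facility
PUE_AIR = 1.7              # assumed PUE for a legacy air-cooled design
PUE_LIQUID = 1.15          # assumed PUE with direct-to-chip liquid cooling
SERVER_FAN_SAVINGS = 0.10  # assumed share of IT power freed by slowing server fans

total_air = IT_LOAD_MW * PUE_AIR
total_liquid = IT_LOAD_MW * (1 - SERVER_FAN_SAVINGS) * PUE_LIQUID

print(f"Air-cooled facility draw:    {total_air:.1f} MW")
print(f"Liquid-cooled facility draw: {total_liquid:.1f} MW")
print(f"Reduction: {(1 - total_liquid / total_air) * 100:.0f}%")
```

Under those assumptions the liquid-cooled design draws roughly 39% less total power, in line with the average savings Thomas cited.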

Beyond energy savings, liquid cooling offers several other advantages, Thomas noted. It is more energy-efficient than air cooling because liquid is a better medium for heat transfer. It also requires less water, offers heat reuse opportunities, lowers total operating costs, enhances performance, and saves space. "There's just a number of reasons why," she said, explaining why liquid cooling should be a priority for data centers. The introduction of Nvidia's Blackwell architecture, which requires liquid cooling to operate, marks an even more significant turning point. "The shift... was that if you want to be in the AI game and part of supporting the AI compute efforts, you have to use liquid cooling," Thomas added. This shift reflects the increasing compute densities necessary for AI workloads, which air cooling can no longer handle effectively.

Thomas elaborated on the rapid growth in power and density requirements for data centers. A few years ago, a large data center typically operated in the 5 to 10 megawatt range, with racks consuming single-digit kilowatts. Today, data centers under construction are in the 30 to 60 megawatt range, with many development projects exceeding 100 megawatts. Rack power consumption has similarly increased, with the average rack now consuming around 12 kilowatts. Thomas noted that some data centers are even developing racks with up to 100 kilowatts of power, and possibly even one megawatt racks. “2025 will be the age of data center liquid cooling, because you can't do it without it,” Thomas concluded, emphasizing that the increasing compute densities will leave facilities with no choice but to adopt liquid cooling.
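As a rough illustration of what those density figures imply, the short sketch below uses the facility size and rack densities Thomas mentioned, with the simplifying assumption that the entire facility capacity is available as IT load, to show how sharply rack counts shrink as density rises.

```python
# Quick arithmetic on the densities cited above, with the simplifying
# assumption that the full facility capacity is available as IT load.

FACILITY_MW = 60  # upper end of the "30 to 60 megawatt" range Thomas cited

for rack_kw in (8, 12, 100, 1000):  # legacy, today's average, high-density, 1 MW rack
    racks = FACILITY_MW * 1000 / rack_kw
    print(f"{rack_kw:>5} kW/rack -> ~{racks:,.0f} racks")
```

At 12 kW per rack, a 60 MW facility supports on the order of 5,000 racks; at 100 kW per rack, about 600; at a hypothetical 1 MW per rack, only 60.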

DCF Trends Question 3: Panel Responses

Following Thomas's remarks, Legend EA's Crosby raised an interesting point about the relationship between efficiency and demand. "If you make the chip use less power, you're just going to use more chips," he remarked, referencing Jevons Paradox, the observation that improvements in efficiency tend to increase total consumption rather than reduce it. This, he argued, would only further drive the need for liquid cooling as compute workloads continue to ramp up.
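A toy illustration of that dynamic, using entirely made-up generation figures, is sketched below: per-chip power falls with each generation, but deployment grows faster, so fleet-level power keeps climbing.

```python
# Illustrative sketch of Jevons Paradox with hypothetical numbers: per-chip
# power drops generation over generation, but cheaper compute drives
# deployment up faster, so total fleet power still rises.

generations = [
    # (name, watts_per_chip, chips_deployed) -- all figures are made up
    ("Gen 1", 700, 100_000),
    ("Gen 2", 500, 250_000),
    ("Gen 3", 400, 600_000),
]

for name, watts, count in generations:
    total_mw = watts * count / 1e6
    print(f"{name}: {watts} W/chip x {count:,} chips = {total_mw:,.0f} MW fleet-wide")
```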

Crosby also highlighted the broader challenge of managing heat in data centers. "A computer basically turns electricity into heat," he said, noting that air cooling is ineffective at dissipating that heat unless it is used to heat a building. Liquid cooling, by contrast, is much more efficient at transferring heat, enabling denser computing operations without the drawbacks of air cooling. The introduction of Nvidia’s liquid-cooled chips, the panelists agreed, represents a game-changer in the industry. "We're talking about gigawatts of heat," Crosby emphasized, underlining the scale of the challenges ahead and the essential role of liquid cooling in addressing them.
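The physics behind that contrast is straightforward. The sketch below uses standard textbook property values for water and air; the flow rate and temperature rise are arbitrary illustrative choices, not figures from the panel.

```python
# Why liquid moves heat so much better than air: rough physics using standard
# property values; the flow rate and temperature rise are arbitrary choices.

WATER_J_PER_L_K = 4186   # volumetric heat capacity of water, ~4.19 kJ/L·K
AIR_J_PER_L_K = 1.2      # volumetric heat capacity of air, ~1.2 J/L·K

flow_l_per_min = 30      # same volumetric flow for both media
delta_t_k = 10           # assumed temperature rise across the load

def heat_removed_watts(cap_j_per_l_k: float) -> float:
    return cap_j_per_l_k * (flow_l_per_min / 60) * delta_t_k

print(f"Water carries ~{WATER_J_PER_L_K / AIR_J_PER_L_K:,.0f}x more heat per liter than air.")
print(f"30 L/min of water at a 10 K rise removes ~{heat_removed_watts(WATER_J_PER_L_K)/1000:.0f} kW;")
print(f"the same airflow removes only ~{heat_removed_watts(AIR_J_PER_L_K):.0f} W.")
```

Per liter and per degree, water carries roughly 3,500 times more heat than air, which is why rack densities in the tens or hundreds of kilowatts push operators toward liquid.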

LCC's Thomas also highlighted the potential for liquid cooling to concentrate and reuse heat, which could unlock significant opportunities. “Liquid enables you to concentrate the heat and then reuse the heat,” she said, pointing out that in Europe, some countries have already implemented regulations that promote heat reuse. "This heat, now concentrated, can be repurposed for applications in agriculture and aquaculture, representing a great business opportunity for those who can figure out innovative ways to reuse it," said Thomas. “There’s a lot of value, I mean gigawatts of value that’ll be added," affirmed Crosby.

“I think your coalition is in this industry at literally the perfect time,” said Kleyman. He shared his experience with retrofitting one client's data centers, transforming them from operating at 10 to 15 kilowatts per rack to 70 kilowatts. "This was achieved with a relatively simple solution: attaching a rear door heat exchanger (effectively a car radiator) to the back of the racks," he said. This modification significantly increased the density and efficiency of the data center. Kleyman further noted, “This facility literally became an AI powerhouse because they could support applications that much more effectively.” 

However, Kleyman also acknowledged that as the next phase of the client's build develops (an AI farm with 6,000 GPUs), the next step will involve direct-to-chip liquid cooling, emphasizing that while liquid cooling is more efficient, it also presents a learning curve. "The inlet temperature going in one of those rear door heat exchangers might be like 76 degrees or something, but the outlet is like 72, 71 degrees," he said. This means that operators can remove unnecessary cooling units within the server rooms, reducing equipment and increasing overall efficiency.

Kleyman continued, “We have reached the limitation of how much air can dissipate heat from mechanical equipment.” He emphasized that this is simply a part of technological evolution. “The sooner you accept that liquid cooling is going to be a part of your data center ecosystem, the sooner you can start to explore that,” he concluded, reinforcing the inevitability of liquid cooling's role in data centers. He noted that traditional colocation providers could easily begin implementing liquid cooling with existing technologies, even at lower densities.

Downie's rejoinder acknowledged the costs and challenges of liquid cooling adoption, raising concerns about its affordability for smaller enterprises. "The end customers that can do that are... the large, you know, hyperscale, the NVIDIAs, the Intels, the CoreWeaves of the world," he noted, pointing out that these large companies have the capital to deploy liquid cooling at scale.

In contrast, smaller enterprises may not be able to justify the expense, especially when air-cooled environments are still sufficient for many non-AI workloads. “It’s not something that we build for them. We'll get vendors to do that for them, but it's pretty dang expensive relative to, you know, an air-cooled environment,” Downie admitted. While he acknowledged that liquid cooling is essential for high-performance computing and AI workloads, he cautioned against making broad assumptions about its immediate need across all sectors.

DCF Founder Rich Miller's response also touched on the evolving nature of data center infrastructure. He acknowledged that many multitenant providers are adapting to accommodate both traditional air-cooled and high-density liquid-cooled workloads. "Some of the multitenant providers are also thinking this way. They're introducing high-density cooling products, but at the same time a lot of their customers... don't necessarily need that, at least not yet," Miller said. This hybrid approach, blending conventional and specialized cooling methods within the same facility, reflects how data center design is changing to meet diverse customer needs, he stated.

Miller also discussed the concept of 1-megawatt racks, which was a recurring topic of discussion at the conference. He expressed skepticism that such racks would be common by 2025, noting that while companies like Nvidia are exploring these possibilities in their labs, the current standard rack form factor (7 feet, 42U) is unlikely to accommodate such high-power configurations.

“The rack form factor is going to survive and be the same [but] we’re going to see form factors evolve as...the situation demands,” Miller predicted, highlighting the ongoing innovation in data center design. He speculated that some facilities will likely specialize in AI workloads, while others will need to support a broader range of tenants and workloads. He further projected that, as liquid cooling technologies advance, data centers will likely stratify into different tiers, with some focusing exclusively on high-density, AI-driven operations.

DCF Trends Question 4:  To what degree will the infrastructure for GPUs as a Service and other AI and supercomputing demands have to live in specialized builds in larger data centers, and how compelling are the retrofit and modular options and opportunities in this area?

The session's fourth question went to Apolo co-founder Bill Kleyman, who interpreted the query as essentially asking about the path forward for AI infrastructure. In answering, Kleyman acknowledged the need to balance risks and opportunities in addressing the growing demands of AI infrastructure.  

He shared the journey of his company, Apolo US, which was founded in 2019 under a different name and later pivoted to serve data centers and telcos. The shift was fueled by a customer the size of AT&T and Verizon combined that was seeking to compete with Amazon. By January 2023, Apolo had evolved into the industry's sole multi-tenant AI and MLOps layer specifically designed for data centers and telcos.

Kleyman highlighted a critical insight from his company's early strategy: "We realized a massive risk in this industry with the GPU rental gamble... It quickly became a race to the bottom—charging less per GPU per hour—which we opted out of because we had more to compete on." Kleyman said this pivot allowed Apolo to focus on the inference market rather than the volatile model-building space.

Risks In AI Infrastructure

Kleyman identified the rapid obsolescence of GPUs as a pressing challenge: "While we're all excited about AI and generative technologies, the industry is muted about the 18-month cycles of planned obsolescence tied to the largest CAPEX purchases in tech history. For example, a node of eight H100 GPUs might cost $1.5 million today, but within two years, the newer B200 GPUs will render it less competitive, requiring liquid cooling and cutting costs dramatically." 

This accelerated depreciation cycle, he noted, demands turnover ratios five to six times faster than traditional models, making GPU rentals a financially precarious game. He emphasized the ripple effects this creates across the ecosystem, from suppliers like Dell and Supermicro to service providers like CoreWeave, which depend on fast capital recovery before the next generation of GPUs hits the market. 

"The constant need to upgrade leaves very little room for financial recovery or profit," Kleyman warned, describing it as a ticking time bomb for those relying solely on GPU rental models.

Opportunities in Retrofitting and Inference

Amid these challenges, Kleyman underscored the potential of retrofitting existing infrastructure to support AI workloads. He shared an example of Scott Data Center, a traditional colocation provider, which adapted its architecture to accommodate AI needs. "Scott Data Center installed motorized rear-door heat exchangers and a 50-plus unit rack capable of handling six DGX units, retrofitting the facility to support liquid cooling. This innovation is driving a resurgence in raised floor designs," he explained.

This retrofitting allowed Scott Data Center to evolve from its 10-15 kW legacy design into a cutting-edge facility, housing up to 6,000 GPUs for AI inference workloads by 2026. This effort caters to a growing market of enterprises that prefer to keep their operations on-premises rather than shifting to the cloud. Kleyman further highlighted Apolo's unique focus on enterprise use cases for AI inference rather than speculative model-building efforts.

He shared real-world examples of transformative efficiencies: "One customer reduced the time spent creating a pricing spreadsheet from 10 hours to 1.5 hours using a private model template, achieving 80% completion and 99% accuracy. Another used Apolo for rapid document analysis via RAG (retrieval-augmented generation)." These practical applications demonstrate how retrofitted facilities and tailored AI solutions can unlock tangible benefits for enterprises.

Kleyman reiterated the hybrid future of data centers, where traditional workloads coexist with AI-driven demands: "Not everything is going to be AI. Exchange servers and SQL databases aren’t going anywhere, but every data center will evolve to support AI in some form." 

AI Infrastructure Risk and Opportunity

Overall, Kleyman underscored the duality of opportunity and caution in the rapidly evolving landscape of AI infrastructure, touching on the changing role of AI in data centers, the pace of technological advancement, and the risks inherent in current business models. In analyzing how infrastructure is adapting to meet enterprise demands and how risk is being redistributed across the ecosystem, he emphasized the importance of retrofitting existing data centers to support localized AI workloads.

Kleyman highlighted the value of meeting enterprise clients’ needs:  "To retrofit an ecosystem is critical, especially when enterprise users don’t want to go to the cloud. They want to run AI where their data lives. The predominant issue has been facilities that can’t handle future workloads or software layers that can’t support multi-tenancy."

He contrasted this approach with the "GPU rental gamble," explaining how Apolo focused on delivering a “full services layer” akin to Amazon’s cloud model. This strategy allowed them to charge higher rates—closer to $6 or $7 per hour—while providing a more comprehensive solution for enterprise clients.

Managing Risk in a Rapidly Evolving Market

A key theme of Kleyman's was the accelerating depreciation cycles of GPUs, a challenge highlighted by his reaction to a recent announcement heard on a separate DCF Trends Summit panel: "Digital Realty's Chris Sharp mentioned 33,000 H100 GPUs. My immediate reaction was, ‘That’s the coolest thing I’ve ever seen.’ But then I thought, ‘How is he managing that risk?’" 

Kleyman described a conversation with a CEO of a major colocation provider, who planned for a six-to-seven-year depreciation cycle with a matching customer lease. While this approach mitigates risk on paper, Kleyman raised a critical concern: "What happens if their business model is completely shattered in 24 months? Losing 5 to 10 megawatts from a customer is substantially more painful than replacing a few hundred kilowatts."

This anecdote underscores the high stakes in the AI data center market, where rapid innovation can destabilize even well-laid plans.

In response to such challenges, Kleyman advocated for targeting inference workloads rather than speculative model-building, noting that the former offers more consistent returns: "On our Apolo ecosystem, we have real enterprise customers running real use cases that show ROI in just a few months. Unlike Microsoft, which has spent billions on H100 GPUs without showing revenue, we focus on low-hanging fruit that simplifies business operations."

He cited examples such as reducing a client’s spreadsheet preparation time from 10 hours to 1.5 hours using private AI models. These practical, enterprise-ready solutions stand in contrast to the hype surrounding generative AI.

DCF Trends Question 4: Panel Responses

Drawing comparisons to the dot-com boom and bust of the late '90s and early '00s, DCF Founder Rich Miller reflected on how risk dynamics have changed: "In the dot-com era, startups sold stock to the public, and when demand didn't materialize, the startups collapsed, leaving data centers scrambling to fill space. Today, the risk is distributed differently. Hyperscale customers like Microsoft, Google, and Amazon lease space pre-construction, shifting risk away from developers."

Miller also emphasized how private capital now governs much of the sector, with large investors like Blackstone driving expansion. He said this model reduces risk for developers, but places pressure on hyperscalers to justify massive capacity investments. 

Balancing optimism with caution, Miller concluded with an acknowledgment of the uncertainties ahead. He noted the industry's current "AI digestion era," where stakeholders are pausing to evaluate ROI: "Investors are saying, ‘Cool, cool, cool—pump the brakes on more H100 purchases. Show us what you can actually do with this.’" 

Despite the risks, Miller emphasized his optimism about the sector’s long-term potential. Kleyman added a final perspective: "Risk is relative. People talk about bubbles, but this is still very early. Enterprises are actively finding ways to use AI, and while we may face challenges, the opportunities are enormous."

DCF Trends Question 5:  How best can the data center industry and the hyperscalers help utilities to speed grid development to meet the accelerated pace of internet and AI infrastructure demand?

In a compelling conclusion to the inaugural Data Center Frontier Trends Summit, in response to the panel's last question, DCF Founder Rich Miller addressed the critical intersection of data centers, sustainability, and the renewable energy transition. As the global demand for digital infrastructure surges, data centers and hyperscalers are poised to play a pivotal role in shaping a greener future.

Miller contended that data centers have always been pioneers of the renewable energy transition and sustainability. "The digital infrastructure sector, including hyperscale users and data center developers, will become the most important players in transitioning to a clean economy," he added. However, Miller also highlighted the challenges facing the energy sector, which has struggled to keep pace with the exponential growth of the internet and renewable energy adoption.

He pointed to a historical disconnect underlying the renewable energy gap: while the science of renewable energy has been clear for decades, progress remains slow. "We're nine years past the Paris Agreement, yet we're at a critical point," Miller said, citing the dual pressures of inadequate renewable energy supplies and the boom in AI demand.

Natural gas, Miller acknowledged, will serve as a data center energy stop-gap for the next few years. However, he stressed that the data center sector has long been a driving force behind renewable energy adoption. He credited early initiatives like the Greenpeace campaign against "dirty coal," which pushed companies like Facebook and Google to innovate with power purchase agreements (PPAs). These agreements ensured long-term support for renewable projects, enabling developers to finance solar and wind farms.  

Hyperscalers Leading the Charge  

Miller emphasized that hyperscalers remain the largest buyers of renewable energy globally, purchasing gigawatts annually. "Going back to 2018, data center players in Northern Virginia were sending letters to Dominion Energy saying, 'Give us more renewable energy. We will buy it.' That's still the case today," he said.

Miller noted that the sector has invested heavily in innovation, exploring solutions like battery storage, geothermal energy, and advanced energy management systems. While some technologies will take time to scale, their efforts are critical to bridging the renewable energy gap.  

Meanwhile, according to Miller, hyperscalers possess three key advantages:  

  1. Innovation: Research labs are developing cutting-edge solutions for energy challenges. 
  2. Capital: These companies have the financial resources to fund transformative energy projects. "If the world really wants AI, they have the money to make it happen," Miller stated.
  3. Market Position: Their dominance gives them leverage to influence energy policies and practices.  

Miller also acknowledged the role of major data center developers backed by wealthy investment firms like Blackstone, noting their capacity to contribute meaningfully to sustainability initiatives.  

A Call to Action  

Miller concluded by emphasizing the urgency of collaboration between data centers, hyperscalers, and the energy sector: "If we’re going to have a clean future, the hyperscale players in the data center sector will need to use a patchwork of technologies and strategies to make it happen—and drag the power grid and the energy sector along with them."  

As the DCF Trends Summit wrapped up, the audience was left with a clear message: the path to a sustainable future will rely heavily on the ingenuity, resources, and influence of the data center industry. 

In materials for the 2024 Data Center Frontier Trends Summit, an infographic outlining the closing session was promised. The following images are offered as fulfillment of that commitment. 

Fittingly enough for a topic so concerned with AI technologies and their impact, we fed the 'Top 5 Data Center Trends for 2025' questions into OpenAI's GPT-4, along with certain cues asking for not just one, but two, infographics. They are as presented here:

These wonky, weirdly technical renderings of data center industry trends as AI-generated infographics may in fact embody the burgeoning state of the greater AI infrastructure itself, as a concrete work-in-progress in a state of rapid self-manifestation, emerging from a mirage of seemingly myriad possibilities.

 


 

At Data Center Frontier, we not only talk the industry talk, we walk the industry walk. In that spirit, DCF Staff members may occasionally employ and humanize AI-assisted articles and content. The text of this article was created with help from OpenAI's GPT-4.

 

About the Author

Matt Vincent

A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.

About the Author

DCF Staff

Data Center Frontier charts the future of data centers and cloud computing. We write about what’s next for the Internet, and the innovations that will take us there.
