Moves by Vertiv and Schneider Electric Typify Data Centers' AI Power and Cooling Lift for NVIDIA GPUs

May 1, 2024
Recent power moves by market leaders Vertiv and Schneider Electric in the direction of NVIDIA and AI typify a larger trend conspicuously rippling through the data center industry.

Fresh off news in March that it had joined the NVIDIA Partner Network (NPN), bringing its expertise in high-density data center power and cooling infrastructure to bear on the adoption of accelerated computing and artificial intelligence (AI) workloads, Vertiv last month unveiled a new line of end-to-end AI power and cooling solutions. 

Intended to simplify the selection and deployment of data center infrastructure within North America, the newly launched Vertiv 360AI platform is designed to accelerate high-performance computing (HPC) and AI adoption through pre-engineered infrastructure solutions, digitized management, and end-to-end service. The company claims use of the 360AI platform can yield deployments up to 2x faster than typical infrastructure installations.

The new offering of high-density data center infrastructure solutions is designed to support the higher cooling and power requirements of the accelerated computing IT stack. Intended as a streamlined path to power and cool AI workloads of all sizes, the Vertiv 360AI systems include power, cooling, enclosures, end-to-end lifecycle services, and digitized management. 

Vertiv contends the new platform enables customers to realize faster deployment times by eliminating design cycles and minimizing the likelihood of supply issues during installation and commissioning. 

The company notes the 360AI platform also allows for flexibility and customization, reusing or retrofitting existing cooling systems when possible to decrease deployment cost and scope.

Scoping 360AI 

The Vertiv 360AI platform includes solutions ranging from support for AI test pilots and edge inferencing to full-scale AI data centers. Initial Vertiv 360AI models can power and cool up to 100 kW per rack of GPUs running AI workloads. 
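For a rough sense of scale, the back-of-envelope sketch below (our illustration, not a Vertiv specification) maps a 100 kW rack budget onto today's 8-GPU accelerated servers, assuming a commonly used planning figure of roughly 10.5 kW per server:

```python
# Back-of-envelope sketch only; the per-server figure is an assumed planning
# value for an 8-GPU accelerated server, not a Vertiv number.
RACK_BUDGET_KW = 100.0      # Vertiv 360AI initial per-rack power/cooling capacity
SERVER_KW = 10.5            # assumed draw of one 8-GPU server, incl. CPUs and fans
GPUS_PER_SERVER = 8

servers_per_rack = int(RACK_BUDGET_KW // SERVER_KW)
print(f"~{servers_per_rack} servers (~{servers_per_rack * GPUS_PER_SERVER} GPUs) per 100 kW rack")
```

Actual counts will vary with server configuration, in-rack networking, and derating practices.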

Flexible cooling strategies for Vertiv 360AI include air-to-liquid, liquid-to-air, liquid-to-liquid, and liquid-to-refrigerant models. 

Vertiv 360AI also includes prefabricated modular solutions, including IT modules, cooling modules, and power modules that may be combined and scaled as building blocks to provide capacity for current and future infrastructure needs. 

For customers without chilled water available, Vertiv 360AI can also include indoor split chillers to provide liquid cooling with refrigerant-based heat rejection.

Vertiv's standard portfolio offering of cooling solutions for AI and HPC is also available, including rear door heat exchangers, direct-to-chip liquid cooling and coolant distribution units (CDUs), high-capacity free-cooling chillers, and large capacity chilled water-based prefabricated modular data center solutions. 

Available power solutions include uninterruptible power supply (UPS) with lithium-ion batteries, busbar, switchgear, and battery energy storage systems (BESS) to support microgrid applications.

“Vertiv’s goal is to prepare our customers for successful AI adoption,” said Brad Wilson, Vice President of Technology at Vertiv. “Our Vertiv 360AI portfolio offers a simplified path for scalable AI infrastructure, addressing the new challenges that data center professionals and systems integrators are trying to solve now, including retrofits of air-cooled edge and enterprise data centers, and build-outs of hyperscale green fields.”

NPN Positioning

As alluded to above, Vertiv is now a "Solution Advisor: Consultant" partner in the NPN (NVIDIA Partner Network), a global program for technology partners who offer solutions built on, or powered by, NVIDIA technologies.

Among the program's cohort of software vendors, cloud service providers, and system integrators, Vertiv said it joined the network to offer its expertise in addressing the unique infrastructure challenges presented by accelerated computing. 

“Vertiv has collaborated with NVIDIA in research, development and engineering for multiple years, designing innovative products and solutions that support the deployment of NVIDIA technologies globally,” said Giordano (Gio) Albertazzi, CEO at Vertiv. 

The NPN program provides access to a range of benefits, including technical support, training, and collaboration opportunities, helping partners deliver innovative solutions to their customers. 

Vertiv's CEO Albertazzi added:

“We have combined our leadership in power and cooling solutions with NVIDIA’s cutting-edge platforms to help meet the demands of the most compute-intensive applications and support the deployment of AI infrastructure across the globe. Now, we are collaborating to build state-of-the-art liquid cooling solutions for next-gen NVIDIA accelerated data centres powered by the GB200 NVL72 systems.” 

On April 24, while updating its Full Year 2024 Outlook in its financial results for the first quarter ended March 31, Vertiv reported 60% organic orders growth and 8% net sales growth for Q1.

Schneider Electric Moves In On NVIDIA Designs for AI Data Centers

For its part in what might be called the data center industry's "NVIDIA Spring" of 2024, Schneider Electric in March announced a collaboration with NVIDIA on designs for AI data centers. 

In the collaboration, Schneider is leveraging its expertise in data center infrastructure and NVIDIA's advanced AI technologies to introduce the first publicly available AI data center reference designs. Schneider said the collaboration will optimize data center infrastructure while paving the way for advancements in edge AI and digital twin technologies.

Offering a framework for implementing NVIDIA's accelerated computing platform within data centers, the newly launched reference designs, the partners said, redefine benchmarks for AI deployment and operation within data center ecosystems, with a focus on optimizing performance, scalability, and energy efficiency.

"We're unlocking the future of AI for organizations," said Pankaj Sharma, Executive Vice President, Secure Power Division & Data Center Business, Schneider Electric. "By combining our expertise in data center solutions with NVIDIA's leadership in AI technologies, we're helping organizations to overcome data center infrastructure limitations and unlock the full potential of AI. Our collaboration with NVIDIA paves the way for a more efficient, sustainable, and transformative future, powered by AI."  

Schneider Electric-NVIDIA Collaboration: Phase One

In the first phase of the collaboration, Schneider Electric will introduce data center reference designs tailored for NVIDIA accelerated computing clusters and built for applications such as data processing, engineering simulation, electronic design automation, computer-aided drug design, and generative AI. 

Special focus will be given to enabling high-power distribution, liquid-cooling systems, and controls designed to ensure simple commissioning and reliable operation of these extreme-density clusters. 

Through the collaboration, Schneider Electric said it aims to provide data center owners and operators with the tools and resources necessary to seamlessly integrate new and evolving AI solutions into their infrastructure, enhancing deployment efficiency and ensuring reliable life-cycle operation.

Addressing the evolving demands of AI workloads, Schneider said the reference designs will offer a framework for implementing NVIDIA's accelerated computing platform within data centers, while optimizing performance, scalability, and overall sustainability. 

Partners, engineers, and data center leaders can employ the reference designs for existing data center rooms that must support new deployments of high-density AI servers and new data center builds that are fully optimized for a liquid-cooled AI cluster.

"Through our collaboration with Schneider Electric, we're providing AI data center reference designs using next-generation NVIDIA accelerated computing technologies," remarked Ian Buck, Vice President of Hyperscale and HPC at NVIDIA. "This provides organizations with the necessary infrastructure to tap into the potential of AI, driving innovation and digital transformation across industries."

NVIDIA Roadmap Includes Digital Twin

In addition to the data center reference designs, under the new partnership, AVEVA, a subsidiary of Schneider Electric, will connect its digital twin platform to NVIDIA Omniverse, delivering a unified environment for virtual simulation and collaboration. 

The partners said the integration will enable seamless collaboration between designers, engineers, and other data center stakeholders, accelerating the design and deployment of complex systems while helping to reduce time-to-market and costs.

"NVIDIA technologies enhance AVEVA's capabilities in creating a realistic and immersive collaboration experience underpinned by the rich data and capabilities of the AVEVA intelligent digital twin," said Caspar Herzberg, CEO of AVEVA

In the collaboration with NVIDIA, Schneider Electric also plans to explore new use cases and applications across industries.

Herzberg added, "Together, we are creating a fully simulated industrial virtual reality where you can simulate processes, model outcomes, and effect change in reality. This merging of digital intelligence and real-world outcomes has the potential to transform how industries can operate more safely, more efficiently and more sustainably."

Industry Front Runners Lead the Way

As noted at the outset, the recent moves by market leaders Vertiv and Schneider Electric toward NVIDIA and AI typify a larger trend rippling across the data center industry, as a string of further announcements this spring attests.

JetCool

This week, JetCool announced the availability of its liquid cooling module for NVIDIA’s H100 SXM and PCIe GPUs. 

Unveiled initially at the 2023 Supercomputing Conference, JetCool's SmartPlate technology has undergone extensive performance validation with NVIDIA’s H100 GPUs. 

JetCool contends the technology surpasses conventional air cooling methods, delivering a performance improvement of 82%, while reducing total power consumption in AI clusters by 15%. 

The SmartPlate employs JetCool’s targeted microconvective liquid cooling technology. It is billed as a breakthrough in single-phase, direct-to-chip liquid cooling for meeting computing speed and sustainability demands as compared to other liquid cooling methods. 
The JetCool technology not only optimizes rack density and power efficiency, but also promotes heat reuse by effectively utilizing warm inlet coolant at temperatures up to 60°C under full workload conditions.

“Our NVIDIA H100 SmartPlate is not just a product, it’s a paradigm shift in data center liquid cooling technology,” contends Dr. Bernie Malouin, CEO and Founder of JetCool. “This breakthrough signifies JetCool’s commitment to pushing the boundaries of what's possible in liquid cooling performance and sustainability to ensure our customers can meet the increasing demands of high-performance computing environments, without compromising environmental values.”

Per JetCool, key advantages of the company's NVIDIA H100 SmartPlate include:

  • Reduces thermal resistance of the H100 GPU from 0.122°C/W to 0.021°C/W.
  • Eliminates costly refrigeration cycles and maintains peak performance even in environments with elevated coolant temperatures.
  • Lowers GPU temperatures by 35°C while using just 2% of the cooling power.
  • Reduces AI cluster power consumption by 15%, while lowering noise levels by up to 13 dB.

JetCool’s SmartPlates for NVIDIA H100 PCIe and NVIDIA H100 SXM5 are currently available.
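Those thermal resistance figures translate directly into chip temperature rise via ΔT = R_th × P. The short sketch below is our illustration, not JetCool's: it assumes an H100 SXM board power of roughly 700 W and treats the quoted values as chip-to-coolant resistances.

```python
# Illustrative only: temperature rise above coolant, dT = R_th * P.
# The ~700 W figure is an assumed H100 SXM board power, not a JetCool number.
P_GPU_W = 700.0
R_TH = {"baseline air cooling": 0.122, "SmartPlate": 0.021}   # °C/W, as quoted by JetCool

for name, r in R_TH.items():
    print(f"{name}: ~{r * P_GPU_W:.0f} °C rise above coolant")
```

By that rough measure, a chip-to-coolant resistance near 0.02°C/W is what makes 60°C-class inlet coolant, and therefore heat reuse, practical at full GPU load.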

Hyve Solutions

Also this week, Hyve Solutions Corporation, a wholly owned subsidiary of TD SYNNEX Corporation (NYSE: SNX) and a provider of hyperscale digital infrastructures, announced it has been named a design partner for the NVIDIA HGX platform. The company said the designation marks a significant milestone in Hyve’s focus on accelerating AI in the data center and at the edge.

As noted by Hyve, NVIDIA HGX is a high-performance computing platform designed for AI, machine learning (ML), and deep learning (DL) workloads; it offers a standardized architecture, enables seamless communication between GPUs across servers or clusters, and facilitates scalability without extensive architectural modifications. This flexibility allows for customization to meet diverse application needs while optimizing performance and efficiency.
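For readers less familiar with what "communication between GPUs" means in practice, the hedged sketch below (our illustration; PyTorch and NCCL are not mentioned in Hyve's or NVIDIA's announcement) shows the kind of collective operation, an all-reduce across GPUs, that HGX-class interconnects are built to accelerate:

```python
# Illustrative multi-GPU all-reduce using PyTorch's NCCL backend.
# Run on a multi-GPU node, e.g.: torchrun --nproc_per_node=<num_gpus> this_script.py
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")        # NCCL rides on NVLink/NVSwitch where present
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)

    # Each GPU contributes its own tensor; all_reduce sums them across all GPUs.
    x = torch.ones(1024, device="cuda") * (dist.get_rank() + 1)
    dist.all_reduce(x, op=dist.ReduceOp.SUM)

    if dist.get_rank() == 0:
        print("all-reduce complete; first element:", x[0].item())
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```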

“NVIDIA’s HGX platform empowers organizations worldwide with powerful performance and scalability,” said Steve Ichinaga, President, Hyve Solutions. “Designation as a design partner underscores our commitment to meet the evolving needs of our customers through the swift and efficient creation of next-generation AI solutions that drive transformation and shape the future of computing.”

As an NVIDIA HGX design partner, Hyve offers a wide range of NVIDIA AI solutions that are optimized for the NVIDIA H100, H200, and Blackwell GPUs, while incorporating foundational industry technologies such as liquid cooling and DC busbar power architectures. 

In addition, the company says it possesses the expertise, along with the capacity, infrastructure, and power required, to deliver rack-scale and fully tested cluster solutions.

“AI is a key driver of growth for organizations worldwide looking to develop their infrastructures and high-performance computing workloads,” said Craig Wiener, Director of Strategic Partners at NVIDIA. “As an NVIDIA HGX design partner, Hyve Solutions is empowered to help design and manufacturing teams across organizations overcome their most complex data center infrastructure challenges.”

As a fully vertically integrated original design manufacturer with US-based SMT operations, Hyve emphasized how it leverages its extensive design and manufacturing expertise, experience, and global footprint to deploy AI data center architectures quickly and efficiently.

Accelsius

Meanwhile, also in April, Accelsius unveiled its NeuCool solution, a patented two-phase direct-to-chip system that cools beyond 1,500 W per server chip, providing headroom for the generational performance gains driven by AI data center demand and high-performance chips.

The company noted that its NeuCool system is compatible with existing data center and edge infrastructure and leverages Accelsius’ North American supply chain and production capabilities to scale quickly with demand.

Accelsius bills NeuCool as a water-free, closed-loop system readily adaptable to existing facilities, and contends its two-phase direct-to-chip in-rack technology offers an estimated 50% savings in energy costs and up to an 80% reduction in CO2 emissions.

In the system, highly engineered vaporators (i.e., cold plates) are mounted directly to targeted hot-spot chips. Rather than water, the company says, "an eco-friendly, safe" dielectric refrigerant flows through the vaporators, where it nucleates into a vapor. That vapor then travels through an industrial manifold to an intelligent Platform Control Unit (iPCU), condensing back into a liquid in the closed loop and returning to the vaporators for additional cooling.
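As a rough illustration of the physics at work (our sketch, not Accelsius data), the energy balance for a two-phase cold plate is Q = ṁ × h_fg, so the refrigerant flow needed to absorb a given chip load scales with the fluid's latent heat of vaporization. Assuming an illustrative latent heat of ~150 kJ/kg, in the range of engineered dielectric fluids, a 1,500 W chip boils off on the order of 10 grams of refrigerant per second:

```python
# Illustrative energy balance for two-phase direct-to-chip cooling: Q = m_dot * h_fg.
# The latent heat below is an assumed, representative value; Accelsius' working
# fluid and its properties are not disclosed in this article.
Q_CHIP_W = 1500.0          # per-chip heat load cited for NeuCool (W)
H_FG_J_PER_KG = 150_000.0  # assumed latent heat of vaporization (J/kg)

m_dot_kg_s = Q_CHIP_W / H_FG_J_PER_KG
print(f"~{m_dot_kg_s * 1000:.0f} g/s of refrigerant vaporized per 1,500 W chip")
```

Because the heat is absorbed as latent heat at near-constant temperature, two-phase loops can hold chips within a tight temperature band with comparatively modest flow rates.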

The NeuCool architecture is driven by the iPCU (a highly-engineered CDU) with enterprise-grade redundancies, industrial components and extensive safety testing to maximize system uptime. The company emphasizes that NeuCool’s modular design enables seamless integration into existing data center facilities and at the edge via water-cooled doors, dry coolers or other heat rejection methods.

The Accelsius NeuCool solution is also supported by professional services spanning system architecture, integration, deployment, and maintenance functions through an expanding network of approved service providers with liquid cooling expertise. The NeuCool platform was showcased at this year's Data Center World (Apr. 15-18) in Washington, D.C.

Carrier Ventures

Also in April, Carrier Global Corporation (NYSE: CARR), a global leader in intelligent climate and energy solutions, announced that its venture group, Carrier Ventures, is leading an investment and technology partnership with Strategic Thermal Labs, LLC (STL). Through this investment, Carrier will leverage STL's innovative technology to bring forth a new liquid cooling solution for data centers. 

Carrier expects the liquid cooling market to approach $3-5 billion by 2028, driven by greenfield AI data center builds that will involve much higher rack power densities. The company estimates that liquid cooling accounts for roughly 2% of overall cooling solutions within data centers today, a share expected to increase to roughly 30% by 2028.

"Direct-to-chip liquid cooling is significantly more efficient than traditional cooling methods and critical to data centers of the future as AI and other technologies continue to fuel the global demand for high-density computing," said Ajay Agrawal, Senior Vice President, Global Services, Business Development & Chief Strategy Officer, Carrier. "Together with STL, we will help to significantly reduce data center energy consumption, while also providing customers with a highly scalable solution."

As part of the financial investment, Carrier will receive a seat on STL's board, along with exclusivity for a highly differentiated solution.

The investment in STL follows Carrier's acquisition of data center infrastructure management (DCIM) provider Nlyte in 2021. Carrier's Nlyte DCIM tools share detailed information between HVAC equipment, power systems and servers/workloads that run within data centers, enabling better transparency and control of the infrastructure for improved uptime.

"Data center power consumption is surging at unprecedented rates with a significant portion of that power increase belonging to inefficient cooling technology," noted Austin Shelnutt, President, STL. 

"The world needs highly innovative cooling solutions that can enable the next generation of computer infrastructure while using far less energy in the process. Coupling the legacy of Carrier's energy efficient infrastructure solutions with STL's unique depth and breadth of expertise in data center cooling is highly synergistic. We believe this partnership will be incredibly powerful in the fight to mitigate global energy consumption."

Through Carrier Ventures, Carrier invests in sustainable innovations and disruptive technologies enabling the future of building and cold chain management. STL is an Austin, Texas-area heat transfer research and development company known for its expertise in data center liquid cooling; it has provided the world's largest server manufacturers and data center operators with practical insights into both contemporary and future data center cooling technologies.

Carrier Ventures is an important accelerator of Carrier's 2030 Environmental, Social & Governance (ESG) Goal to help customers avoid more than 1 gigaton of greenhouse gas emissions by 2030. 

Keep pace with the fast-moving world of data centers and cloud computing by connecting with Data Center Frontier on LinkedIn, following us on X/Twitter and Facebook, and signing up for our weekly newsletters.

About the Author

Matt Vincent

A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.
