Executive Insights: Jack Pouchet from Vertiv

The Data Center Frontier Executive Roundtable features insights from industry executives with lengthy experience in the data center industry. Here’s a look at the insights from Jack Pouchet of Vertiv.

JACK POUCHET, Vertiv

Jack Pouchet is vice president of Market Development at Vertiv. Over the past 20 years, he has worked closely with major server manufacturers, large data center users, and leading mission-critical engineering firms to help define, architect, and create opportunities for advanced power and cooling technologies that improve day-to-day operational efficiency. Pouchet serves as a guest columnist for Environmental Leader and has written for numerous trade publications, including ASHRAE, AFCOM, Mission Critical, Electrical Contractor, EE Times and Data Center Journal. He is a frequent presenter at IT events, played a key role in the development of Energy Logic 2.0, a vendor-neutral approach to reducing data center energy use, and is a member of The Green Grid.

Here’s the full text of Jack Pouchet’s insights from our Executive Roundtable:

Data Center Frontier:  The digital transformation of our society is creating enormous volumes of data, even ahead of the mainstream adoption of next-generation technologies like 5G wireless, the Internet of Things, augmented reality and autonomous vehicles. As this deluge of data looms on the horizon, what are the implications for the data center industry, and for how and where all this data is stored and analyzed?

Jack Pouchet: The impending deployment of 5G, combined with the rapid growth of IoT and mobile technologies – not to mention smart vehicles – places inordinate demands on our core data centers, cloud and hyperscale data centers, and the network infrastructure bringing them together. These and other technologies (virtual reality, on-demand anywhere) will accelerate the demand for compute, storage and analytics at or near the edge.

Several architectures are likely to emerge based upon the volume and nature of the data and the application (life critical, video, social, business, etc.). Those models will ensure data is captured and acted upon at the right point in the system to deliver the value, service, and SLAs required for the application. Of course, data privacy and cybersecurity issues will need to be addressed.

Data Center Frontier:  We’ve recently featured headlines about the adoption of artificial intelligence (AI) as a tool in data center management. How do you view the potential for AI to help optimize and automate data centers, and what are the pros and cons of this technology?

Jack Pouchet: True artificial intelligence is a long way from reality, and the cons are all too easy to define. Just think HAL 9000 and Skynet. But we need not even look that far, as real-world examples of automation going awry surface almost weekly. A fine example is when semi-autonomous vehicles with “safety” features designed to automatically apply the brakes under certain conditions misinterpret data, fail to recognize that there is no danger, and cause accidents by braking unsafely.

On the “pro” side: Machine learning already is being adopted within advanced data centers as a means to improve the control of cooling systems and to act as the vision, hearing, touch, and smell of a super-facilities-manager. These systems keep track of known operating points and constantly look for the unknown, or any deviation from the norm, in an effort to optimize performance. This type of system will emerge to improve the day-to-day performance of the data center while reducing unnecessary preventive maintenance. These machine learning functions (or, in the distant future, AI) can identify potential faults far enough in advance for a technician to schedule a service visit at a time best suited to the operation of the data center.
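
To make the “deviation from the norm” idea concrete, here is a minimal sketch of the kind of monitoring described above: a rolling statistical check that flags cooling telemetry drifting outside its learned operating band. The sensor, sampling interval and thresholds are illustrative assumptions, not details of any Vertiv product.

```python
import numpy as np

def deviation_alerts(readings, window=288, z_threshold=3.0):
    """Flag samples that drift outside the learned operating band.

    readings:    1-D array from a single sensor (e.g. supply-air temperature, degC)
    window:      number of recent samples treated as "normal" (288 = one day
                 at an assumed 5-minute polling interval)
    z_threshold: how many standard deviations from the rolling mean counts
                 as a deviation worth scheduling a service visit for
    """
    readings = np.asarray(readings, dtype=float)
    alerts = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = history.mean(), history.std()
        if sigma > 0 and abs(readings[i] - mu) > z_threshold * sigma:
            alerts.append(i)
    return alerts

# Hypothetical usage: a slow drift in supply-air temperature is flagged
# well before it becomes a thermal event.
temps = np.concatenate([np.random.normal(24.0, 0.2, 1000),  # normal operation
                        np.linspace(24.0, 27.0, 200)])      # developing fault
print(deviation_alerts(temps)[:5])
```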

Data Center Frontier:  For some time we have seen predictions that rack power density would begin to increase, prompting wider adoption of liquid cooling and other advanced cooling techniques. What’s the state of rack density in 2018, and is density trending higher at all?

Jack Pouchet: Rack densities are top of mind for many within the industry, and for the most part we are seeing modest increases – from an average of about 6kW/rack to perhaps 8kW/rack. The colocation industry provides us with a fine barometer for current density and future trends, and large colocation facilities are being built out to accommodate somewhere in the range of 8 to perhaps 10kW/rack with provisions for up to 30kW/rack in select positions or zones. These facilities typically are air-cooled.

We are seeing some enterprise facilities continuing to consolidate hardware, virtualize applications, and migrate to much higher-density racks of 20 to 35kW. These racks often are cooled with either pumped refrigerant or rear-door heat exchangers, while the existing chilled-water (CW) or direct-expansion (DX) cooling system is upgraded for efficiency and optimized controls, since the balance of IT racks is still in the 8 to 10kW range. With all of these new compute capabilities and data analytics comes a requirement for ever more storage, and the good news for data center operators is that storage racks are relatively low density – making them easier to cool and manage. It’s not uncommon to see facilities with 60 percent of the racks outfitted with storage.
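
As a back-of-the-envelope illustration of that density mix, the sketch below totals the IT load for a hypothetical room split roughly along the lines described above; the per-rack figures for storage and high-density zones are assumed values for illustration only.

```python
# Hypothetical room mix, loosely following the densities described above.
# All per-rack figures are illustrative assumptions.
rack_mix = [
    # (rack count, kW per rack)
    (120, 5),   # storage racks -- relatively low density
    (70, 9),    # typical air-cooled IT racks (8-10kW range)
    (10, 30),   # consolidated high-density racks (pumped refrigerant / rear-door HX)
]

total_kw = sum(count * kw for count, kw in rack_mix)
total_racks = sum(count for count, _ in rack_mix)
print(f"Total IT load: {total_kw} kW across {total_racks} racks "
      f"(average {total_kw / total_racks:.1f} kW/rack)")
```

Even with a handful of 30kW racks in the mix, the facility-wide average in this example stays well under 10kW/rack, which is consistent with the largely air-cooled baseline described above.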

Bottom line: Higher densities are coming – and with them alternative cooling technologies – but it will be a gradual evolution. Sudden, drastic change would require fundamental changes to a data center’s form factor, and in most cases that is not going to happen.

Data Center Frontier:  Data center companies are some of the largest consumers of renewable energy. Are these initiatives by large data center operators making clean energy more available and affordable? Will energy storage become part of the solution anytime soon?

Jack Pouchet: Yes! If the forecasts are correct and data centers – inclusive of the entire ecosystem from edge to 5G nodes, regional, core, enterprise, hyperscale and colocation – grow at the projected rate, the net energy demands could increase 3X over the next decade. I’ve heard some industry pundits suggest the increase will be on the order of 5X.
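
For a sense of what those multipliers imply year over year, the short calculation below converts the 3X and 5X decade forecasts into compound annual growth rates; it is simple arithmetic on the figures quoted above, not an independent projection.

```python
def implied_cagr(multiple, years=10):
    """Annual growth rate implied by reaching `multiple` times today's demand in `years`."""
    return multiple ** (1 / years) - 1

for multiple in (3, 5):
    print(f"{multiple}X over 10 years is roughly {implied_cagr(multiple):.1%} per year")
```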

Regardless of the size of the total demand, an ever-increasing proportion of the energy used by data centers is coming from renewables in one form or another. We expect this trend to continue.

Further, we believe that in the not-too-distant future, data centers – perhaps starting with hyperscale/colocation, but equally as likely with edge, especially in remote/emerging markets – will be powered by some form of on- or near-site cogeneration, renewable energy, or a hybrid combination along with energy storage.

The data center industry already is providing the utility grid with potentially 2 percent of its capacity for 10-minute windows. That’s based on (1) the estimate that data centers are responsible for about 2 percent of total U.S. energy consumption and (2) the fact that data centers have UPS systems typically capable of 10 minutes of battery power. That’s a sizable installed base of grid-connected energy storage. In the future, data centers will become energy centers with a financial stake in the stability and availability of the local grid.
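
The arithmetic behind that estimate is straightforward, as the sketch below shows; the national demand figure used here is an assumed, illustrative number and is not from the roundtable.

```python
# Rough arithmetic behind the "2 percent for 10-minute windows" estimate.
# The grid demand figure is an assumed, illustrative number.
us_grid_demand_gw = 450      # assumed average U.S. electric demand, in GW
dc_share = 0.02              # data centers ~2% of U.S. energy consumption
ups_runtime_min = 10         # typical UPS battery autonomy, in minutes

dc_load_gw = us_grid_demand_gw * dc_share
energy_gwh = dc_load_gw * ups_runtime_min / 60

print(f"UPS-backed data center load: ~{dc_load_gw:.0f} GW")
print(f"Potential grid-support energy: ~{energy_gwh:.1f} GWh per {ups_runtime_min}-minute window")
```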

Check out our entire Data Center Frontier Executive Roundtable for 1Q 2018.