Executive Insights: Jack Pouchet of Vertiv 3Q 2018

The Data Center Frontier Executive Roundtable features insights from industry executives with lengthy experience in the data center industry. Here’s a look at the insights from Jack Pouchet of Vertiv.

JACK POUCHET, Vertiv

Jack Pouchet is vice president of Market Development at Vertiv. Over the past 20 years, he has worked closely with major server manufacturers, large data center users, and leading mission-critical engineering firms to help define, architect, and create opportunities for advanced power and cooling technologies that improve day-to-day operational efficiency. Pouchet serves as a guest columnist for Environmental Leader and has published articles in numerous trade journals, including ASHRAE, AFCOM, Mission Critical, Electrical Contractor, EE Times, and Data Center Journal. He is a frequent presenter at IT events and played a key role in developing Energy Logic 2.0, a vendor-neutral approach to reducing data center energy use. He is also a member of The Green Grid.

Here’s the full text of Jack Pouchet’s insights from our Executive Roundtable:

Data Center Frontier: The largest hyperscale computing players are now seeking dozens of megawatts of power in wholesale data center deals. Is hyperscale now its own market segment, with different criteria and processes? Or can data center developers and the supply chain adapt to the full breadth of requirements in today’s market?

Jack Pouchet: We are seeing the beginning of a bifurcation in the colocation market at higher power levels, although the practice of a large anchor tenant, or a single customer taking the entire building, has been common for many years. The change now is that the scale of the power requirement for hyperscale clients often translates into a larger initial build and potentially a dedicated facility for a single client. A dedicated facility can meet not only the large power requirement but also the desire for segregated, secure, access-controlled space.

There is still plenty of room in the market for more “traditional” colocation facilities, and we see plenty of new projects coming to market in the space. These facilities vary in size from a few megawatts up to 10-12 MW, often in a campus environment to accommodate future growth. The vast majority of colocation clients do not buy space in 5 to 10 MW blocks.

Data Center Frontier: Enterprises are deploying more workloads in cloud and hybrid cloud scenarios, but most still maintain legacy on-premises data centers. What challenges does this pose for data center management software (DCIM)? How have the software offerings evolved to meet these challenges?

Jack Pouchet: DCIM platforms have proven to work extremely well across a single facility or campus environment. With careful planning, DCIM can also be effective in monitoring, managing, and controlling company-owned IT assets in remote locations.

However, cloud-based workloads present an interesting challenge, with solutions that depend on IT asset ownership and the colocation billing model for off-premise instances. Pure cloud plays under a pay-per-use, per-instance, or per-workload model are better aligned to cost-accounting platforms from an end-user perspective, whereas the cloud services provider will likely have its own DCIM instance running across its facilities.

Another way to look at this: use DCIM to manage your IT assets wherever they may be located. Understand the underlying cost differences between on-premise and colocation, as variations in cost, complexity, processing capacity, and so on will all come into play in determining where to run the next application.
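As a rough illustration of that cost reasoning, the per-kilowatt comparison might be sketched as follows. The function name, PUE values, rates, and overhead figures are all hypothetical assumptions for the sketch, not numbers from the interview.

```python
# Hypothetical sketch of the on-premise vs. colocation cost comparison.
# All rates, PUE values, and overheads below are illustrative assumptions.

def monthly_cost_per_kw(power_cost_kwh, pue, overhead_kw_month):
    """Blended monthly cost to run 1 kW of IT load."""
    hours_per_month = 730  # average hours in a month
    return power_cost_kwh * pue * hours_per_month + overhead_kw_month

# Assumed inputs: on-premise pays more per kWh at a higher PUE but carries
# lower allocated overhead; colocation runs leaner but charges for space
# and services per kW of capacity.
on_prem = monthly_cost_per_kw(power_cost_kwh=0.10, pue=1.8, overhead_kw_month=120.0)
colo = monthly_cost_per_kw(power_cost_kwh=0.08, pue=1.4, overhead_kw_month=150.0)

print(f"On-premise:  ${on_prem:.2f} per kW-month")
print(f"Colocation:  ${colo:.2f} per kW-month")
```

With these made-up inputs the colocation option comes out cheaper per kW-month, but the point of the exercise is that the answer flips as the assumed rates, PUE, and overheads change per site.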

What does the shape of data center innovation look like in 2018? It might be the ServerDome, says Jack Pouchet. (Photo: ServerDomes)

Data Center Frontier: There’s growing interest in the use of lithium-ion batteries in data center UPS systems. Are lithium-ion batteries likely to find traction in the data center market? If so, what are the key factors in the rate of adoption?

Jack Pouchet: Yes! Li-ion batteries are finding a home within the data center space. We are in the early days of adoption. Many large end users are running trials, validating performance, developing operational practices, and evaluating facility designs as they prepare to deploy Li-ion battery plants across more sites.

Key factors driving Li-ion adoption include: cost parity over the life cycle (not first cost); a benign operating temperature range that eliminates the need for a separate cooling system with unique set points; higher energy density, resulting in a smaller form factor that reduces space allocation and frees up additional space for client hardware (revenue); high cycle-count discharge/recharge characteristics; and significant potential for additional revenue streams.
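The life-cycle versus first-cost distinction can be made concrete with a back-of-envelope comparison. Every price, service life, and maintenance figure below is an assumption invented for this sketch, not a quoted Vertiv number.

```python
import math

# Illustrative life-cycle (not first-cost) comparison of VRLA vs. Li-ion UPS
# batteries over a 15-year horizon. All figures are made-up assumptions.

def lifecycle_cost(first_cost, service_life_yrs, horizon_yrs, annual_maintenance):
    """Total spend over the horizon, counting replacements at end of service life."""
    purchases = math.ceil(horizon_yrs / service_life_yrs)  # initial buy + replacements
    return first_cost * purchases + annual_maintenance * horizon_yrs

# VRLA: cheaper up front, but replaced roughly every 5 years and maintained often.
vrla = lifecycle_cost(first_cost=100_000, service_life_yrs=5, horizon_yrs=15,
                      annual_maintenance=8_000)
# Li-ion: higher first cost, but a single purchase over the same 15 years.
liion = lifecycle_cost(first_cost=175_000, service_life_yrs=15, horizon_yrs=15,
                       annual_maintenance=3_000)

print(f"VRLA 15-year cost:   ${vrla:,}")
print(f"Li-ion 15-year cost: ${liion:,}")
```

Under these assumed numbers the Li-ion plant wins over the horizon despite a first cost nearly double that of VRLA, which is the shape of the "cost parity over the life cycle" argument.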

On revenue streams, there are UPS platforms capable of bi-directional operation. Coupled with Li-ion batteries, this enables the operator to provide grid-firming services such as frequency stability, VAR support, peak shaving, and sustained energy supply for 10 minutes or more. This, of course, depends upon the energy supply agreements and regulations at your location. One case study demonstrates how Noriker Power leveraged the technology to provide consistent power to the grid and store energy generated during low-demand periods.
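Of those services, peak shaving is the easiest to put rough numbers on. The demand charge and shaved load below are illustrative assumptions, not figures from any utility tariff or from the interview.

```python
# Back-of-envelope value of peak shaving via a bi-directional UPS with Li-ion
# storage. The demand charge and shaved load are illustrative assumptions.

def annual_peak_shaving_savings(demand_charge_per_kw_month, shaved_kw):
    """Yearly reduction in demand charges from shaving peak load."""
    return demand_charge_per_kw_month * shaved_kw * 12

savings = annual_peak_shaving_savings(demand_charge_per_kw_month=15.0, shaved_kw=500)
print(f"Assumed annual demand-charge savings: ${savings:,.0f}")
```

Whether that revenue is actually accessible depends, as noted above, on the local energy supply agreements and regulations.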

Data Center Frontier: Microsoft is deploying underwater data centers. Is this brilliant or crazy? Also, are there other examples of “outside the box” data center innovation that bear watching?

Jack Pouchet: Brilliant for sure, albeit less than practical for most businesses. As a big fan of innovation, I think this helps push the design envelope for data centers. I do wonder if the next step is to simply bury the data center in the ground. In most of the world, once you are 6 to 10 feet below the surface, the ground temperature is a stable 50 to 55°F. At least with an underground facility you can easily provide access shafts and tunnels. I'm not talking about a cave here; there are plenty of those. I am suggesting you bury the entire container, leaving just an access shaft for IT and facilities personnel.

I recently toured a dome-based data center on the OHSU campus in Portland. Now that is not something you see every day. It fully embraces the laws of thermodynamics, using them to its advantage for airflow management.

I do believe we will continue to see further innovation around the edge. These applications often require some clever thinking, whether it is applying IT to a fish farm off New Zealand, something as mundane as the POS and inventory systems at a gas station, or outfitting 20 to 40 compute nodes in a high-rise building. Perhaps some of the innovations we are seeing in large data centers will lead to new solutions for the edge.