Executive Insights: Amy Johnson from Vertiv

The Data Center Frontier Executive Roundtable features insights from industry executives with lengthy experience in the data center industry. Here’s a look at the insights from Amy Johnson of Vertiv.

AMY JOHNSON, Vertiv

Amy Johnson is vice president of marketing and strategy for Americas at Vertiv. She was previously vice president of marketing for Europe, Middle East & Africa at Vertiv, and at Emerson Network Power. Amy held various planning, management and executive roles at Emerson Climate Technologies, including vice president of commercial air conditioning marketing. She had engineering roles at Hunter Fan Company and W.L. Gore & Associates. Amy has a BS in Chemical Engineering from Carnegie Mellon University, and an MBA from Vanderbilt University – Owen Graduate School of Management.

Here’s the full text of Amy Johnson’s insights from our Executive Roundtable:

Data Center Frontier: The data center industry has become disciplined in its construction and deployment of capital. In the past year, we’ve seen growing interest from capital providers eager to invest in the sector, while cloud providers are seeking larger and faster builds. What’s your take on how the industry is managing supply, demand and risk?

Amy Johnson: Several years ago, there were relatively well-defined lines between market segments, with each provider driving equipment specifications based on their unique requirements. Today those lines are blurring as providers look for ease of deployment, speed to market, and capacity management, and end users look for new alternatives to meet their increasingly diverse needs. Those are among the many reasons we’ve seen such rapid cloud and colocation growth. Third-party IT providers are the answer to the supply and demand question.

Risk is more interesting. Risk, in terms of both business continuity and data security, remains a real concern and is a primary reason that some companies are maintaining their own data centers. As cloud and colo become more accepted and establish a track record of high-level data security and availability, it stands to reason that companies will outsource more applications. Eventually, some types of companies may transition out of the data center business altogether – in effect, outsourcing their IT risk to expert providers.

As hyperscale cloud and colo facilities grow and become more virtualized, their risk tolerance grows as well. Their virtualized servers act as the primary layer of redundancy, because they can move loads around in the event of partial outages. The opposite is true at the edge, where the increasing number and criticality of edge computing sites put extreme pressure on network managers tasked with keeping everything up and running. Organizations with a large distributed network of edge facilities are ramping up their service programs.

It’s no surprise investors are flocking to the data center market. The volume of data we’re generating and collecting as a society is growing exponentially, and managing that data is a robust, growing industry all its own – as the seemingly overnight rise of cloud and colo providers has shown.

Data Center Frontier: Advanced data centers were early adopters of the Internet of Things, using sensors to detect and manage temperature and humidity. What’s the current state of instrumentation in most data centers, and how is that influencing the software decisions that end users are making to manage their IT environments?

Amy Johnson: The early adoption of sensors and environmental monitoring in the data center delivered a coarse overview of the data center’s thermal profile. It was better than nothing, but still part of an outdated shotgun approach to thermal management. It wasn’t uncommon for data center managers to make cooling decisions based on data from a single sensor in a room with hundreds of racks. This was consistent with the old approach of lining cooling units around the perimeter of the data center and blasting cold air indiscriminately into the room. Today’s thermal management strategies, sensors and systems bear little resemblance to those early deployments.

Now we often have dozens of sensors – environmental and otherwise – in a single rack, delivering detailed data on temperature, humidity, power usage and capacity and more. This sort of information can enable a much more elegant approach to data center management, but only if the data can be analyzed and acted upon quickly and intelligently. Information without context or direction has little value. Even well-intentioned automatic break/fix alerts aren’t sufficient. Our customers expect their data centers not to tell them something is broken, but to tell them what to do to fix it or, better yet, how to address it before it breaks.

That’s where the industry is today. Increased connectivity across the network has shifted sensors from operating as standalone, single-function devices to being connected to all the other sensors and devices in the data center. These devices contain intelligence and integrated technologies that allow them to gather insights and feed them into a broader management tool for analysis. Sensor data aggregated through a gateway allows businesses to capture and view the entire IT ecosystem and make intelligent – even predictive – operational decisions.
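Purely as an illustration of the gateway-aggregation and predictive-alerting idea described above – not Vertiv software – here is a minimal Python sketch. Every name in it (Reading, predictive_alerts, the 27 °C threshold) is hypothetical:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    """One environmental sample collected by a gateway (hypothetical schema)."""
    rack_id: str
    temp_c: float  # inlet air temperature in degrees Celsius

def aggregate_by_rack(readings):
    """Group gateway-collected readings and average the temperature per rack."""
    by_rack = {}
    for r in readings:
        by_rack.setdefault(r.rack_id, []).append(r.temp_c)
    return {rack: mean(temps) for rack, temps in by_rack.items()}

def predictive_alerts(history, threshold_c=27.0, horizon=3):
    """Flag racks whose temperature trend is projected to cross the
    threshold within `horizon` future samples (simple linear extrapolation).
    This is the 'tell me before it breaks' idea, not a break/fix alarm."""
    alerts = []
    for rack, temps in history.items():
        if len(temps) < 2:
            continue  # need at least two samples to estimate a trend
        slope = (temps[-1] - temps[0]) / (len(temps) - 1)
        projected = temps[-1] + slope * horizon
        if projected >= threshold_c:
            alerts.append((rack, round(projected, 1)))
    return alerts
```

A real system would replace the linear extrapolation with proper time-series models and feed the alerts into a DCIM tool, but the flow – aggregate per asset, project forward, recommend action early – is the same.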

One final note: An important piece of this monitoring and management puzzle is usability. Customers demand easy-to-use tools for managing their data centers, and it’s up to us to develop tools that quickly collect and analyze data and deliver easy-to-use alerts and recommendations that simplify complex, difficult decisions.

Data Center Frontier: Everyone is talking about the edge and the potential for a more distributed infrastructure, but there seem to be many definitions of its location. What do you see as the most promising opportunities in edge computing, in terms of location, form factor and timing?

Amy Johnson: We see the edge as anywhere and everywhere we find one or more IoT devices and/or sensors requiring what the military community used to call C3I – command, control, communications, and intelligence. In IT-speak this means the devices require not only a gateway, but some level of computational resources, storage, and the ability to manipulate data and provide services locally without the need to send 100 percent of the data upstream to a data center for processing.
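To make the "don't send 100 percent of the data upstream" point concrete, here is a hypothetical Python sketch of local edge processing – summarize raw samples on site and forward only a compact summary plus anomalous points. The function name and threshold are illustrative assumptions, not part of any described product:

```python
from statistics import mean

def process_at_edge(samples, anomaly_threshold=3.0):
    """Handle raw IoT samples locally: compute a compact summary and
    keep only points far from the local mean as 'anomalies' to forward
    upstream, instead of shipping every raw sample to a data center."""
    avg = mean(samples)
    anomalies = [s for s in samples if abs(s - avg) > anomaly_threshold]
    return {
        "count": len(samples),       # how many samples were processed locally
        "mean": round(avg, 2),       # compact summary sent upstream
        "anomalies": anomalies,      # the only raw points sent upstream
    }
```

The bandwidth savings come from the ratio: thousands of raw samples collapse into one small summary record, with only the outliers preserved for central analysis.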

We see numerous applications emerging that will likely consolidate into 8 to 10 key archetypes over time. Let’s focus on two applications today. The first is fully integrated systems that incorporate local client and regulatory knowledge and skilled resources in a customized solution. We have had success with our systems integrators managing local and regional projects like this that scale to the low hundreds of installations. These range in size from 1- to 4-rack passive (no cooling) systems to row-based systems with 5 to 8 racks employing active cooling, fire suppression, local DCIM, remote monitoring, and UPS and power distribution, all the way up to aisle-based deployments with 14 to 20 racks. Examples include micro-data center infrastructure for IT equipment or communications closets for first responders, branch offices, small businesses, schools and universities, and industrial plants.

The second involves higher-volume applications for large-scale deployments within factory, industrial, process or government segments (think smart cities). In these cases, a single-SKU configuration developed as a customized, application-specific solution provides a better overall fit, improved pricing and, most importantly, the quality and repeatability available only from factory production. Examples include single-rack integrated point-of-presence systems for cities, hospitals, and large retail centers where a common design is replicated across numerous similar facilities across a wide region or, as in the case of large retail companies, globally.

When it comes to today’s edge of the network, one thing is consistent across location, application or architecture – these deployments are becoming more and more mission-critical and must be supported accordingly.

Data Center Frontier: It appears that a dwindling number of users are building new corporate data centers, with most new workloads shifting to cloud or colo environments. What’s your take on this transition, and what will it mean for legacy corporate data centers? Do these facilities have a future as candidates for retrofit or repurposing?

Amy Johnson: With so much focus on colo and cloud, it’s easy to forget that the majority of data centers are enterprise. Corporate data centers are not going away. Recent IHS data shows about 40 percent of loads are colo and cloud, while 60 percent are enterprise. That mix certainly will tilt more toward cloud and colo going forward, but there are some tasks that either cannot or will not be moved to third-party providers.

What we will see – and it’s already starting – is a refocusing of those enterprise data centers to take advantage of the best practices of the colo and cloud generation to improve their efficiency, scalability and utilization rates. We anticipate widespread consolidation projects that leverage those best practices to produce smaller, more efficient and more secure facilities with high levels of availability. Remember, these new or refurbished enterprise data centers will be housing mission-critical data that can’t be outsourced, so security and availability will be critical for enterprise, cloud and colocation facilities alike.

See the entire Data Center Frontier Executive Roundtable for insights from additional executives and topical stories on the latest data center trends.