Executive Insights: Dennis VanLith of Chatsworth Products


The Data Center Frontier Executive Roundtable features insights from industry executives with lengthy experience in the data center industry. Here’s a look at the insights from Dennis VanLith of Chatsworth Products.

DENNIS VANLITH, Chatsworth Products

Dennis VanLith is Sr. Director of Global Product Management at Chatsworth Products (CPI). VanLith was one of the founders of CPI, having held several positions in the company during the past 32 years, including Regional Sales Manager, Product Manager and Director of Technical Services. He has been an active member of BICSI for 25 years, maintaining the distinction of Registered Communications Distribution Designer (RCDD) for the past 21 years. VanLith grew up in rural America and attended the University of South Dakota. He is married with two grown children and three grandchildren. His hobbies include outdoor activities, especially hiking and fishing.

Data Center Frontier:  The digital transformation of our society is creating enormous volumes of data, even ahead of the mainstream adoption of next-generation technologies like 5G wireless, the Internet of Things, augmented reality and autonomous vehicles. As this deluge of data looms on the horizon, what are the implications for the data center industry, and for how and where all this data is stored and analyzed?

Dennis VanLith: As digital transformation unfolds, the impact on the data center will be significant. According to the latest storage projections, worldwide stored data is expected to grow eight-fold from 2018 to 2025 (from 20 zettabytes to over 160 zettabytes), and the edge (data centers located close to the user to minimize latency and improve quality of service) and the core data center together will hold over 65 percent of all data, with the remainder on personal endpoint devices.
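As a quick sanity check on the projection above, an eight-fold increase over the seven years from 2018 to 2025 corresponds to roughly 35 percent compound annual growth. A minimal Python sketch of the arithmetic:

```python
# Implied compound annual growth rate (CAGR) for the projection cited above:
# worldwide stored data growing eight-fold, from 20 ZB in 2018 to 160 ZB in 2025.
start_zb, end_zb = 20.0, 160.0
years = 2025 - 2018  # 7 years

cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 34.6% per year
```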

This implies a continued need for greater storage capacity and throughput at the data center as well as the edge. Further, as we see the rollout of 5G across the industry, and with increased use of augmented reality, artificial intelligence and deep neural networks, we expect an even greater emphasis on data being stored close to the endpoint. This will foster new usage models, such as connected vehicles that allow for real-time optimization of traffic flow or live updates of road hazard conditions. It will also transform the behavior of the workforce, allowing people to work where they want and how they want, free from wires or the limitations of their endpoint devices.

Data Center Frontier:  We’ve recently featured headlines about the adoption of artificial intelligence (AI) as a tool in data center management. How do you view the potential for AI to help optimize and automate data centers, and what are the pros and cons of this technology?

Dennis VanLith: Eventually, AI will make its way into all aspects of the data center. The goal state would be data centers that flex to optimize all aspects of operations, with AI dynamically managing cooling, storage and networking to their optimum state. Computers are extremely good at crunching data and discovering relationships.

In essence, deep learning can take lots of data and use it to determine what combination of relationships are optimal for a repeatable output. This is where we expect to see initial AI integration into the data center. Customers will take data from the data center – cooling, power, performance – and use deep learning to determine what combination of facility settings are optimized for their workloads.
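As a rough illustration of the pattern described above (learn the relationship between facility settings and efficiency, then choose the optimal setting), here is a minimal sketch using entirely synthetic data. The supply-air setpoints, the PUE curve and the simple quadratic model are all illustrative assumptions, not anything from CPI or a production DCIM system:

```python
import numpy as np

# Minimal sketch of "learn the relationship, then optimize": fit a model to
# historical facility data (here, synthetic supply-air setpoints vs. measured
# PUE) and find the setting that minimizes energy overhead. A real deployment
# would use actual telemetry and a richer model; this only shows the pattern.
rng = np.random.default_rng(0)
setpoints_c = rng.uniform(18, 27, 500)  # supply-air temperatures, deg C
# Synthetic PUE curve: too cold wastes chiller energy, too warm ramps up fans.
pue = 1.5 + 0.002 * (setpoints_c - 24) ** 2 + rng.normal(0, 0.005, 500)

coeffs = np.polyfit(setpoints_c, pue, 2)      # fit a quadratic model
best_setpoint = -coeffs[1] / (2 * coeffs[0])  # vertex of the fitted parabola
print(f"Estimated optimal setpoint: {best_setpoint:.1f} C")
```

The same idea scales up: swap the quadratic for a deep network and the single setpoint for the full vector of cooling, power and workload parameters.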

Over time, we expect to see this level of automation (or AI) embedded in the data center in real time. As with all aspects of the data center, security and redundancy will be critical, so we expect customers to dip their toes in the water with small, incremental steps rather than jumping all-in. Eventually, the end state will be a DCIM system that continually adjusts to optimize for the workload and the current environmental conditions.

The ability to remotely monitor and manage power data from the rack and device level has generated steady and increasing demand for intelligent power distribution units.
(Photo: Chatsworth Products)

Data Center Frontier:  For some time we have seen predictions that rack power density would begin to increase, prompting wider adoption of liquid cooling and other advanced cooling techniques. What’s the state of rack density in 2018, and is density trending higher at all?

Dennis VanLith: Rack density is growing slightly, but on average not to the levels served by direct liquid cooling. Rack densities average 8 kW to 16 kW, and there is a practical limitation in powering equipment beyond 16 kW per rack if you are using standard 208/230 VAC power and redundant power feeds. It is possible to cool a 30 kW rack with air in a system that uses containment with a good seal and low leakage, so traditional perimeter cooling and other air-based systems are still very good options. Direct liquid cooling, which is focused on 30 kW to 50 kW racks, is a good solution for high-density compute applications and some containerized solutions.
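To see why roughly 16 kW per rack is a practical ceiling on standard 208 VAC with redundant feeds, consider the arithmetic for one assumed configuration. The 60 A three-phase feed and the common 80 percent continuous-load derating below are my assumptions for illustration, not figures from the interview:

```python
import math

# Illustrative arithmetic (assumed circuit sizes, not from the interview):
# with redundant A/B feeds, a rack can only count on one feed's capacity,
# and continuous loads are commonly derated to 80% of the breaker rating.
volts = 208      # line-to-line voltage, three-phase
amps = 60        # assumed breaker rating per feed
derating = 0.8   # continuous-load derating

usable_kw = math.sqrt(3) * volts * amps * derating / 1000
print(f"Usable power per rack: {usable_kw:.1f} kW")  # ~17.3 kW
```

Smaller, more typical 30 A circuits land well under that, which is consistent with the 8 kW to 16 kW averages cited above.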

It is also important to recognize that compute power is increasing. As chip manufacturers add cores to processors (CPUs), we continue to get the benefit of Moore’s law driving continued increases in compute per watt. Further, the size of the CPU package continues to increase, so the heat flux due to the CPU is decreasing. Basically, servers are more power efficient and support higher utilization.

So rack densities will probably not climb significantly, but the amount of compute power (utilization) per rack will. The only wild card here is AI. Typically, the PCI cards required to drive AI run at 100 percent power while models are being trained. When AI takes off, expect extremely large, sustained loads on data center systems that will raise the average workload. However, I still think direct liquid cooling will remain a niche, used only for very specific use cases. With power densities averaging below 16 kW, there is no need for liquid cooling.

Data Center Frontier:  Data center companies are some of the largest consumers of renewable energy. Are these initiatives by large data center operators making clean energy more available and affordable? Will energy storage become part of the solution anytime soon?

Dennis VanLith: Renewable energy usage by the data center is driven mostly through purchase agreements with local power companies and direct partnerships. Typically, the data center operator builds renewable capacity and feeds excess power to the grid, providing the power utility's customers with clean, efficient energy.

From solar arrays to wind turbines, micro turbines and biogas fuel cells, the data center industry is leading the charge into the renewable future. The challenge with most renewables is timing. When the sun isn’t shining or the wind isn’t blowing, the data center still needs power. For these cases, energy storage allows for management of power through the generation-down cycle.

Many existing methods for storing energy – flywheel, battery, pumped hydro – allow for on-demand storing and conversion of energy. Data centers should invest in renewable energy storage as it drives development and adoption of a new type of backup power system that can also supplement the capacity of the power utility.

Check out our entire Data Center Frontier Executive Roundtable for 1Q 2018.