The Data Center Frontier Executive Roundtable features insights from industry executives with lengthy experience in the data center industry. Here’s a look at the insights from David Knapp of Chatsworth Products.
David Knapp is a Product Marketing Manager at Chatsworth Products (CPI), a leading manufacturer of voice, data and security products and technology service solutions. He has more than 18 years of experience in the telecommunications industry with CPI as a product application expert and technical communicator, serving in Technical Support, Technical Writer and Product Marketing Manager roles. He currently focuses on data center, enterprise networking and power management solutions.
Here’s the full text of David Knapp’s insights from our Executive Roundtable:
Data Center Frontier: Discussion of industry trends is dominated by the rise of the cloud computing model. How has this cloud-driven disruption impacted the broader data center industry? What are the pros and cons?
David Knapp: The main impact is that there are fewer organizations building and operating data centers. Cloud providers build data centers, and they lease wholesale space from multi-tenant data center operators. Unlike service providers, they can site their facilities to maximize the benefits of local weather conditions for “free cooling” and to access the lowest-cost electricity and real estate. Additionally, they have demonstrated that a large-scale campus approach works, and some have built on-site clean power generation, demonstrating that a commercial data center can provide a portion of the power it consumes.
The upside: Cloud providers have developed highly efficient data center, white space and compute designs, rethinking both the facility and the equipment. They challenge internal teams to optimize and have created forums to share, test and document best practices. Everything cloud must scale quickly. I think the pro is that they are challenging the industry to find solutions to match this new paradigm. As a result, everybody is innovating.
The downside: Unforeseen and unintentional consequences from putting all the eggs in one basket. Of course, this is not literally the case, but software, compute and network are inextricable. Cloud depends on reliable networks to reach other IT services and network users, and a single outage can affect many individuals and organizations. With a few exceptions, so far, so good. However, there are still 1.5 billion or so people who would like to be connected to the Internet, and more IoT devices are anticipated than anyone can actually count. Will this model continue to be the most efficient way to reach and support all of these projected users and devices?
Similarly, we are collecting and creating data at an astonishing rate, but it may still be unclear where the data is—who decides what should get stored, how it will be stored, how it will be used and who actually owns it.
Data Center Frontier: There has been a lot of merger and acquisition (M&A) activity recently in the data center industry. How are these M&A deals influencing the development of the data center industry? Are we likely to see more M&A activity?
David Knapp: Organizations are making adjustments to focus on their core competencies and shedding technologies that are less profitable than they may have been in the past. On the facilities and hardware side, best practices have reduced the amount of power and cooling equipment needed per unit of floor space. Compute can endure much tougher environmental conditions. Volume servers keep getting more powerful. Virtualization and software-defined networks get much more out of each device. On the software side, startups are developing many new, useful applications that can integrate into and improve the feature set of “IT as a service.”
Because of this service boom, enterprises are not building as many data centers, but rather looking to colocation or the cloud. As a result, manufacturers are improving and consolidating hardware solutions through acquisitions or divestitures; multi-tenant operators are expanding through acquisition to accommodate enterprise demand for “IT as a service”; and cloud providers are improving services by acquiring and integrating software startups.
Yes, M&A activity will continue. Although recent reporting may indicate that M&A slowed in the first quarter of 2017, deal volume is still valued in the tens of billions of dollars monthly, and acquisitions are paying well. It is a good time to be developing a service, providing a service or offering data center space.
Data Center Frontier: The market for tools to monitor, manage and automate data centers continues to evolve. What are the significant trends in this ecosystem and how effectively are customers using these tools?
David Knapp: Site monitoring has three basic elements: the sensor, network and software. Deploying more sensors closer to equipment is the first trend. This gives data center managers a more granular view of the site. For example, power monitoring at each device outlet and environmental monitoring at every rack. When the software models this additional data, it is possible to see specific rack- and device-level detail on power consumption and environmental conditions. This helps data center managers make better decisions about how to use site capacity.
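To make the rack-level view concrete, here is a minimal Python sketch that rolls hypothetical outlet-level power readings up to rack totals and checks them against an assumed per-rack budget. The readings, rack names and the 5 kW limit are all illustrative, not taken from any real PDU or CPI product.

```python
# Minimal sketch: aggregate hypothetical outlet-level power readings to
# rack level and flag racks approaching an assumed capacity budget.
from collections import defaultdict

# (rack, outlet, watts) samples, as a monitoring system might report them
readings = [
    ("rack-01", 1, 310.0), ("rack-01", 2, 287.5), ("rack-01", 3, 542.0),
    ("rack-02", 1, 1180.0), ("rack-02", 2, 960.0),
]

RACK_BUDGET_W = 5000.0  # assumed per-rack power budget (illustrative)

rack_totals = defaultdict(float)
for rack, _outlet, watts in readings:
    rack_totals[rack] += watts

for rack, total in sorted(rack_totals.items()):
    headroom = RACK_BUDGET_W - total
    status = "OK" if headroom > 0.2 * RACK_BUDGET_W else "NEAR CAPACITY"
    print(f"{rack}: {total:,.0f} W used, {headroom:,.0f} W headroom [{status}]")
```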
For the network, there are three trends:
- Consolidation of power, environmental and security monitoring into a single hardware solution;
- The use of IP consolidation to reduce the number of network connections required to support monitoring;
- Improved and less expensive wired and wireless sensors.
Traditionally, power, environmental and security were three separate sub-systems. Addressing consolidation at the hardware level reduces the amount of network infrastructure dedicated to monitoring, and may simplify software setup. The idea of a smart cabinet with a single network connection for all monitoring is now a reality.
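As an illustration of the single-connection idea, the sketch below polls one smart-cabinet gateway address for consolidated power, environmental and security status. The gateway address, endpoint path and JSON field names are hypothetical, invented for the example; a real cabinet controller would document its own HTTP API or expose SNMP instead.

```python
# Minimal sketch: one network connection, one poll, three subsystems.
# The endpoint path and JSON field names below are hypothetical.
import json
from urllib.request import urlopen

GATEWAY = "http://cabinet-gw.example.net"  # hypothetical gateway address

with urlopen(f"{GATEWAY}/api/v1/status", timeout=5) as resp:
    status = json.load(resp)

print("Power (W):      ", status["power"]["total_watts"])
print("Inlet temp (C): ", status["environment"]["inlet_temp_c"])
print("Front door:     ", status["security"]["front_door"])  # e.g. "locked"
```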
For software, there are many trends, including:
- Integration of access control with power and environmental monitoring;
- More detailed asset management tools, including databases of equipment specifications that speed software setup;
- Better capacity monitoring and predictive modeling to explore the impact of equipment changes;
- A modular approach to software functionality;
- A focus on making systems easier to integrate.
In the future, software-based monitoring directly from the devices, such as the approach proposed in the Distributed Management Task Force’s (DMTF) Redfish API, may replace external sensors, provided device manufacturers develop a uniform method of metering. Also, software integration may produce a system that overlays facility data, such as that from a DCIM platform, with compute and network utilization data to help identify the most efficient combinations of software and compute.
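For a sense of what device-level monitoring over Redfish looks like, here is a minimal sketch that walks a Redfish service’s chassis collection and reads power draw and temperatures. The host and credentials are placeholders; the Chassis, Power and Thermal resources and the PowerConsumedWatts and ReadingCelsius properties come from the published Redfish schemas (newer schema revisions supersede Power and Thermal, but they remain widely implemented).

```python
# Minimal sketch: read power and thermal telemetry from a Redfish service.
# Host and credentials are placeholders; resource paths follow the DMTF
# Redfish schemas.
import requests

HOST = "https://bmc.example.net"  # placeholder BMC address
AUTH = ("admin", "password")      # placeholder credentials

def get(path):
    # verify=False is acceptable only in a lab sketch; validate TLS in practice
    r = requests.get(f"{HOST}{path}", auth=AUTH, verify=False, timeout=10)
    r.raise_for_status()
    return r.json()

# Walk every chassis the service exposes and report power and temperatures.
for member in get("/redfish/v1/Chassis")["Members"]:
    chassis_path = member["@odata.id"]
    watts = get(f"{chassis_path}/Power")["PowerControl"][0].get("PowerConsumedWatts")
    temps = [t.get("ReadingCelsius")
             for t in get(f"{chassis_path}/Thermal")["Temperatures"]]
    print(f"{chassis_path}: {watts} W, temperatures {temps} C")
```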
Data Center Frontier: The largest cloud platforms are seeking to deploy data center capacity at an accelerated rate. What has this meant to the supply chain for data center delivery?
David Knapp: Traditionally, the data center was fitted with racks, the network was cabled and then computers were installed. All three were separately sourced and separately installed.
For cloud, the basic design concept is standard, modular building blocks, focused at the rack or row level instead of on individual components. Cloud operators use integrators to deliver a complete rack full of equipment, preconfigured, cabled and ready to power. To do so, most operators have standardized on a few compute configurations and specific rack configurations. Facebook even shared its designs through the Open Compute Project (OCP), which now also operates a separate marketplace.
For facilities, this means overhead cable tray and ductwork for airflow management, which are still sourced traditionally as construction materials, are installed ahead of the racks. This speeds deployment because the facility is fitted out in parallel with the compute being configured, instead of the traditional serial approach.
See the entire Data Center Frontier Executive Roundtable for insights from additional executives and topical stories on the latest data center trends.