Executive Insights: Joel Stone of RagingWire Data Centers

The Data Center Frontier Executive Roundtable features insights from industry executives with lengthy experience in the data center industry. Here’s a look at the insights from Joel Stone of RagingWire Data Centers.

JOEL STONE, RagingWire

Joel Stone is Senior Vice President and Chief Operating Officer at RagingWire Data Centers, responsible for data center engineering, design, construction, and operations, as well as corporate information technology. Prior to joining RagingWire, Joel was Vice President of Global Data Center Operations for CenturyLink, responsible for 60 data centers around the world and a global team of 600 people. Before CenturyLink, Joel was Group Operations Director at Global Switch in London, one of the largest wholesale data center providers in Europe, and spent nine years at Microsoft, responsible for 16 data centers in North America and a team of nearly 500 internal and external resources. Mr. Stone earned a Bachelor of Science degree in General Business Administration, with a minor in Computer Science, from Tennessee Temple University.

Here’s the full text of Joel Stone’s insights from our Executive Roundtable:

Data Center Frontier: The long-predicted migration of enterprise IT workloads into third-party data centers appears to be gaining momentum. A key argument for keeping data and applications on-premises has been security. With the ongoing series of corporate data compromises, can service provider facilities – whether cloud or colocation – now make the case that they are more secure than the on-premises data center?

Joel Stone: There are two kinds of data center security: cybersecurity, which protects the computer systems, applications, and data; and physical security, which prevents unauthorized access to or malicious actions in the data center facility. From a cyber perspective, enterprises can implement the same procedures and technologies in a colo facility that they can in an on-premises data center, so nothing is lost in the move. Physical security, on the other hand, typically improves substantially when an enterprise moves to colo, because the colo provider can leverage its scale and expertise to implement multi-layered, advanced security systems and officers as well as robust operational security processes.

At RagingWire, we typically have more points of security between the parking lot and the server floor than are required to access a TOP SECRET Sensitive Compartmented Information Facility (SCIF) in most U.S. government buildings. We hire in-house, full-time, well-trained security guards, often with backgrounds in the military or law enforcement, and outfit them with sophisticated monitoring systems, so every point in our facilities is watched diligently 24/7/365.

Our high-tech security measures start at the perimeter of the site: anti-ram barrier arms (rated to stop a 15,000-pound truck traveling at 30 mph), anti-climb fences, badge access doors with code requirements, biometric readers, anti-tailgate access control turnstiles, high-definition cameras with analytics that flag suspicious behavior, and other devices we are constantly evaluating for their effectiveness in keeping our customers, employees, and equipment safe from harm.
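
For a rough sense of what such a barrier must withstand, the 15,000-pound-truck-at-30-mph figure matches the familiar M30 (legacy K4) crash-test condition, and a quick back-of-the-envelope calculation – ours, for illustration, not a RagingWire specification – shows the impact energy involved:

```python
# Back-of-the-envelope impact energy for an anti-ram barrier's
# design-basis threat: a 15,000 lb truck at 30 mph (roughly the
# ASTM F2656 M30 / legacy K4 crash-test condition).

LB_TO_KG = 0.453592   # pounds to kilograms
MPH_TO_MS = 0.44704   # miles per hour to meters per second

mass_kg = 15_000 * LB_TO_KG   # ~6,800 kg
speed_ms = 30 * MPH_TO_MS     # ~13.4 m/s

kinetic_energy_j = 0.5 * mass_kg * speed_ms ** 2
print(f"Impact energy: {kinetic_energy_j / 1000:.0f} kJ")  # ~612 kJ
```

That is roughly the energy of a small car hitting a wall at highway speed, absorbed in a fraction of a second by the barrier arm and its foundation.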

Data Center Frontier: Cloud platforms are seeking to expand at an accelerated rate. What has this meant to the data center supply chain and construction practices? What are the areas where data center developers can innovate to further improve the speed of deployment (and help address the capacity planning challenge)? Or have we reached our “speed limit” for data center deployment?

Joel Stone: We definitely have not reached the speed limit for data center deployment. While there is a lot of data center construction underway, only a handful of large-scale providers have a well-developed supply chain of equipment and materials that can truly reduce time-to-market and increase return on invested capital. In our experience, the best practices for data center construction and supply chain management are:

Creating a Vigorous Company Culture: Currently, many data center supply chains focus on expensive, long-lead infrastructure equipment, paired with a comprehensive logistics management approach that ensures supply while minimizing upfront investment. The process shouldn’t stop there, however. A vigorous company culture that relentlessly asks what is constraining time-to-market – and then eliminates those constraints – is where the ultimate optimization occurs. This approach applies across all aspects of development: demand forecasting, land acquisition, permitting, services, structure, equipment, building materials, contracting and compensation strategies, and so on. It is a never-ending battle, and time is the enemy.

Contracting & Collaboration: The industry is already seeing supply constraints and, in some cases, “allocations” from product and service suppliers. Creative, long-term contractual relationships are key to reducing “commercial churn” and focusing the combined team’s energies on improving delivery efficiency as well as securing supply priorities and guarantees.

Developing and Managing Multiple Critical Paths: A delivery approach with many parallel activities optimizes delivery. For example, we use a modular integrator to assemble electrical and mechanical infrastructure in its quality-controlled assembly environment, in parallel with (or even before) the building’s structure being raised. In conventional delivery, mechanical and electrical trades cannot begin rough-in until a structure is in place. Eliminating this dependency and moving work off-site into parallel activities shortens time-to-market. There are many other opportunities to do this, but success is not possible without the healthy culture and contracting approach discussed above.
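
To make the scheduling point concrete, here is a toy critical-path calculation – with illustrative durations we made up, not an actual RagingWire schedule – showing how assembling mechanical/electrical (M/E) modules off-site, in parallel with the structure, shortens the overall timeline:

```python
# Toy critical-path comparison: conventional delivery (M/E rough-in
# waits on the structure) vs. modular delivery (M/E assembled off-site
# in parallel). Durations in weeks are illustrative assumptions only.
from functools import lru_cache

def project_duration(tasks):
    """tasks: {name: (duration_weeks, [dependencies])} -> total weeks."""
    @lru_cache(maxsize=None)
    def finish(name):
        duration, deps = tasks[name]
        return duration + max((finish(d) for d in deps), default=0)
    return max(finish(t) for t in tasks)

# Conventional: mechanical/electrical rough-in cannot start until
# the structure is in place.
conventional = {
    "site_prep":  (6,  []),
    "structure":  (20, ["site_prep"]),
    "me_roughin": (16, ["structure"]),
    "commission": (6,  ["me_roughin"]),
}

# Modular: M/E modules are built off-site starting on day one, then
# installed once the structure is up.
modular = {
    "site_prep":  (6,  []),
    "structure":  (20, ["site_prep"]),
    "me_modules": (16, []),            # off-site, parallel activity
    "install":    (4,  ["structure", "me_modules"]),
    "commission": (6,  ["install"]),
}

print(project_duration(conventional))  # 48 weeks
print(project_duration(modular))       # 36 weeks
```

The same work gets done in either schedule; removing the structural dependency simply lets it happen somewhere else, earlier.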

The RagingWire Data Centers VA3 facility in Ashburn, Virginia. (Photo: Rich Miller)

Data Center Frontier: The EU’s General Data Protection Regulation (GDPR) has finally arrived. What have been the most significant issues in the transition for data center providers and their customers? Are we likely to see similar privacy regulation in the U.S., and if so, what are the implications for the data center industry?

Joel Stone: The European Union’s General Data Protection Regulation (GDPR) went into effect on May 25, 2018, with 99 articles that must be addressed, the bulk of which concern the privacy, protection, and handling of data about EU citizens.

In the U.S., most data protection issues are familiar to anyone who has worked under PCI DSS, ISO 27001, or the NIST frameworks, so these are fairly readily addressed. That said, the GDPR’s 72-hour breach reporting requirement has caused both legal and IT departments to up their game: they are now on the hook to analyze whether exposed or affected data could pose a “risk to the rights and freedoms” of EU data subjects. Of course, these rules apply across all industries and aren’t specific to data center providers.

Methods of consent and data collection must also change. Companies can no longer bury language about how they will leverage user data in a long, separate “End User License Agreement” or “Terms and Conditions” document. Consent must be “freely given, specific, informed, and unambiguous,” which means companies will have to rethink how they collect and use Personally Identifiable Information.

I’d say it’s highly likely that we’ll see similar privacy regulations in the U.S. After so many breaches, U.S. citizens are worried about data collection and analytics methods conducted not just by businesses, but also by the government.

Unlike the EU, the U.S. doesn’t have a single comprehensive federal law regulating the collection and use of personal data. However, lawmakers and standards bodies are continually pressing for revisions to data handling standards such as PCI DSS, ISO 27001, and the NIST frameworks, and to laws and regulations such as the Federal Trade Commission Act (FTC Act), the Children’s Online Privacy Protection Act (COPPA), and HIPAA.

Simply put, yes, we’ll likely see an increase in similar privacy regulations in the U.S. We can only guess at the impact, but we expect it to be similar to that of the new EU regulations now in place.

Data Center Frontier: All our “Things” are getting smarter and more powerful, bringing the computing power of the cloud into devices in our pockets. As more workloads begin to move to the edge of the network, how will it change how – and where – we deploy infrastructure?

Joel Stone: The methods, the locations, and the technologies we use to deploy infrastructure are changing.

For example, the advent of 5G – which focuses on enhanced mobile broadband, ultra-reliable low-latency communications, and massive machine-type communications for technologies such as autonomous vehicles – is pushing us to change our paradigm for edge, fog, and cloud computing. Everyone is affected, from data center providers building out dense network and interconnection systems, down to the basic software architectures that will allow for more resilient consumer applications on devices like your car, TV, or home thermostat.

More specifically, edge computing in its rawest form exists in the palm of your hand. True edge computing pushes the computational load down to ubiquitous smartphones and tablets, because the edge is where the internet meets the physical world.

This same model holds true for the Internet of Things. Our “Things” are the edge – they are the devices that touch and interact with the physical world. In the data center industry, this may be as simple as a device that measures temperature and humidity changes, then meshes with other devices to decide how to normalize or stabilize the environment. Semi-autonomous networks and systems have been doing this programmatically or procedurally for quite some time. Machine learning makes them more efficient but may require more compute horsepower than our individual “Things” currently possess.
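
A minimal sketch of that meshing pattern might look like the following – the device names, setpoints, and readings are invented for illustration, not drawn from any real RagingWire system:

```python
# Sketch of edge devices meshing to stabilize a data hall: each node
# shares its reading, and the group acts on the median of the mesh
# rather than any single sensor, so one faulty device can't trigger
# a correction by itself. All values here are invented.
from dataclasses import dataclass
from statistics import median

@dataclass
class SensorReading:
    node_id: str
    temp_c: float
    humidity_pct: float

TEMP_SETPOINT_C = 24.0        # illustrative targets
HUMIDITY_SETPOINT_PCT = 45.0

def decide_actions(readings):
    temp = median(r.temp_c for r in readings)
    humidity = median(r.humidity_pct for r in readings)
    actions = []
    if temp > TEMP_SETPOINT_C + 1.0:
        actions.append("increase cooling")
    elif temp < TEMP_SETPOINT_C - 1.0:
        actions.append("decrease cooling")
    if humidity < HUMIDITY_SETPOINT_PCT - 5.0:
        actions.append("humidify")
    elif humidity > HUMIDITY_SETPOINT_PCT + 5.0:
        actions.append("dehumidify")
    return actions or ["hold"]

mesh = [
    SensorReading("rack-01", 25.8, 38.0),
    SensorReading("rack-02", 25.5, 39.0),
    SensorReading("rack-03", 31.0, 42.0),  # outlier; the median damps it
]
print(decide_actions(mesh))  # ['increase cooling', 'humidify']
```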

In the data center industry, we’re looking at a fundamental change in the way people design and implement their applications, networks, and compute environments. We see a higher degree of focus on resilient, self-healing, interconnected, low-latency communication out to these edge devices, and we’re seeing growth in the amount of compute that exists outside our four walls.

At the same time, we’re seeing a shift in analytical data processing, with “big data” being converted to “small data”… that is, data collected from a large mesh of these edge systems, analyzed, and made useful to an individual.

And there’s your paradigm shift: collecting “big data” from a huge number of endpoint devices and making it hyper-useful to an individual person, or to a single node on the system. Consider the GPS system in your car deciding to re-route you based on traffic data from thousands of other cars and handsets – in real time. Now imagine you are a first responder: how critical is the infrastructure that allows you to receive those updates?
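
Stripped to its essence, that big-data-to-small-data aggregation looks something like this sketch – the road segments, speeds, and routes are all invented for illustration:

```python
# "Big data" to "small data": aggregate speed reports from many
# vehicles per road segment, then answer one driver's question --
# which route is faster right now? All data here is invented.
from collections import defaultdict
from statistics import mean

# (segment, observed_speed_mph) reports streaming in from thousands
# of cars and handsets; a tiny sample is shown.
reports = [
    ("I-95N:exit7-8", 12), ("I-95N:exit7-8", 9), ("I-95N:exit7-8", 15),
    ("I-95N:exit8-9", 55),
    ("US-1:mile3-4", 38), ("US-1:mile3-4", 41),
    ("US-1:mile4-5", 35),
]

speeds = defaultdict(list)
for segment, mph in reports:
    speeds[segment].append(mph)
segment_speed = {seg: mean(v) for seg, v in speeds.items()}

SEGMENT_MILES = 1.0  # assume one-mile segments for simplicity

def eta_minutes(route):
    return sum(SEGMENT_MILES / segment_speed[s] * 60 for s in route)

highway = ["I-95N:exit7-8", "I-95N:exit8-9"]  # congested right now
surface = ["US-1:mile3-4", "US-1:mile4-5"]

best = min((highway, surface), key=eta_minutes)
print(f"Recommended route: {best} ({eta_minutes(best):.1f} min)")
# -> the surface route, ~3.2 min vs ~6.1 min on the highway
```

The “big data” is the flood of per-segment reports; the “small data” is the one answer handed back to a single driver – or a first responder.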

The shift in infrastructure is partially driven by the required resiliency of these systems. Some of that infrastructure will necessarily be installed logically, and in some cases physically, closer to the edge devices and with a higher degree of interconnectedness.