Executive Insights: Samir Shah of BASELAYER

The Data Center Frontier Executive Roundtable features insights from industry executives with lengthy experience in the data center industry. Here’s a look at the insights from Samir Shah of BASELAYER.

Samir Shah, Chief of Staff, BASELAYER

Samir Shah serves as Chief of Staff at BASELAYER. In this role, he reports directly to the CEO and is responsible for enterprise sales, corporate brand/marketing, product management, and strategic corporate initiatives. Prior to this role, Samir was VP of Product Management, Marketing and Customer Service, responsible for creating BASELAYER’s Intelligent Control platform, including platform architecture, roadmap development, marketing, and customer delivery. Prior to BASELAYER, Mr. Shah was Group Leader, Product Management at IO. During his time at IO, Samir’s key accomplishments included launching IO’s modular data center platform technology and the global deployment for one of IO’s largest enterprise clients, Goldman Sachs. Before joining IO, Samir spent 10 years in advanced system integration and R&D for IBM, JDS Uniphase, and Hitachi Global Storage Technologies. His experience also includes working for venture capital and private equity firms such as Sterling Partners, ARCH Venture Partners, and MK Capital, where he focused on investments in high-growth technology companies. A native of Chicago, Samir has a BS and MS in Electrical Engineering from the University of Illinois, and an MBA from the University of Chicago Booth School of Business.

Here’s the full text of Samir Shah’s insights from our Executive Roundtable:

Data Center Frontier: The long-predicted migration of enterprise IT workloads into third-party data centers appears to be gaining momentum. A key argument for keeping data and applications on-premises has been security. With the ongoing series of corporate data compromises, can service provider facilities – whether cloud or colocation – now make the case that they are more secure than the on-premises data center?

Samir Shah: Service providers can offer a value proposition focused on security by addressing several key areas in their deployment models.

The first area to highlight is physical security. We’ve seen success working with service providers who deploy modular data centers to provide physical separation between customers. A layered approach allows for access based on individual needs. Note that the Department of Defense (DoD) has weighed in on physical security by creating a standard (ICD 705 Version 4.1) to help ensure robust infrastructure.

The second security focus area for service providers is to leverage their diverse customer base to stay current on various security standards. As a result, best practices employed by large enterprises trickle down to mid-sized and small customers who would otherwise lack the scale to access these capabilities. Lastly, service providers should use analytics to capture anomalies that take place in their data center environments. As an example, we’ve seen service providers track changes in firmware code size to detect attacks on critical systems (generators, chillers, switchgear, etc.).
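For illustration only, the sketch below shows one way such a firmware-size check could work. The controller models, baseline sizes, and tolerance are hypothetical assumptions for the example, not BASELAYER’s implementation.

    # Hypothetical firmware-size check: flag images whose size drifts from a
    # known-good baseline, one signal a provider might use to spot tampering
    # on critical systems (generators, chillers, switchgear).

    KNOWN_GOOD_SIZES = {                  # assumed baselines, bytes per controller model
        "genset-controller-a": 1_048_576,
        "chiller-controller-b": 524_288,
    }

    def firmware_size_anomaly(model: str, observed_size: int, tolerance: float = 0.01) -> bool:
        """Return True if the observed firmware image deviates from the baseline
        by more than the given fractional tolerance."""
        baseline = KNOWN_GOOD_SIZES.get(model)
        if baseline is None:
            return True  # unknown controller model: escalate for review
        return abs(observed_size - baseline) / baseline > tolerance

    # A genset controller reporting an image roughly 5% larger than baseline is flagged.
    print(firmware_size_anomaly("genset-controller-a", 1_100_000))  # True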

The clear trend for the industry is increased trust in moving highly sensitive workloads to shared environments. A great example took place earlier this year when the DoD moved classified data to Amazon’s cloud to centralize department data and systems. We believe this to be the first of many similar announcements from security-focused end users.

Data Center Frontier: Cloud platforms are seeking to expand at an accelerated rate. What has this meant to the data center supply chain and construction practices? What are the areas where data center developers can innovate to further improve the speed of deployment (and help address the capacity planning challenge)? Or have we reached our “speed limit” for data center deployment?

Samir Shah: There have been advancements in the time to deploy data center capacity. However, we have only scratched the surface in terms of speed and efficiency. When talking to individuals in the cloud industry, we often hear about demand at massive scale, reaching hundreds of megawatts per site. To improve deployment speed, cloud and data center companies need to work on forward-looking capacity planning models. This will allow for shorter overall lead times through traditional manufacturing best practices such as safety stock and inventory management.
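As a rough sketch of how safety stock translates to capacity planning, the calculation below applies the standard safety-stock formula to quarterly demand figures. All numbers, lead times, and service levels are made-up assumptions for illustration, not BASELAYER or customer data.

    # Illustrative safety-stock calculation for data center capacity planning.
    from statistics import NormalDist

    quarterly_demand_mw = [40, 65, 55, 80]   # assumed historical demand, MW per quarter
    lead_time_quarters = 2                   # assumed manufacturing + delivery lead time
    service_level = 0.95                     # target probability of not running short

    mean_demand = sum(quarterly_demand_mw) / len(quarterly_demand_mw)
    variance = sum((d - mean_demand) ** 2 for d in quarterly_demand_mw) / (len(quarterly_demand_mw) - 1)
    std_demand = variance ** 0.5
    z = NormalDist().inv_cdf(service_level)

    # Safety stock = z * sigma(demand) * sqrt(lead time); the reorder point adds expected demand.
    safety_stock_mw = z * std_demand * lead_time_quarters ** 0.5
    reorder_point_mw = mean_demand * lead_time_quarters + safety_stock_mw

    print(f"safety stock ~ {safety_stock_mw:.1f} MW, reorder point ~ {reorder_point_mw:.1f} MW")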

From a construction, real estate, and permitting perspective, standardization is the key to reducing cycle time. Adoption of standards such as UL 2755 would help local officials streamline the permitting process and improve delivery timelines.

A BASELAYER module at the SRP DataStation near Phoenix. (Photo: BASELAYER)

Data Center Frontier: The EU’s General Data Protection Regulation (GDPR) has finally arrived. What have been the most significant issues in the transition for data center providers and their customers? Are we likely to see similar privacy regulation in the U.S., and if so, what are the implications for the data center industry?

Samir Shah: It is hard to predict political and regulatory trends in the US and other regions. But companies looking to do business with EU citizens will be forced to address GDPR in a short timeframe. One clear short-term result from this regulation will be the need for a multi-zone data center strategy. In this new paradigm, having a consistent unit of data center deployment will be critical to ensuring deployment speed, uniformity, and cost structure savings across a distributed geographic footprint.

Data Center Frontier: All our “Things” are getting smarter and more powerful, bringing the computing power of the cloud into devices in our pockets. As more workloads begin to move to the edge of the network, how will it change how – and where – we deploy infrastructure?

Samir Shah: As workloads move closer to the edge, data center capacity will be needed anywhere users connect to wireless signal sources. In today’s 4G LTE networks, round-trip latency ranges between 60 and 70 ms. In the post-5G world, the only way to achieve “1G to your device” is to reduce that latency to less than 8 ms. Physical limitations will result in data center resources being colocated at signal sources to eliminate delays caused by peering hops.
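A quick back-of-the-envelope calculation shows why the latency budget forces this colocation. The figure of roughly 200 km of fiber per millisecond is a common rule of thumb, and the processing allowance below is an assumption for illustration.

    # Rough distance budget implied by a sub-8 ms round trip. Light in fiber covers
    # roughly 200 km per millisecond (~2/3 the speed of light in vacuum).
    SPEED_IN_FIBER_KM_PER_MS = 200

    def max_one_way_distance_km(rtt_budget_ms: float, processing_ms: float = 0.0) -> float:
        """Maximum one-way fiber distance that fits inside a round-trip latency budget,
        after subtracting an assumed allowance for processing and peering hops."""
        usable_ms = max(rtt_budget_ms - processing_ms, 0.0)
        return usable_ms / 2 * SPEED_IN_FIBER_KM_PER_MS

    print(max_one_way_distance_km(8))      # 800.0 km with no processing overhead
    print(max_one_way_distance_km(8, 4))   # 400.0 km if 4 ms goes to equipment and peering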

Another way to look at this issue is to compare internet transport traffic to a supply chain distribution system (e.g., Amazon one-day shipping). In any supply chain model, as transit time is reduced, the system realizes improved throughput. In the 5G use case, reducing transit time to 1-8 ms increases throughput from the 18-21 Mb/s typical of a 4G network to as much as 1,288 Mb/s.
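The analogy maps onto the familiar relationship between data in flight and round-trip time: for a fixed window of in-flight data, throughput scales inversely with transit time. The window size in the sketch below is an assumed value chosen only to make the 4G-versus-5G contrast concrete, not a measured figure.

    # Per-flow throughput for a fixed amount of in-flight data (window / RTT).
    def throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
        """Approximate throughput in Mb/s for a fixed in-flight window and round trip."""
        return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

    WINDOW_BYTES = 512 * 1024  # assumed 512 KB of data in flight

    print(f"{throughput_mbps(WINDOW_BYTES, 65):.0f} Mb/s at a 65 ms round trip")  # ~65 Mb/s
    print(f"{throughput_mbps(WINDOW_BYTES, 4):.0f} Mb/s at a 4 ms round trip")    # ~1,049 Mb/s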

In terms of how these deployments will look, traditional data center building blocks (UPS, batteries, generators, etc.) will not translate “as-is” to edge environments. Manufacturers and technology providers will have to take a hard look at their form factors, features, and flexibility in the early adoption phase of edge deployments.