Roundtable: The Shape of Edge Computing is Still Emerging

Nov. 15, 2017
In our Executive Roundtable, data center thought leaders debate the most promising opportunities in edge computing, in terms of location, form factor and timing.

Today we continue our Data Center Executive Roundtable, a quarterly feature showcasing the insights of thought leaders on the state of the data center industry, and where it is headed. In today’s discussion, our panel of experienced data center executives – IO Chief Operating Officer Rick Crutchley, Marvin Rowell of BASELAYER, Vertiv VP of Marketing & Strategy Amy Johnson, and Jeff Klaus, GM of Intel Data Center Software Solutions – discuss the most promising opportunities in edge computing.

The conversation is moderated by Rich Miller, the founder and editor of Data Center Frontier.

Data Center Frontier:  Everyone is talking about the edge and the potential for a more distributed infrastructure, but there seem to be many definitions of its location. What do you see as the most promising opportunities in edge computing, in terms of location, form factor and timing?

RICK CRUTCHLEY, IO

Rick Crutchley: Every few years the data center pendulum seems to swing from centralized to distributed and back again. Just when we thought we’d be housing data at massive, centralized locations, the Internet of Things (IoT) and big data analytics came into play.

Gartner, Inc. forecasts that 8.4 billion connected things will be in use worldwide in 2017, up 31 percent from 2016, and that the total will reach 20.4 billion by 2020. As drones, wearables, self-driving cars, robots, and other “devices” still to come take hold, there will be a need to process all of this data quickly, not in a centralized data center or even the cloud, where network latency can negatively impact speed, but at a location close to where the data is generated, so it can be processed in real time. In other words, we’re going to see last-mile device connectivity and edge computing nodes that aggregate device data.

These edge computing nodes have already taken the form of complete, self-contained modules containing the physical infrastructure into which servers, racks and other IT gear can be easily integrated. These modules can be located within the data center itself or in non-climate-controlled environments, such as military operations, mining, oil and gas, and other distributed operations, where split-second latency can deliver real benefits.

These modules can be deployed in a relatively short amount of time, as few as 12 to 18 weeks. That is a huge leap over the months or even years previously required just to refresh a legacy corporate data center.

In the future, we could see the devices themselves gain the ability to process their own data and move it back to the cloud for machine learning purposes.

AMY JOHNSON, Vertiv

Amy Johnson: We see the edge as anywhere and everywhere we find one or more IoT devices and/or sensors requiring what the military community used to call C3I – command, control, communications, and intelligence. In IT-speak, this means the devices require not only a gateway, but also some level of computational resources, storage, and the ability to manipulate data and provide services locally, without the need to send 100 percent of the data upstream to a data center for processing.

We see numerous applications emerging that will likely consolidate into 8 to 10 key archetypes over time. Let’s focus on two applications today. The first is fully integrated systems that incorporate local client and regulatory knowledge and skilled resources in a customized solution. We have had success with our systems integrators managing local and regional projects like this that scale to the low hundreds of installations. These range in size from 1- to 4-rack passive (no cooling) systems, to row-based systems with 5 to 8 racks employing active cooling, fire suppression, local DCIM, remote monitoring, and UPS and power distribution, all the way up to aisle-based deployments with 14 to 20 racks. Examples include micro data center infrastructure for IT equipment or communications closets for first responders, branch offices, small businesses, schools and universities, and industrial plants.

The second involves higher-volume applications for large-scale deployments within factory, industrial, process or government segments (think smart cities). In these cases, a single-SKU configuration, developed as a customized, application-specific solution, provides a better overall fit, improved pricing and, most importantly, the quality and repeatability only available from factory production. Examples include single-rack integrated point-of-presence systems for cities, hospitals, and large retail centers, where a common design is replicated across numerous similar facilities across a wide region or, as in the case of large retail companies, globally.

When it comes to today’s network edge, one thing is consistent across location, application and architecture – these deployments are becoming more and more mission-critical and must be supported accordingly.

MARVIN ROWELL, BASELAYER

Marvin Rowell: To many, edge computing is defined as entering a new market or expanding operations to a smaller population center. With growing demand for products and services such as IoT, 5G/6G, and high-bandwidth content distribution, this way of defining edge computing only scratches the surface. If we segment internet connectivity into Long Haul, Metro Core, Metro Edge, and Local Access layers, we see opportunities at the Metro Edge layer in both rural and urban environments; BASELAYER recently co-presented with Clint Poole of Salt River Project (SRP) on this topic.

While large-scale data centers typically reside in the Long Haul or Metro Core zones, the Metro Edge (potentially at utility substations and/or cell tower sites) offers the most opportunity for expanding edge infrastructure. A distributed modular data center footprint in this zone would fill in the gaps and connect centralized nodes with fiber-enabled residential and commercial endpoint locations. We estimate that this will take place over the next two to three years, enabling high-availability data center capacity across all layers of the network stack.

JEFF KLAUS, Intel

Jeff Klaus: FPGA technologies are enabling this area. Their flexibility will help handle unique data at the source and can provide an avenue for innovation and new services.

Editor’s note: FPGAs (field programmable gate arrays) are semiconductors that can be reprogrammed to perform specialized computing tasks. FPGAs allow users to tailor compute power to specific workloads or applications.

NEXT: Whither the Corporate Data Center?

Keep pace with the fast-moving world of data centers and cloud computing by following us on Twitter and Facebook, connecting with me on LinkedIn, and signing up for our weekly newsletter.

Explore the evolving world of edge computing further through Data Center Frontier’s special report series and ongoing coverage.

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
