NEW YORK – Edge computing isn’t just a hot trend, says Yuval Bachar. It’s the future of IT infrastructure.
“We need to have a new Internet,” said Bachar, the Principal Engineer for Data Center Architecture at LinkedIn. “The current Internet doesn’t give us what we need. It’s not regional anymore. We have to rebuild it on localization.”
Bachar shared his views on edge computing, and what LinkedIn is doing to prepare for it, in a recent presentation at the DCD Edge conference in New York. His convictions about the future were clear in the title of his presentation: “EdgeCloud, The Next Mega Data Center.”
LinkedIn sees the edge as a “once in a generation innovation cycle,” said Bachar, and has spent several years developing infrastructure to support this new architecture. This began with a small network of micro data centers that LinkedIn deployed as a proof of concept in several markets. It continues with its work on Open19, an open hardware initiative co-founded by LinkedIn to develop standardized solutions for smaller data centers, including designs optimized for edge computing.
One Company’s Approach to Edge Demand
The rise of edge computing is driven by the increased use of consumer mobile devices, especially the consumption of video and virtual reality content, and by the growth of sensors as part of the Internet of Things. Artificial intelligence emerged as a major edge use case in 2017, while the 800-pound gorilla of edge traffic – the autonomous car – looms on the horizon.
Companies across the technology sector are preparing for edge computing. LinkedIn’s EdgeCloud thesis provides an example of how one large social platform is thinking about future infrastructure, and adapting its data centers and networks for the road ahead. It’s not a small job.
“EdgeCloud will be the future of the data center,” said Bachar. “It’s a very different architecture.”
Bachar sees the need for many small data centers at telecom towers, the edgiest of several scenarios for the evolution of edge computing. The key challenge, he says, is managing machine-to-machine (M2M) interactions, which will generate large amounts of data that must be processed and analyzed. Bachar believes that only about 5 percent of that data will move across the network to core data centers, and that will require analytics at the edge to identify a smaller dataset that has business value. This requires compute power and data storage in new places, where there is currently no data center capacity.
LinkedIn began experimenting with edge infrastructure several years ago with deployments of LinkedIn Edge Connect, a micro-modular data center. LinkedIn deployed five facilities – two in the U.S., two in India and one in China. Each Edge Connect facility housed 8 to 16 servers and one switch, and supported 50 Gbps peering and transit.
The areas where LinkedIn deployed Edge Connect units saw a 15 to 50 percent improvement in the user experience, Bachar said, confirming the benefits of edge infrastructure.
Open19 as Platform for the Edge at Scale
Those results prompted LinkedIn to go back to the lab and think more deeply about its hardware and data center designs. In 2016, the company had just come through an 18-month journey in which it doubled its infrastructure and overhauled its design. The result was a cutting-edge data center in Portland, Oregon that featured one of the largest deployments yet of rear-door cooling units, which support extreme power density.
The Portland data center was highly optimized around LinkedIn’s requirements. With edge deployments on the horizon, a more flexible design solution was required. Part of the answer is Open19, which the LinkedIn team created to offer options beyond the Open Compute Project, particularly for smaller data centers. Both projects emphasize rack-level design, disaggregated systems and modular integration of components. Bachar says Open19 is differentiated by its focus on a 19-inch rack and licensing terms that allow participants better control over their intellectual property.
Open19 seeks to use “building block” elements to create repeatable infrastructure designs that can be used in everything from hyperscale facilities to edge micro-modules. Several of the early participants in Open19 are focused on edge computing, including Vapor IO, Flex, Crown Castle and GE Digital, which is developing the Predix platform for industrial Internet of Things applications.
Bachar said the growing Open19 ecosystem provides options for LinkedIn as it adds edge capacity. Future edge facilities will support between 50 kW and 175 kW of IT capacity per location, and 50 to 100 servers.
“We plan to deploy this in many locations,” said Bachar. “The EdgeCloud is here and now. It’s happening.”
Explore the evolving world of edge computing further through Data Center Frontier’s special report series and ongoing coverage.