Facebook Express Network Speeds Traffic Between Data Centers

May 1, 2017
As data volumes have soared along with its growth, Facebook has built a dedicated network to manage the huge flows of machine-to-machine (M2M) traffic between its data centers.

Facebook delivers an enormous amount of data to its 2 billion users, including the photos and videos you share with your friends. It turns out that’s just the tip of the data iceberg, dwarfed by the massive flow of data between Facebook’s data centers.

As data volumes have soared, Facebook has decided to separate its data traffic, building a dedicated network to manage the huge flows of machine-to-machine (M2M) traffic between its facilities. The company will continue to use its Classic Backbone (CBB) to deliver status updates and photos to its users, while traffic between data centers now travels across a new network called the Express Backbone (EBB).

The initiative, which was driven by the massive growth in video and photo data uploaded by Facebook users, will allow the company to fine-tune the data flows across each network and avoid the “traffic jams” in which bursty machine traffic congests user-facing traffic.

“In recent years, bandwidth demand for cross-data center replication of rich content like photos and video has been increasing rapidly, challenging the efficiency and speed of evolution of the classic backbone,” write Facebook’s Mikel Jimenez and Henry Kwok in a blog post. “Furthermore, machine-to-machine traffic often occurs in large bursts that may interfere with and impact the regular user traffic, affecting our reliability goals. As new data centers were being built, we realized the need to split the cross-data center vs Internet-facing traffic into different networks and optimize them individually.”

The Express Backbone was built in less than a year and connects the company’s global network of data center campuses, including its U.S. campuses in Oregon, North Carolina, Iowa and Texas, and its European facilities in Luleå, Sweden and Clonee, Ireland.

A diagram of the traffic volume across Facebook’s network, and the split between M2M and user traffic. (Source: Facebook)

With the Express Backbone, Facebook engineers sought to improve upon some of the technical constraints of the “classic” backbone network. Its approach, which is explained in detail in the technical blog post, provides new levels of control over the handling of different types of traffic. That includes a hybrid model for traffic engineering (TE), using both distributed control agents and a central controller.

“This model allows us to control some aspects of traffic flow centrally, e.g., running intelligent path computations,” the Facebook team writes. “At the same time, we still handle network failures in a distributed fashion, relying on in-band signaling between Open/R agents deployed on the network nodes.

“Such a hybrid approach allows the system to be nimble when it encounters congestion or failure. In EBB, the local agents can immediately begin redirecting traffic when they spot an issue (local response). The central system then has time to evaluate the new topology and come up with an optimum path allocation, without the urgent need to react rapidly.”
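
To make that division of labor concrete, here is a minimal sketch of how such a hybrid control loop could be structured: a local agent restores connectivity the instant a link fails, while a central controller recomputes optimal paths later, at its own pace. All of the names and the toy topology below are hypothetical illustrations, not Facebook’s actual Open/R or EBB interfaces.

```python
import heapq


def shortest_path(links, src, dst):
    """Dijkstra over links = {(a, b): cost}; returns a node list or None."""
    graph = {}
    for (a, b), cost in links.items():
        graph.setdefault(a, []).append((cost, b))
        graph.setdefault(b, []).append((cost, a))
    queue, seen = [(0, src, [src])], set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for cost, nxt in graph.get(node, []):
            heapq.heappush(queue, (dist + cost, nxt, path + [nxt]))
    return None


class LocalAgent:
    """Runs on a node; reroutes affected traffic the moment a link fails."""

    def __init__(self, node, links):
        self.node, self.links = node, dict(links)
        self.paths = {}  # destination -> active path

    def on_link_failure(self, link):
        # Local response: drop the failed link and immediately shift any
        # affected traffic to the best surviving path, without waiting
        # for the central controller.
        self.links.pop(link, None)
        self.links.pop((link[1], link[0]), None)
        for dst, path in list(self.paths.items()):
            hops = set(zip(path, path[1:]))
            if link in hops or (link[1], link[0]) in hops:
                self.paths[dst] = shortest_path(self.links, self.node, dst)


class CentralController:
    """Recomputes a globally optimal allocation later, without urgency."""

    def recompute(self, links, agents):
        for agent in agents:
            agent.links = dict(links)
            for dst in agent.paths:
                agent.paths[dst] = shortest_path(links, agent.node, dst)


# Toy example: A reaches C via B (total cost 2) until the B-C link fails,
# at which point the local agent falls back to the direct A-C link.
links = {("A", "B"): 1, ("B", "C"): 1, ("A", "C"): 5}
agent = LocalAgent("A", links)
agent.paths["C"] = shortest_path(links, "A", "C")  # ['A', 'B', 'C']
agent.on_link_failure(("B", "C"))                  # now ['A', 'C']
```

The point of the split is that the agent’s reroute is cheap and immediate, while the controller’s recomputation can afford to be slower and smarter.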

The result is a “path allocation algorithm” that can be adjusted to address the different requirements of each class of traffic; a rough sketch follows the list below. For example, it can:

  • Minimize latency for latency-sensitive traffic.
  • Minimize path congestion for latency-insensitive traffic.
  • Schedule latency-sensitive traffic ahead of latency-insensitive traffic.
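
As referenced above, here is a rough sketch of what class-aware path selection along those lines might look like. The traffic classes, scoring rules, and data shapes are illustrative assumptions, not the actual EBB algorithm.

```python
def allocate(flows, candidate_paths, utilization):
    """Assign each flow a path according to its traffic class.

    flows: list of dicts like {"id": ..., "class": "latency_sensitive" | "bulk"}
    candidate_paths: {flow_id: [(path, latency_ms), ...]}
    utilization: {path: current load fraction}
    """
    # Schedule latency-sensitive traffic ahead of latency-insensitive traffic.
    ordered = sorted(flows, key=lambda f: f["class"] != "latency_sensitive")
    allocation = {}
    for flow in ordered:
        paths = candidate_paths[flow["id"]]
        if flow["class"] == "latency_sensitive":
            # Minimize latency for latency-sensitive traffic.
            path, _ = min(paths, key=lambda p: p[1])
        else:
            # Minimize path congestion for latency-insensitive (bulk) traffic.
            path, _ = min(paths, key=lambda p: utilization.get(p[0], 0.0))
        allocation[flow["id"]] = path
    return allocation


# Both flows see the same candidates; the latency-sensitive one takes the
# fast path, while the bulk flow takes the least-loaded one.
flows = [{"id": "bulk-1", "class": "bulk"},
         {"id": "rpc-1", "class": "latency_sensitive"}]
paths = {"bulk-1": [(("A", "B", "C"), 12), (("A", "C"), 30)],
         "rpc-1": [(("A", "B", "C"), 12), (("A", "C"), 30)]}
util = {("A", "B", "C"): 0.9, ("A", "C"): 0.2}
print(allocate(flows, paths, util))
```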

“We’re very excited to have EBB running in production, but we are continuously looking to make further improvements to the system,” the Facebook engineers reported.

“For example, we’re extending the controller to provide a per-service bandwidth reservation system. This feature would make bandwidth allocation an explicit contract between the network and services, and would allow services to throttle their own traffic under congestion. In addition, we’re also working on a scheduler for large bulk transfers so that congestion can be avoided proactively, as opposed to managed reactively.”
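
As a rough illustration of the service-side half of such a contract, the sketch below throttles a service with a token bucket sized to its reserved rate. The class and parameter names are hypothetical, not Facebook’s actual reservation system.

```python
import time


class ServiceReservation:
    """Token bucket enforcing a service's reserved bandwidth (bytes/sec)."""

    def __init__(self, reserved_bps, burst_bytes):
        self.rate = reserved_bps        # sustained reserved rate
        self.capacity = burst_bytes     # burst the contract allows
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def try_send(self, nbytes):
        """Return True if nbytes fits within the reservation right now."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False  # over the contract: the service should throttle itself


# Example: a replication service with a 10 MB/s reservation and 1 MB burst.
replication = ServiceReservation(reserved_bps=10_000_000, burst_bytes=1_000_000)
if not replication.try_send(64_000):
    pass  # defer this transfer; a bulk scheduler could queue it instead
```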

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
