Facebook Express Network Speeds Traffic Between Data Centers

May 1, 2017
As data volumes have soared, Facebook has built a dedicated network to manage the huge flows of machine-to-machine (M2M) traffic between its data centers.

Facebook delivers an enormous amount of data to its 2 billion users, including the photos and videos you share with your friends. It turns out that’s just the tip of the data iceberg, dwarfed by the massive flow of data between Facebook’s data centers.

As data volumes have soared, Facebook has decided to separate its data traffic, building a dedicated network to manage the huge flows of machine-to-machine (M2M) traffic between its facilities. The company will continue to use its Classic Backbone (CBB) to deliver status updates and photos to its users. Traffic between data centers now travels across a new network called the Express Backbone (EBB).

The initiative, which was driven by the massive growth in video and photo data uploaded by Facebook users, will allow the company to fine-tune the data flows across each network and avoid “traffic jams” that create congestion.

“In recent years, bandwidth demand for cross-data center replication of rich content like photos and video has been increasing rapidly, challenging the efficiency and speed of evolution of the classic backbone,” write Facebook’s Mikel Jimenez and Henry Kwok in a blog post. “Furthermore, machine-to-machine traffic often occurs in large bursts that may interfere with and impact the regular user traffic, affecting our reliability goals. As new data centers were being built, we realized the need to split the cross-data center vs Internet-facing traffic into different networks and optimize them individually.”

The Express Backbone was built in less than a year and connects the company’s global network of data center campuses, including its U.S. campuses in Oregon, North Carolina, Iowa and Texas, and its European facilities in Luleå, Sweden and Clonee, Ireland.

A diagram of the traffic volume across Facebook’s network, and the split between M2M and user traffic. (Source: Facebook)

With the Express Backbone, Facebook engineers sought to overcome some of the technical constraints of the “classic” backbone network. The new approach, explained in detail in the technical blog post, gives the company finer control over how different types of traffic are handled. That includes a hybrid model for traffic engineering (TE), combining distributed control agents with a central controller.

“This model allows us to control some aspects of traffic flow centrally, e.g., running intelligent path computations,” the Facebook team writes. “At the same time, we still handle network failures in a distributed fashion, relying on in-band signaling between Open/R agents deployed on the network nodes.

“Such a hybrid approach allows the system to be nimble when it encounters congestion or failure. In EBB, the local agents can immediately begin redirecting traffic when they spot an issue (local response). The central system then has time to evaluate the new topology and come up with an optimum path allocation, without the urgent need to react rapidly.”
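Paraphrasing that description, the sketch below is a hypothetical, simplified illustration (invented class and function names, not Facebook’s implementation) of the division of labor: a local agent reroutes around a failure immediately, while a central controller recomputes optimized paths afterward without time pressure.

```python
# A minimal sketch of the hybrid model described above (illustrative names
# only, not Facebook's code): local agents reroute around a failure
# immediately; a central controller later recomputes paths for the new topology.

from collections import deque


class LocalAgent:
    """Runs on a network node; applies a precomputed backup path on failure."""

    def __init__(self, backup_paths):
        self.backup_paths = backup_paths  # destination -> alternate path

    def on_link_failure(self, failed_link, routing_table):
        # Local response: shift any destination whose path uses the failed
        # link onto its backup path, without waiting for the controller.
        for dest, path in list(routing_table.items()):
            links = set(zip(path, path[1:]))
            if failed_link in links and dest in self.backup_paths:
                routing_table[dest] = self.backup_paths[dest]
        return routing_table


class CentralController:
    """Sees the whole topology; recomputes paths without urgency."""

    def recompute(self, topology, source, destinations):
        # Stand-in for "intelligent path computation": plain BFS shortest paths
        # over an adjacency-list topology (node -> list of neighbors).
        paths = {source: [source]}
        queue = deque([source])
        while queue:
            node = queue.popleft()
            for neighbor in topology.get(node, []):
                if neighbor not in paths:
                    paths[neighbor] = paths[node] + [neighbor]
                    queue.append(neighbor)
        return {dest: paths.get(dest) for dest in destinations}
```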

The result is a “path allocation algorithm” that can be tuned to the differing requirements of each class of traffic (a simplified sketch follows the list below). For example, it can:

  • Minimize latency for latency-sensitive traffic.
  • Minimize path congestion for latency-insensitive traffic.
  • Schedule latency-sensitive traffic ahead of latency-insensitive traffic.
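Facebook has not published the algorithm itself, but the per-class objectives above can be pictured with a short, hypothetical Python sketch (invented edge attributes, using the networkx library for path computation):

```python
# Hypothetical illustration (not the published EBB algorithm): pick a path
# metric based on the traffic class, using networkx and made-up edge
# attributes such as latency_ms, capacity_gbps, and load_gbps.

import networkx as nx


def allocate_path(topology: nx.Graph, src: str, dst: str, traffic_class: str):
    if traffic_class == "latency_sensitive":
        # Objective 1: minimize end-to-end latency.
        return nx.shortest_path(topology, src, dst, weight="latency_ms")

    # Objective 2: for latency-insensitive traffic, minimize congestion by
    # penalizing links with little spare capacity.
    for _, _, data in topology.edges(data=True):
        headroom = max(data["capacity_gbps"] - data["load_gbps"], 1e-9)
        data["congestion_cost"] = 1.0 / headroom
    return nx.shortest_path(topology, src, dst, weight="congestion_cost")


# Objective 3 (scheduling latency-sensitive traffic ahead of bulk traffic)
# would live in the scheduler rather than in path selection.
```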

“We’re very excited to have EBB running in production, but we are continuously looking to make further improvements to the system,” the Facebook engineers reported.

“For example, we’re extending the controller to provide a per-service bandwidth reservation system. This feature would make bandwidth allocation an explicit contract between the network and services, and would allow services to throttle their own traffic under congestion. In addition, we’re also working on a scheduler for large bulk transfers so that congestion can be avoided proactively, as opposed to managed reactively.”
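That reservation system is still in the works, but the “explicit contract” idea can be pictured with a small, hypothetical sketch: each service gets a reserved rate and is throttled back to it only when the backbone is congested.

```python
# Hypothetical sketch of a per-service bandwidth reservation contract
# (invented names and numbers, not Facebook's design): a service may send at
# full demand when the backbone is healthy, but is throttled to its reserved
# rate under congestion.

class BandwidthReservation:
    def __init__(self, reservations_gbps):
        self.reservations = reservations_gbps  # service name -> reserved Gbps

    def allowed_rate(self, service, demand_gbps, congested):
        if not congested:
            return demand_gbps  # no congestion: send at full demand
        # Under congestion, the contract caps the service at its reservation.
        return min(demand_gbps, self.reservations.get(service, 0.0))


# Example: a replication service with an 80 Gbps reservation asks for 120 Gbps
# while the backbone is congested, so it is throttled to 80 Gbps.
contract = BandwidthReservation({"photo_replication": 80.0})
print(contract.allowed_rate("photo_replication", 120.0, congested=True))  # 80.0
```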

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
