SANTA CLARA, Calif. – When Facebook updates its storage hardware, the design goal is always “more.” That’s the case with Bryce Canyon, the new storage unit unveiled today at the 2017 Open Compute Summit.
With Bryce Canyon, Facebook has boosted the power and capacity of its storage unit, reworking the chassis to pack in 72 hard disk drives, 12 more than its predecessor, the Honey Badger unit.
Instead of Honey Badger’s design using four 1U trays of disks laid flat, the Bryce Canyon chassis resembles a tub, with hard drives inserted vertically into a 4U chassis. This allows Facebook to increase the density of its storage units, packing more data into each rack. That additional capacity, multiplied across racks and rows and data halls, helps the company get more mileage out of its vast data center infrastructure.
The Rising Tide of User Data
The new storage system is part of the company’s evolving effort to manage a flood of incoming data, with users now viewing more than 100 million hours of video per day. To keep pace with all those uploads, Facebook must constantly seek new ways to add capacity.
“With a new focus on video experiences across our family of apps, our workloads are increasingly requiring more storage capacity and density,” said Facebook’s Jason Adrian in a blog post. “Our goal was to build a platform that would not only meet our needs today, but also scale to accommodate new modules for future growth.”
Boosting the disk density means more power and heat in the same space. Eran Tal, an engineering manager at Facebook, said the deeper unit actually gave the design team better options for cooling airflow, allowing them to draw air in underneath the chassis.

“We played better Tetris with the space,” said Tal. “By going to 4U we could use larger fans for better airflow. We also approached the design with a heightened engineering awareness of airflow and the thermal challenge.”
New Designs Debut at Open Compute
Bryce Canyon was among a number of hardware upgrades announced by Facebook at the Open Compute Summit, the annual conference for the open hardware initiative, which the social network launched in 2011 when it opened its first data center. Facebook also unveiled its new Big Basin hardware for machine learning, as well as two new server designs.
The design specifications for Bryce Canyon and the other new Facebook designs will be released through the Open Compute Project, with a comprehensive set of hardware design files soon to follow.
Bryce Canyon is the latest in a series of updates to improve Facebook’s storage architecture. In 2013, Facebook and design partner Wiwynn contributed the first Open Vault storage enclosure (known as Knox) to the Open Compute Project, and then leveraged that design in 2015 to create Honey Badger, followed by the Lightning NVMe enclosure in 2016.
Innovation in Storage Design
Facebook has also developed a cold storage data center design to house photos and data that are accessed infrequently by Facebook users. It’s a web-scale implementation of tiered storage, a strategy that organizes stored data into categories based on their priority and then assigns the data to different types of storage media (or in this case, facilities) to reduce costs.
The goal is to create a top tier consisting of high-performance enterprise hardware and networking, while lower tiers can use commodity hardware or, for rarely-used assets, backup tape libraries.
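The tiering strategy described above can be sketched in a few lines of code. This is a minimal illustration, assuming hypothetical tier names and access-frequency thresholds; it is not Facebook's actual placement logic.

```python
# Minimal sketch of tiered-storage placement: map an object's access
# frequency to a storage tier. Tier names and thresholds are illustrative
# assumptions, not Facebook's real policy.

def assign_tier(accesses_per_day: float) -> str:
    """Return the storage tier for an object based on how often it is read."""
    if accesses_per_day >= 100:
        return "hot"    # high-performance enterprise hardware and networking
    if accesses_per_day >= 1:
        return "warm"   # commodity hardware
    return "cold"       # rarely-used assets: Blu-Ray or tape archive

# A frequently viewed photo stays hot; an old backup lands in cold storage.
print(assign_tier(500))    # hot
print(assign_tier(0.01))   # cold
```

The point of the split is cost: each step down the hierarchy trades access latency for cheaper media, which is exactly the trade Facebook makes by moving rarely accessed data into dedicated cold storage facilities.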
Facebook is now using Blu-Ray disks in its cold storage, teaming with Panasonic to introduce the Freeze Ray Optical Data Archiver, a commercial version of the Blu-Ray technology that was prototyped at the 2014 Open Compute Summit. Blu-Ray disks offer savings of up to 50 percent compared with the first generation of Facebook’s cold storage design, since the Blu-Ray cabinet only uses energy when it is writing data during the initial data “burn,” and doesn’t use energy when it is idle.
The Bryce Canyon storage system supports 72 3.5” hard drives and can be configured as a single 72-drive storage server, as dual 36-drive storage servers with fully independent power paths, or as a JBOD (Just a Bunch Of Disks). When configured as a storage server, Bryce Canyon supports single or dual Mono Lake CPU modules.
Toolless Design Eases Maintenance
“We discovered that for certain workloads, such as web and storage, a single-socket architecture is more efficient and has higher performance per watt,” writes Adrian. “We previously moved our web tier to utilize this architecture and have implemented the same Mono Lake building blocks in the Bryce Canyon platform.”
Like most of Facebook’s hardware designs, the Bryce Canyon system features a toolless design. Every major field-replaceable unit (FRU) can be replaced without the use of tools, including a toolless drive retention system that uses a latch mechanism to retain the bare drive. For removal, the latch assists the user by pulling the drive partially out of the system for easier handling. This simplifies deployment and maintenance, as there is no need to use carriers when adding drives.