Data Center Frontier

Google: New Algorithm Will Make Our Cloud Platform Faster

By Rich Miller - July 20, 2017

A Google Jupiter network switch. (Image: Google)

Google is wielding its expertise in network optimization in the cloud platform wars. The company is enhancing its Google Cloud Platform services with a cutting-edge algorithm to reduce network congestion, which should offer improved application and web site performance for customers.

Google developed the technology, known as BBR (short for “Bottleneck Bandwidth and Round-trip propagation time”), and says it has accelerated performance for its in-house properties. “Deploying BBR has resulted in higher throughput, lower latency, and better quality of experience across Google services, relative to the previous congestion control algorithm, CUBIC,” said Neal Cardwell, senior staff software engineer.

It’s the latest in a series of moves by Google to boost the competitive position of Google Cloud Platform (GCP) in the high-stakes battle for cloud dominance. Earlier this week it unveiled the Google Transfer Appliance, which can be used to physically transfer large volumes of data to the GCP cloud platform. Google’s chief rivals – Amazon Web Services, Microsoft and Oracle – continue to roll out new features as well.

Speed as a Competitive Metric

In seeking to differentiate its cloud, Google is leveraging its reputation for fast response time – which Google users experience first-hand every time they type a search query.

“At Google, our long-term goal is to make the Internet faster,” Cardwell writes in a blog post from the Google Cloud team announcing the BBR rollout. “Over the years, we’ve made changes to make TCP faster, and developed the Chrome web browser and the QUIC protocol. BBR is the next step.”

Google isn’t alone in this effort. All the major hyperscale players invest heavily in optimizing their networks. Amazon Web Services is developing custom semiconductors to accelerate its cloud network, fine-tuning chips to move data faster between its data centers. Facebook has built a dedicated network to manage the huge data flows of machine-to-machine (M2M) traffic between its facilities.

The major cloud builders are also interested in next-generation networking technologies like silicon photonics, which was among the demos at the Open Compute Project Summit, an event showcasing new hardware for the hyperscale crowd.

Several Benefits for Customers

Early users of BBR on the Google Cloud say they have seen a difference in performance.

“BBR allows the 500,000 WordPress sites on our digital experience platform to load at lightning speed,” said Jason Cohen, Founder and CTO of WP Engine. “According to Google’s tests, BBR’s throughput can reach as much as 2,700x higher than today’s best loss-based congestion control; queueing delays can be 25x lower. Network innovations like BBR are just one of the many reasons we partner with GCP.”

Google says customers can automatically benefit from BBR in two ways:

  • Traffic Movement Within Google Cloud: First, when GCP customers talk to GCP services like Cloud Bigtable, Cloud Spanner, or Cloud Storage, the traffic from the GCP service to the application is sent using BBR. This means speedier access to your data.
  • Traffic Movement from Google Cloud to Internet users: When a GCP customer uses Google Cloud Load Balancing or Google Cloud CDN to serve and load balance traffic for their web site, the content is sent to users’ browsers using BBR. This means faster web page downloads for users of your site.

Google says implementing BBR improved YouTube network throughput by 4 percent on average globally, and by more than 14 percent in some countries. “These represent substantial improvements for all large user populations around the world, across both desktop and mobile users,” said Cardwell. “These results are particularly impressive because YouTube is already highly optimized; improving the experience for users watching video has long been an obsession here at Google.”

What Exactly is BBR?

Congestion control algorithms determine how fast a device should send data, and they run inside every computer, phone, or tablet. The Internet has largely used loss-based congestion control, relying on indications of lost packets as the signal to slow down.
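The loss-based approach can be sketched as a simple AIMD (additive-increase, multiplicative-decrease) window update. This is an illustrative simplification of the family CUBIC belongs to, not the actual Reno or CUBIC implementation:

```python
def aimd_update(cwnd, loss_detected, mss=1):
    """Loss-based congestion control in miniature: grow the congestion
    window until packet loss appears, then back off sharply."""
    if loss_detected:
        return max(cwnd / 2, mss)  # multiplicative decrease on loss
    return cwnd + mss              # additive increase (per RTT) otherwise
```

The key property BBR moves away from: this sender only slows down *after* packets are lost, so it tends to fill queues until they overflow.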

“We need an algorithm that responds to actual congestion, rather than packet loss,” Cardwell writes. “BBR targets this with a ground-up rewrite of congestion control. We started from scratch, using a completely new paradigm: to decide how fast to send data over the network, BBR considers how fast the network is delivering data. For a given network connection, it uses recent measurements of the network’s delivery rate and round-trip time to build an explicit model that includes both the maximum recent bandwidth available to that connection, and its minimum recent round-trip delay. BBR then uses this model to control both how fast it sends data and the maximum amount of data it is willing to allow in the network at any time.”
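The model Cardwell describes can be sketched in a few lines. This is a hypothetical simplification of BBR's two filters (windowed-max delivery rate, windowed-min round-trip time), not Google's actual kernel implementation; class and method names are ours:

```python
from collections import deque

class BBRModelSketch:
    """Illustrative model of BBR's two estimates: the maximum recent
    bandwidth available to a connection and its minimum recent RTT."""

    def __init__(self, window=10):
        self.rates = deque(maxlen=window)  # recent delivery-rate samples (bytes/sec)
        self.rtts = deque(maxlen=window)   # recent round-trip-time samples (sec)

    def on_ack(self, delivery_rate, rtt):
        self.rates.append(delivery_rate)
        self.rtts.append(rtt)

    def bottleneck_bw(self):
        return max(self.rates)  # windowed max: estimated bottleneck bandwidth

    def min_rtt(self):
        return min(self.rtts)   # windowed min: estimated round-trip propagation time

    def pacing_rate(self, gain=1.0):
        # "How fast it sends data": pace at (a gain times) the estimated bandwidth.
        return gain * self.bottleneck_bw()

    def cwnd(self, gain=2.0):
        # "The maximum amount of data ... in the network at any time":
        # a small multiple of the bandwidth-delay product.
        return gain * self.bottleneck_bw() * self.min_rtt()
```

Note that nothing here reacts to packet loss: the send rate and in-flight cap follow the measured delivery rate and delay, which is the paradigm shift the quote describes.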

While Google is leveraging its leadership in web performance, it is playing catch-up in other areas, seeking to match existing features offered by other cloud platforms. A case in point: the Google Transfer Appliance, which follows last year’s announcement of Amazon Snowball.

Shipping Data to the Cloud

Both services address an ancient problem in computing – how to move huge amounts of data without clogging the network pipes. This problem was famously described by computer scientist Andrew Tanenbaum, who counseled to “never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.”

Several form factors for the Google Transfer Appliance, which allows customers to ship large volumes of data to be ingested by the Google Cloud Platform. (Photo: Google)

“Working with customers, we’ve found that the typical enterprise has many petabytes of data, and available network bandwidth between 100 Mbps and 1 Gbps,” writes Ben Chong, Google’s product manager for Transfer Appliance. “Depending on the available bandwidth, transferring 10 PB of that data would take between three and 34 years — much too long. Sometimes the best way to move data is to ship it on physical media.”
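Chong's arithmetic is easy to reproduce if you assume the link isn't fully dedicated to the transfer. A rough back-of-the-envelope, with an assumed effective-utilization factor of about 75% (our assumption, not stated in the article):

```python
def transfer_years(petabytes, link_bps, utilization=0.75):
    """Rough time to move data over a WAN link. `utilization` is an
    assumed effective-throughput fraction, accounting for protocol
    overhead and shared use of the link (illustrative, not measured)."""
    bits = petabytes * 1e15 * 8
    seconds = bits / (link_bps * utilization)
    return seconds / (365.25 * 24 * 3600)

# 10 PB at 1 Gbps comes out near 3.4 years; at 100 Mbps, near 34 years --
# roughly the "between three and 34 years" range Chong cites.
```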

Transfer Appliance slides into a standard 19-inch rack. With capacity for up to one petabyte of compressed data, Transfer Appliance helps migrate data faster than sending it over a typical network. The appliance encrypts customer data at capture, and the data isn’t decrypted until it reaches its final cloud destination.

“Like many organizations we talk to, you probably have large amounts of data that you want to use to train machine learning models,” said Chong. “You have huge archives and backup libraries taking up expensive space in your data center. Or IoT devices flooding your storage arrays. There’s all this data waiting to get to the cloud, but it’s impeded by expensive, limited bandwidth. With Transfer Appliance, you can finally take advantage of all that GCP has to offer — machine learning, advanced analytics, content serving, archive and disaster recovery — without upgrading your network infrastructure or acquiring third-party data migration tools.”
