Data Center Frontier

Charting the future of data centers and cloud computing.


Sturdier Servers: Cloud Platforms Say Servers Living Longer, Saving Billions

By Rich Miller - February 14, 2022


A long row of racks housing servers inside the Google data center in Mayes County, Oklahoma. (Photo: Google)


As of 2020, the estimated lifespan of a server on a cloud computing platform was three years, an assumption embraced by all three of the leading platforms. But that has changed over the past two years, with all three cloud providers extending their estimates for how long they’ll use their IT equipment.

Amazon Web Services now says its servers have a "useful life" of five years, while Google and Microsoft expect servers to last for four years.

These estimates are developed for accounting purposes, and can shift billions of dollars to a company’s bottom line by deferring spending on hardware and expensing assets over a longer period. But the trend toward more durable cloud servers is also likely influenced by design refinements, and may help providers manage their server fleets differently to support continued growth during supply chain challenges in the semiconductor industry.

‘Tremendous Effort’ to Extend Server Life

AWS has bumped up its estimate of server lifespan in each of the past two years, citing “continuous improvements in our hardware, software, and data center designs,” as noted in The Register. Cloud providers design their own gear and data centers, which helps them optimize the performance and energy efficiency of their hardware. Longevity has perhaps been less of an emphasis, given the expectation that these providers will refresh their servers every three years to upgrade to more powerful chips.

But that’s changing, according to Amazon Chief Financial Officer Brian Olsavsky.

“Although we’re calling out an accounting change here, this really reflects a tremendous team effort by AWS to make our server and network equipment last longer,” Olsavsky said on Amazon’s fourth-quarter earnings call. “We’ve been operating at scale for over 15 years, and we continue to refine our software to run more efficiently on the hardware. This then lowers stress on the hardware and extends the useful life, both for the assets that we use to support AWS’ external customers as well as those used to support our own internal Amazon businesses.”

When you run millions of servers and storage units, modest changes in their cost and lifespan can become a powerful tool to wring expenses out of IT operations. That’s reflected in data points from the companies’ 10K filings.

  • In July 2020, Microsoft extended the estimated useful life from 3 years to 4 years for servers, and from 2 years to 4 years for network equipment. The company estimated the change would boost operating income by $2.7 billion for fiscal year 2021.
  • In January 2021, Alphabet adjusted the estimated useful life of servers from 3 years to 4 years and network equipment from 3 years to 5 years. The company said the change would boost operating income by $2 billion for 2021.
  • Amazon says its extensions of useful life estimates in 2020 and 2021 raised operating income by $2 billion in 2021, and will likely mean a $3.1 billion improvement for 2022.
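The arithmetic behind these figures is straight-line depreciation: an asset's cost is expensed evenly over its estimated useful life, so stretching that life lowers the annual charge. A minimal sketch of the mechanism, using an illustrative fleet value rather than any figure from the companies' actual filings:

```python
# Straight-line depreciation: annual expense = asset cost / useful life.
# The $12B fleet value below is a hypothetical illustration, not a
# number from any company's 10-K.

def annual_depreciation(asset_cost: float, useful_life_years: int) -> float:
    """Annual straight-line depreciation expense for an asset cohort."""
    return asset_cost / useful_life_years

fleet_cost = 12_000_000_000  # hypothetical book value of a server fleet

expense_3yr = annual_depreciation(fleet_cost, 3)  # $4.0B per year
expense_4yr = annual_depreciation(fleet_cost, 4)  # $3.0B per year

# Extending useful life from 3 to 4 years cuts the annual depreciation
# charge; the difference flows directly into operating income.
savings = expense_3yr - expense_4yr  # $1.0B per year
print(f"Annual operating-income improvement: ${savings / 1e9:.1f}B")
```

The same fleet generates the same revenue either way; only the timing of the expense changes, which is why a one-year extension across a multibillion-dollar fleet can move operating income by billions.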

Longer server life will also matter in navigating the supply chain challenges in the chip sector, which are expected to persist into the second half of 2022. The limited availability of new chips has created delivery challenges for many industries including automakers and equipment vendors.

Advanced Monitoring Creates Optimization Opportunities

The component shortages in the semiconductor sector may make it harder to provision new chips and maintain equipment refresh cycles. But for cloud providers, that could be an opportunity as well as a challenge. An IDG Insight survey of 400 IT leaders found that 44% planned to shift application processing requirements to the cloud as a way to mitigate supply chain disruptions.

Hyperscale providers like AWS, Microsoft and Google run some of the largest and most sophisticated data centers on the planet, with sensors monitoring energy use, temperature and many other variables in their data halls, as well as tracking the inventory of equipment. As a result, they can conduct detailed analyses of server reliability and useful life, and use that feedback in the design of their equipment and facilities.


This has helped cloud operators improve their energy efficiency by running data centers at warmer temperatures. In the early days of the IT industry, data centers often resembled meat lockers due to the belief that cooler temperatures extended the life of servers and storage units.

Microsoft was among the pioneers in dispelling this belief. In 2008, Microsoft’s data center team placed a rack of servers in a tent in their equipment yard for 8 months with no equipment failures. Intel and Google found similar results.

That prompted hyperscale operators to raise the temperature in their server rooms above the standard range of 68 to 72 degrees Fahrenheit. Google began operating its data halls in the 80s, and Microsoft said it runs data halls in its Dublin facility at up to 95 degrees.

Microsoft now plans to operate its data centers at even warmer temperatures, enabling it to dramatically reduce its water use and environmental impact. The company isn’t announcing a target temperature range, saying it is “continuing research to determine what the final set point will be, before additional evaporative cooling is needed to preserve server reliability.”

Microsoft is also researching other ways to extend the life of servers, including operating them in sealed nitrogen environments. In 2020, Microsoft said servers housed for two years on the ocean floor were “substantially more reliable” than those in traditional data centers, suggesting a strategy to improve land-based data center economics.

A nitrogen atmosphere is less corrosive than oxygen, and Microsoft says the absence of people to bump and jostle components may also help explain the difference. The findings could guide future designs for server deployments where a nitrogen environment could be used, such as unstaffed edge data centers or facilities using robotics to manage IT equipment.


Tagged With: Amazon Web Services, Google, Microsoft


About Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.

