Sturdier Servers: Cloud Platforms Say Servers Living Longer, Saving Billions

Feb. 14, 2022
The top cloud platforms are extending the life of their server and storage gear, citing studies of equipment reliability and refinements in software. AWS now says its servers have a ‘useful life” of five years, while Google and Microsoft expect servers to last for four years.

As of 2020, the estimated lifespan of a server on a cloud computing platform was three years, an assumption embraced by all three of the leading platforms. But that has changed over the past two years, with all three cloud providers extending their estimates for how long they’ll use their IT equipment.

Amazon Web Services now says its servers have a “useful life” of five years, while Google and Microsoft expect servers to last for four years.

These estimates are developed for accounting purposes, and can shift billions of dollars to a company’s bottom line by allowing assets to be expensed over a longer period and deferring spending on replacement hardware. But the trend toward more durable cloud servers is also likely influenced by design refinements, and may help providers manage their server fleets to support continued growth during supply chain challenges in the semiconductor industry.

‘Tremendous Effort’ to Extend Server Life

As The Register noted, AWS has bumped up its estimate of server lifespan in each of the past two years, citing “continuous improvements in our hardware, software, and data center designs.” Cloud providers design their own gear and data centers, which helps them optimize the performance and energy efficiency of their hardware. Longevity has perhaps been less of an emphasis, given the expectation that these providers will refresh their servers every three years to upgrade to more powerful chips.

But that’s changing, according to Amazon Chief Financial Officer Brian Olsavsky.

“Although we’re calling out an accounting change here, this really reflects a tremendous team effort by AWS to make our server and network equipment last longer,” Olsavsky said on Amazon’s fourth-quarter earnings call. “We’ve been operating at scale for over 15 years, and we continue to refine our software to run more efficiently on the hardware. This then lowers stress on the hardware and extends the useful life, both for the assets that we use to support AWS’ external customers as well as those used to support our own internal Amazon businesses.”

When you run millions of servers and storage units, modest changes in their cost and lifespan can become a powerful tool to wring expenses out of IT operations. That’s reflected in data points from the companies’ 10-K filings (a short sketch of the underlying arithmetic follows the list):

  • In July 2020, Microsoft extended the estimated useful life from 3 years to 4 years for servers, and from 2 years to 4 years for network equipment. The company estimated the change would boost operating income by $2.7 billion for fiscal year 2021.
  • In January 2021, Alphabet adjusted the estimated useful life of servers from 3 years to 4 years and network equipment from 3 years to 5 years. The company said the change would boost operating income by $2 billion for 2021.
  • Amazon says its extensions of useful life estimates in 2020 and 2021 raised operating income by $2 billion in 2021, and will likely mean a $3.1 billion improvement for 2022.
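
To see how the arithmetic works, here is a minimal Python sketch of straight-line depreciation, the method these disclosures imply. The $30 billion fleet cost is a made-up figure for illustration, not any provider’s actual asset base.

```python
# Minimal sketch: why a longer "useful life" estimate boosts operating income.
# Straight-line depreciation spreads an asset's cost evenly over its life,
# so stretching the life from 3 to 4 years shrinks each year's expense.

def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation: cost expensed evenly over the asset's life."""
    return cost / useful_life_years

fleet_cost = 30e9  # hypothetical $30B server fleet on the books

expense_3yr = annual_depreciation(fleet_cost, 3)  # old assumption
expense_4yr = annual_depreciation(fleet_cost, 4)  # new assumption

print(f"3-year life: ${expense_3yr / 1e9:.1f}B per year")   # $10.0B
print(f"4-year life: ${expense_4yr / 1e9:.1f}B per year")   # $7.5B
print(f"Deferred expense: ${(expense_3yr - expense_4yr) / 1e9:.1f}B")  # $2.5B
```

Every dollar of depreciation deferred this way shows up directly as operating income, which is why a one-year change in an estimate can move results by billions.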

Longer server life will also matter in navigating the supply chain challenges in the chip sector, which are expected to persist into the second half of 2022. The limited availability of new chips has created delivery challenges for many industries, including automakers and equipment vendors.

Advanced Monitoring Creates Optimization Opportunities

The component shortages in the semiconductor sector may make it harder to procure new hardware and maintain equipment refresh cycles. But for cloud providers, that could be an opportunity as well as a challenge. An IDG Insight survey of 400 IT leaders found that 44% planned to shift application processing requirements to the cloud as a way to mitigate supply chain disruptions.

Hyperscale providers like AWS, Microsoft and Google run some of the largest and most sophisticated data centers on the planet, with sensors monitoring energy use, temperature and many other variables in their data halls, as well as tracking the inventory of equipment. As a result, they can conduct detailed analyses of server reliability and useful life, and use that feedback in the design of their equipment and facilities.
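
As a rough illustration of the kind of reliability analysis this telemetry enables, the hypothetical sketch below computes an annualized failure rate (AFR), failures divided by total device-years in service, from fleet inventory records. The record format and numbers are invented for the example; they don’t reflect any provider’s actual data or tooling.

```python
# Hypothetical sketch: estimating annualized failure rate (AFR) from
# fleet inventory records. AFR = failures observed / total device-years.

from dataclasses import dataclass

@dataclass
class ServerRecord:
    service_years: float  # time the unit has spent in service
    failed: bool          # whether the unit failed during that time

def annualized_failure_rate(fleet: list[ServerRecord]) -> float:
    device_years = sum(s.service_years for s in fleet)
    failures = sum(1 for s in fleet if s.failed)
    return failures / device_years

# Toy fleet: three units, one failure across 7.0 device-years.
fleet = [
    ServerRecord(3.0, False),
    ServerRecord(2.5, True),
    ServerRecord(1.5, False),
]
print(f"AFR: {annualized_failure_rate(fleet):.1%}")  # 14.3% per device-year
```

Tracked across hardware generations and operating conditions, a metric like this is what lets an operator justify stretching a three-year refresh assumption to four or five years.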

This has helped cloud operators improve their energy efficiency by running data centers at warmer temperatures. In the early days of the IT industry, data centers often resembled meat lockers due to the belief that cooler temperatures extended the life of servers and storage units.

Microsoft was among the pioneers in dispelling this belief. In 2008, Microsoft’s data center team ran a rack of servers in a tent in its equipment yard for eight months with no equipment failures. Intel and Google found similar results.

That prompted hyperscale operators to raise the temperature in their server rooms above the standard range of 68 to 72 degrees Fahrenheit. Google began operating in the 80s, and Microsoft said it runs data halls in its Dublin facility at up to 95 degrees.

Microsoft now plans to operate its data centers at even warmer temperatures, enabling it to dramatically reduce its water use and environmental impact. The company isn’t announcing a target temperature range, saying it is “continuing research to determine what the final set point will be, before additional evaporative cooling is needed to preserve server reliability.”
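
The water savings follow from simple control logic: evaporative cooling, which consumes water, only needs to run when outside air alone can’t hold the data hall at its target temperature, so a higher setpoint means fewer water-consuming hours. The sketch below is a toy model with an invented setpoint and readings, not Microsoft’s actual cooling controls.

```python
# Toy model: how a warmer setpoint reduces evaporative (water-consuming) cooling.
# Setpoint and temperature readings are invented for illustration.

def needs_evaporative_cooling(outside_air_f: float, setpoint_f: float) -> bool:
    # Below the setpoint, outside air alone can cool the hall ("free cooling");
    # above it, evaporative cooling kicks in and starts consuming water.
    return outside_air_f > setpoint_f

hourly_temps = [70, 82, 88, 96, 91, 78]  # made-up readings for a warm day

for setpoint in (80.0, 90.0):
    hours = sum(needs_evaporative_cooling(t, setpoint) for t in hourly_temps)
    print(f"Setpoint {setpoint:.0f}F: {hours} hours of evaporative cooling")
# Setpoint 80F: 4 hours; setpoint 90F: 2 hours
```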

Microsoft is also researching other ways to extend the life of servers, including operating them in sealed nitrogen environments. In 2020, Microsoft said servers housed for two years in a sealed, nitrogen-filled pod on the ocean floor were “substantially more reliable” than those in traditional data centers, suggesting a strategy that could improve the economics of land-based data centers.

A nitrogen atmosphere is less corrosive than oxygen, and Microsoft says the absence of people to bump and jostle components may also help explain the difference. The findings could guide future designs for server deployments where a nitrogen environment could be used, such as unstaffed edge data centers or facilities using robotics to manage IT equipment.

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
