As it embarks upon a massive expansion of its cloud computing infrastructure, Microsoft has once again retooled its data center design, replacing its outdoor server modules with a more traditional facility. The new design, known as Generation 5, has been implemented in the company’s newest data center campus in Quincy, Washington.
The redesign arrives as Microsoft’s cloud computing business is experiencing rapid growth. The company is both building and buying, with active construction at company-built sites in Iowa, Washington and southern Virginia. Microsoft is also leasing enormous amounts of wholesale data center space in Northern Virginia and Silicon Valley.
The company is also introducing new technology inside its servers. This week Microsoft revealed that it is using servers accelerated by FPGAs (Field Programmable Gate Arrays) to power its Azure cloud.
Continued Infrastructure Innovation
Generation 5 continues Microsoft’s ongoing refinement of its data center design, with regular updates to include the latest innovations and best practices. The new facility has a concrete slab rather than a raised floor, and servers are housed in rows of racks connected to a hot-aisle containment system. A fan wall on one side of the facility manages airflow into the data hall, which functions as a room-level cold aisle – although “cold” is a relative term, as Microsoft can operate its data centers with server inlet temperatures above 80 degrees Fahrenheit.
The result is a design that takes even more cost out of the process of building data center capacity, and can be deployed quickly, as racks that are pre-populated with servers can roll off the loading dock and onto the data floor.
“It’s the culmination of all the generations that have come before,” said Christian Belady, General Manager of Cloud Infrastructure Strategy and Architecture for Microsoft Corp., who reviewed the design in a new video (registration required). “We’ve learned something each step of the way and we’ve adapted it and brought it into this design.”
In recent years Microsoft has deployed servers in enclosures called IT-PACs, which can be housed outdoors on a slab, eliminating the need for a building to protect the modules.
Updates to Cooling Design
The Generation 5 design also features a change in Microsoft’s approach to cooling, adopting cooling towers and a water-side economization system that uses outside air to cool water.
“We actually have a closed loop,” said Belady. “We don’t have to use potable water, which was an issue, particularly in a drier climate. It’s a step towards better sustainability. We’re always striving to do what’s best for the environment.”
The closed liquid loop connects to a heat exchanger in the data center, which sits behind the fan wall. The video doesn’t provide details on the type of heat exchanger Microsoft is using.
The company’s Generation 4 design used air-side economization, with outside air entering the IT-PAC through louvers in the side of the enclosure. Each enclosure is equipped with an adiabatic cooling system, in which air is cooled as it passes through media that can be dampened with a small flow of water.
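For readers unfamiliar with the adiabatic approach, a minimal sketch of the underlying arithmetic follows, using the standard saturation-effectiveness approximation for direct evaporative cooling. The temperatures and effectiveness figure below are illustrative assumptions, not Microsoft’s numbers:

```python
def evaporative_outlet_temp(dry_bulb_f: float, wet_bulb_f: float,
                            effectiveness: float = 0.85) -> float:
    """Approximate supply-air temperature from a direct evaporative
    (adiabatic) cooler: the air approaches the wet-bulb temperature,
    limited by the saturation effectiveness of the wetted media."""
    return dry_bulb_f - effectiveness * (dry_bulb_f - wet_bulb_f)

# Illustrative figures only: on a 95F day with a 65F wet-bulb,
# 85%-effective media delivers air at roughly 69.5F -- comfortably
# under a server inlet limit of 80F or more.
print(evaporative_outlet_temp(95, 65))  # 69.5
```

The practical takeaway is that adiabatic cooling buys the most on hot, dry days, when the gap between dry-bulb and wet-bulb temperatures is widest.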
Belady says the new design will be extremely efficient, with a Power Usage Effectiveness (PUE) of 1.1 or better, a slight improvement over the Generation 4 design, which operates with a PUE of 1.1 to 1.2.
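As a refresher on the metric, PUE is the ratio of total facility power to IT equipment power, so a lower number means less overhead spent on cooling and power distribution. A minimal sketch, using illustrative loads rather than actual Microsoft figures:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power.
    A PUE of 1.0 would mean zero cooling and distribution overhead."""
    return total_facility_kw / it_equipment_kw

# Hypothetical loads: at PUE 1.1, a 10 MW IT load draws about 11 MW
# at the utility meter; at PUE 1.2, the same IT load draws about 12 MW.
print(pue(11_000, 10_000))  # 1.1 (the Generation 5 target)
print(pue(12_000, 10_000))  # 1.2 (the upper end of Generation 4)
```

At these scales, the difference between 1.1 and 1.2 represents megawatts of overhead power per site.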
Microsoft’s Generation 5 reflects several broader trends in data center design, as a number of large service providers have embraced indirect forms of free cooling that take advantage of a cool climate without introducing outside air into the server room. Equinix, RagingWire and Oracle are among those who have made a similar shift in their latest designs.
The use of a fan wall is somewhat unusual, at least in this implementation. The closest equivalent is a Santa Clara, Calif. facility built by Bay Area Internet Solutions, which is now owned by vXchnge. Others use variants on the theme. Facebook uses a large fan wall in its cooling plenum, but doesn’t use fans at the entry of the data hall, as Microsoft does in Generation 5.
Maintaining proper air pressure throughout the data hall is important for Microsoft, which several years ago adopted a fanless server design. Fans inside a server chassis help maintain airflow and keep components cool, but they also use energy; with fanless servers, the facility’s fan wall must sustain the pressure differential that moves air through the equipment.
The unveiling of a new design generation isn’t a complete surprise. Earlier this year Microsoft acknowledged that it had abandoned the use of its IT-PAC modules, saying that it could not manufacture the enclosures fast enough to keep pace with its deployment of new capacity for its cloud offerings, which include the Azure platform and software-as-a-service offerings like Office 365.
Hardware Continues to Evolve
Inside the data center, the servers Microsoft has been deploying in its Azure cloud are based on designs the company created for its own use and contributed to the Open Compute Project in 2014.
“This is what’s powering Microsoft’s cloud,” said Kushagra Vaid, General Manager for Azure Cloud Hardware Engineering, in an interview last year.
This week Microsoft revealed its growing use of FPGAs to accelerate its data flow. The programmable chips will eventually be used throughout Microsoft’s entire IT infrastructure, the company said.
Microsoft doesn’t share specifics about its total server count, but Belady said each data center houses “hundreds of thousands” of servers and can span “20 or 30 football fields” worth of space.
The company has been adding data center capacity at an unprecedented rate, building its own facilities in some regions while leasing wholesale space in others.
On the construction front, Microsoft recently announced a huge data center project in West Des Moines, Iowa, where it has already built two cloud campuses. It also continues to expand in Boydton, Virginia, as well as at the new Generation 5 deployment in Quincy.
Huge Deals in Wholesale
Microsoft has also become the biggest player in the market for wholesale data center space, a model in which companies lease finished data halls from developers. Over the past year, industry sources say Microsoft has signed a series of large deals for wholesale space:
- In late 2015, the company leased an entire 10.4 megawatt data center building in Ashburn, Va.
- In early 2016 Microsoft leased 16 megawatts of space in Santa Clara, Calif., taking down the entire third phase of the DuPont Fabros SC1 facility.
- In early 2016, Microsoft leased a 30 megawatt data center built by CyrusOne in Sterling, Va., which borders Ashburn in the Northern Virginia market.
Data center real estate sources say Microsoft has recently leased additional space in Northern Virginia, and is close to securing a site in the suburban Chicago market.
When you’re building at this kind of scale, design improvements that can save time and energy become even more valuable as they ripple across a larger footprint. A modest improvement in construction cost per megawatt adds up when you’re deploying several hundred megawatts of capacity per year, a pace that Microsoft appears to have attained.
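To make that compounding concrete, here is a hedged back-of-the-envelope calculation. All of the dollar figures and percentages below are hypothetical assumptions chosen only to illustrate the arithmetic; Microsoft does not disclose its build costs:

```python
# Hypothetical back-of-the-envelope: how a modest per-megawatt saving
# scales across a large annual deployment. None of these figures are
# Microsoft's; they exist only to show the shape of the math.
cost_per_mw = 9_000_000   # assumed baseline build cost per MW, USD
improvement = 0.05        # assumed 5% design/construction saving
annual_mw = 300           # "several hundred megawatts" per year

annual_savings = cost_per_mw * improvement * annual_mw
print(f"${annual_savings:,.0f} saved per year")  # $135,000,000
```

Under these assumptions, even a single-digit percentage improvement in the design is worth nine figures annually, which is why hyperscale operators keep iterating on generations of their facilities.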