SANTA CLARA, Calif. – At Intel’s data center, things are looking up. Literally.
The chipmaker is reaching new heights in data center design, building up instead of out with extra-tall racks that pack more servers into the same space. The racks, which stand more than 9 feet tall, are central to an Intel design that combines extreme density and energy efficiency.
Intel is using data center design to boost the efficiency and economics of its huge internal IT operation, which runs more than 130,000 servers in 59 data centers around the globe. The company is transforming a former semiconductor fabrication plant on its Santa Clara campus into a data center whose efficiency and power can match the largest hyperscale server farms.
The former fab, where hundreds of workers clad in “bunny suits” once created Atom chips, has become a testbed for data center innovation. It may soon be one of the busiest data centers in Silicon Valley, as Intel plans to bring 30 megawatts of compute capacity online over the next five years.
In a data hall inside the fab, Intel houses servers in streamlined custom racks that are 60 rack units (60U) in height. The design boosts the vertical and horizontal server capacity of each rack, allowing Intel to pack up to 43kW of servers in a single rack. To manage that density, Intel uses a hot aisle containment system and evaporative cooling.
Extreme Density Meets Extreme PUE
Intel’s newest data hall operates at a Power Usage Effectiveness (PUE) of 1.06, matching the best facilities from Facebook and Google on the leading barometer of data center energy efficiency. It supports 5 megawatts of compute capacity in just 5,000 square feet of space, a power density of 1,000 watts per square foot.
The facility represents a substantial improvement from the previous generation of Intel’s data centers, which use 25,000 square feet to support the same 5 megawatts.
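For readers who want to check the math, here is a minimal sketch of how those figures relate. The formulas are the standard PUE and power-density definitions; the inputs are the numbers reported above.

```python
# PUE = total facility power / IT equipment power.
it_power_mw = 5.0   # IT load of the new data hall (reported)
pue = 1.06          # reported Power Usage Effectiveness

total_facility_mw = it_power_mw * pue
overhead_kw = (total_facility_mw - it_power_mw) * 1000
print(f"Total facility draw: {total_facility_mw:.2f} MW")  # 5.30 MW
print(f"Non-IT overhead: {overhead_kw:.0f} kW")            # 300 kW

# Power density: the same 5 MW in 5,000 sq ft vs. 25,000 sq ft before.
new_density = it_power_mw * 1e6 / 5_000    # 1,000 W per sq ft
old_density = it_power_mw * 1e6 / 25_000   # 200 W per sq ft
print(f"Density improvement: {new_density / old_density:.0f}x")  # 5x
```

In other words, only about 300 kW of the facility’s total draw goes to cooling, power distribution and other overhead.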
Data Center Frontier had the opportunity to tour the new facility with Shesha Krishnapura, the CTO and Senior Principal Engineer for Intel IT, who oversees the company’s internal data center operations. Ninety percent of Intel’s data center capacity is used to power the company’s silicon design and research operation.
Intel’s new 60U rack design is much taller than a standard server rack, which stands seven feet high and accommodates 42 rack units (42U) of 19-inch rackmount server equipment. The Intel racks work with standard servers, switches and power supplies from leading vendors. Networking switches are located at the bottom of the rack rather than in the top-of-rack position used in traditional designs, since switches would be difficult to maintain atop the taller rack.
While supporting standard 19-inch servers, the racks have a more compact perimeter, resulting in a narrower footprint that also contributes to higher density. This allows Intel to house 30 racks in the space previously occupied by 25 racks.
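Taken together, the vertical and horizontal changes compound. Here is a rough sketch of the resulting gain; the per-rack figures are the ones cited above, but combining them this way is our own back-of-the-envelope arithmetic, not an Intel-published number.

```python
# Vertical gain: 60U racks vs. the standard 42U.
vertical_gain = 60 / 42              # ~1.43x more rack units per rack

# Horizontal gain: 30 narrower racks fit where 25 stood before.
horizontal_gain = 30 / 25            # 1.2x more racks per floor area

total_gain = vertical_gain * horizontal_gain
print(f"Rack units per unit of floor space: ~{total_gain:.2f}x")  # ~1.71x

# At the reported ceiling of 43 kW per rack, a full row of 30 racks draws:
print(f"Max row load: {43 * 30:,} kW")  # 1,290 kW
```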
“We’re not wasting any space,” said Krishnapura.
While the rack designs are cool, the servers need to be kept cool as well. Intel uses several cooling strategies to manage both density and energy efficiency.
Several Approaches to Cooling Efficiency
The 60U rack systems reside in a custom hot-aisle containment enclosure that houses 24 racks on each side, with sliding doors at the ends for access. As the doors open, a blast of hot air hits you in the face, revealing the key to the cooling design: air pressure.
The enclosure has an extra-wide 10-foot hot aisle. At high workloads, the waste heat from the 60U racks moves the air up and through a cooling unit housed in the top of the enclosure. When the load is lower, large fans at the top of the hot aisle are activated to help exhaust the heat.
As air exits the cooling unit, diffusers above the enclosure direct the airflow back around into the room. During our visit, the room-level “cold aisle” temperature was 83 degrees F, while the hot aisle was 108 degrees. Temperatures can range slightly higher or lower.
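Those readings imply a roughly 25-degree rise in air temperature across the servers. As an illustration (our estimate, not an Intel figure), the standard sensible-heat approximation for air suggests how much airflow a fully loaded 43 kW rack would need:

```python
rack_kw = 43.0                  # reported maximum rack load
delta_t_f = 108 - 83            # hot aisle minus cold aisle, deg F

# Sensible-heat approximation for air at sea-level density:
# q (BTU/hr) ~= 1.08 * CFM * delta-T (deg F)
btu_per_hr = rack_kw * 1000 * 3.412
cfm = btu_per_hr / (1.08 * delta_t_f)
print(f"Airflow per rack: ~{cfm:,.0f} CFM")  # ~5,434 CFM
```

The wide temperature spread is the point: the hotter the hot aisle runs relative to the room, the less air the fans have to move.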
The ability to operate at higher room temperatures also allows Intel to run warmer water cooling loops, with supply water for the cooling unit ranging between 78 and 80 degrees F, and return water at 82 to 87 degrees F. That return water goes to a heat exchanger, which is cooled by a grey water loop supported by cooling towers outside the building. Using recycled grey water reduces the use of potable water, an important priority during California’s drought-driven water crisis.
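A similar hedged estimate works for the water side, using the sensible-heat approximation for water and midpoints of the reported temperature ranges (the midpoints are our assumption, not an Intel figure):

```python
hall_kw = 5_000                  # the 5 MW data hall
supply_f, return_f = 79.0, 84.5  # midpoints of the reported ranges (assumed)
delta_t_f = return_f - supply_f

# Sensible-heat approximation for water:
# q (BTU/hr) ~= 500 * GPM * delta-T (deg F)
btu_per_hr = hall_kw * 1000 * 3.412
gpm = btu_per_hr / (500 * delta_t_f)
print(f"Cooling loop flow: ~{gpm:,.0f} GPM")  # ~6,204 GPM
```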
Just down the hall within D2, Intel operates another high-density data center that uses fresh air cooling. The air enters the data center through overhead louvers and is directed by large variable-speed fans. The data hall houses 31,000 servers, including the Intel IT Supercomputer, currently number 81 on the Top500 ranking of the world’s most powerful supercomputers, with 1,296 server nodes in 15 racks.
The server exhaust heat rises and exits the room through a large plenum. On colder days, this warm air can be remixed to control the data hall environment.
Back to the Future in Santa Clara
Innovation in data center cooling is nothing new for Intel. The chipmaker was an early adopter of airflow containment systems.
Krishnapura sees the Santa Clara fab, known as D2, as a tool to unlock the future of Intel’s internal data center and IT operations, which in turn powers the company’s innovation in chip design and computing power. The 690,000 square foot, five-story building is next to the Robert Noyce Building, Intel’s global headquarters.
Fabrication plants are environmentally controlled facilities where semiconductors are produced in “clean rooms,” free of contamination that can occur when tiny particles of hair or skin settle on their circuits. That’s why Intel employees wear the “bunny suits” made famous in the company’s TV ads. When D2 opened in 1989, it was the first fab to use showers to decontaminate workers before they entered the clean room.
Semiconductor production gradually migrated out of Silicon Valley to other parts of the country and the globe. D2 was closed in 2009, and large parts of the facility are empty and silent, with equipment decommissioned and removed. As you walk through the huge building, you pass through the showers, gown rooms and locker rooms once used by hundreds of Intel employees.
Walk up one story, and you encounter the massive electrical infrastructure that once supported the fab operations. It’s why retired semiconductor factories have provided retrofit opportunities for data center companies.
The trend got underway in earnest when newcomer Fortune Data Centers acquired a former Seagate fab in San Jose, converted it into a data center, and landed a lease with a major hyperscale tenant. Other fab retrofits soon followed:
- In 2009, Next Generation Data completed a conversion of a former LG semiconductor factory in Wales into a massive 750,000 square foot data center.
- In 2010, QTS bought a former fab from memory maker Qimonda near Richmond in a bankruptcy auction. The 1 million square foot facility now hosts numerous government and enterprise tenants.
- QTS followed a similar template in its 2013 acquisition of a former fab in Irving, Texas, and has since filled the first phase of its Dallas data center.
At Intel in Santa Clara, the fab retrofit opportunity hits close to home. “So many people worked in these fabs,” said Krishnapura.
As he walks the halls of D2, Krishnapura sees the facility as a growth engine for Intel, which anticipates adding servers at a steady rate to support its R&D.
Case in point: A 30,000 square foot room next to the existing data halls is being prepared for the next phase of data center expansion. It will house a 30 megawatt data center, supporting 1,000 watts per square foot. Intel is working with Silicon Valley Power, the Santa Clara municipal utility, to ensure enough power to support the growth of Intel’s data center.