Intel: It’s Time to Rethink the Data Center

May 2, 2018
At DCD Enterprise, Jason Waxman outlined Intel’s vision for a software-defined “composable” infrastructure that can optimize hardware resources for new workloads.

NEW YORK – Next-generation technologies are about to change the world, and will change the data center in the process, according to Intel’s Jason Waxman.

“We’re on the precipice of another transformation in the data center,” said Waxman, the VP and General Manager of the Cloud Platforms Group at Intel. “The amount of data we will see generated in the next two years will be 10 times the data that has been generated so far. It’s time to rethink the data center.”

Waxman was the keynote speaker at the DCD Enterprise conference, which kicked off Tuesday at the Marriott Marquis in New York. He outlined Intel’s vision for a software-defined “composable” infrastructure that can optimize hardware resources for new workloads.

“We have to move to a rack-scale infrastructure composed of pools of different resources,” said Waxman. “This is a fundamental shift.”

Abuzz About the Edge

The big themes at DCD Enterprise were the rise of edge computing and the evolution of the corporate data center to support hybrid cloud infrastructure. A pre-conference was dedicated to edge computing, but the topic infused many of the enterprise sessions as well.

The thread running through nearly all the discussions was the ongoing disruption of the data center industry by the digital transformation.

There’s plenty more change on the horizon, Waxman said. Intel has outlined a “data center first” world view, predicting that the growth of cloud computing will be accelerated by new technologies like artificial intelligence (AI), the Internet of Things, virtual reality, drones, robots and autonomous vehicles.

Waxman cited three themes that will drive these trends:

  • The continuing shift of IT workloads to the cloud
  • The impending arrival of 5G networks and ubiquitous wireless
  • The growth of AI and analytics, as businesses seek to extract business value from their data

Waxman reaffirmed his belief that by 2025, 80 percent of all workloads will run in hyperscale data centers. “In the future, you will either be computing at scale, or paying someone else to compute at scale,” he said.

Intel’s response to this trend is an emphasis on rack scale design, an architecture that disaggregates compute, storage, and network resources, and uses orchestration software to pool and manage these resources.

A Disaggregated, Composable Future

Waxman said this approach will create a more flexible infrastructure, allowing users to assign resources “on the fly.” Each rack will contain building blocks of servers, storage nodes and even hardware accelerators like GPUs and FPGAs.

“We need to move to a world where orchestration layers can look at the resources in the rack, and compose the virtual machines that are needed,” said Waxman.
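The composable model Waxman describes can be sketched in code. The snippet below is an illustrative toy, not Intel's actual rack-scale orchestration software: it models a rack as disaggregated pools of compute, storage, and accelerator resources, with a `compose_vm` step standing in for the orchestration layer that carves out a virtual machine on the fly. All class and method names here are invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class Rack:
    """A rack modeled as free capacity in disaggregated resource pools."""
    cpu_cores: int = 256
    storage_tb: int = 100
    gpus: int = 8
    fpgas: int = 4

    def compose_vm(self, cores: int, tb: int, gpus: int = 0, fpgas: int = 0):
        """Allocate a VM from the pooled resources, or fail if the rack can't fit it."""
        if (cores > self.cpu_cores or tb > self.storage_tb
                or gpus > self.gpus or fpgas > self.fpgas):
            raise RuntimeError("insufficient pooled resources in this rack")
        # Deduct the allocation from each pool independently -- the point of
        # disaggregation is that these pools are not tied to one motherboard.
        self.cpu_cores -= cores
        self.storage_tb -= tb
        self.gpus -= gpus
        self.fpgas -= fpgas
        return {"cores": cores, "tb": tb, "gpus": gpus, "fpgas": fpgas}


rack = Rack()
vm = rack.compose_vm(cores=16, tb=4, gpus=1)  # e.g., an AI training instance
print(vm)
print(rack.cpu_cores)  # pooled cores remaining for the next workload
```

The design point the sketch illustrates: because each resource is a separate pool, a GPU-heavy VM and a storage-heavy VM can be composed from the same rack without either one stranding the other's resources.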

The “New York State of Mind” was embraced by Motivair with its Statue of Liberty themed cooling door at DCD Enterprise. (Photo: Rich Miller)

Intel began laying out its vision for rack scale design back in 2013, arguing for an approach that breaks apart the common elements of servers and organizes them in pools, rather than leaving them on the same motherboard. This disaggregated approach is made possible by advances in networking (including Intel’s silicon photonics).

Waxman said that a shift to rack-scale design will include pools of accelerators, a class of hardware that provides a performance bump to CPUs in the compute-intense processing tasks common in artificial intelligence and high performance computing (HPC). The primary beneficiary of the rise of accelerators has been NVIDIA, whose GPUs are optimized for parallel processing. But Intel sees an opportunity to play in this space with its FPGA offerings based on its acquisition of Altera.

He also predicted that smart networking fabrics will be critical in tying together resources across large data centers and larger campuses.

“Our ability to deploy hyperscale data centers efficiently will determine the course of the data center, and how many we end up needing,” said Waxman.

The Enterprise Data Center is Alive and Well

The DCD New York event has traditionally focused on enterprise computing, which now means sorting out the role of enterprise workloads in the age of cloud.

In a plenary talk, data center thought leader Mark Thiele noted that there are many reasons that enterprise users continue to own their own data centers. These include:

  • The need to keep a close eye on your most critical assets.
  • Protecting assets and data from partners.
  • For some companies, data centers are seen as core to their business.
  • Some companies feel they can run data centers better than service providers.

In many cases, ownership may not mean building a new on-premises data center, but moving assets to a dedicated environment in a colocation facility. Thiele outlined this trend, which will likely support colocation growth for years to come, in a recent LinkedIn post.

Thiele has long been an advocate of cloud computing, but said that companies that decide to own should be guided by a “living strategy” linking capacity to corporate objectives.

“All of us love the next shiny thing, but the shiny thing must follow the ‘why’ discussion for your enterprise,” said Thiele.

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
