High Speed Migration: What You Need to Know to Keep Up

May 30, 2017

In this week’s Voices of the Industry, James Young, global enterprise data center lead at CommScope, shares the ins and outs of high speed migration and what your business needs to know to keep up with fast-paced changes. 

James Young, global enterprise data center lead at CommScope

A dramatic shift is occurring in the enterprise data center that affects how information across the organization is stored, accessed and retrieved. It’s the result of a perfect storm. The volume of data streaming into the data center has been growing exponentially for decades. Bandwidth requirements are expected to continue growing 25 percent to 35 percent per year. At the same time, demands for lower latency continue to escalate.
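
To put those growth rates in perspective, here is a minimal compounding sketch in Python. The 10 Tbps starting point is a hypothetical figure chosen purely for illustration; the 25 and 35 percent rates are the ranges cited above.

    # Illustrative only: compound growth of aggregate bandwidth demand.
    # The 10 Tbps starting point is a hypothetical assumption, not a figure
    # from this article; the growth rates are the 25-35 percent range above.
    START_TBPS = 10.0

    for rate in (0.25, 0.35):
        demand = START_TBPS
        print(f"Growth at {rate:.0%} per year:")
        for year in range(1, 6):
            demand *= 1 + rate
            print(f"  year {year}: {demand:.1f} Tbps")

At 25 percent per year, demand roughly triples in five years; at 35 percent, it more than quadruples.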

In response, the design of services and applications—and how they are delivered—is rapidly evolving. Instead of a single dedicated server, information requests coming into the data center are now fulfilled by multiple servers cooperating in parallel. The trend to cooperative parallel processing has been building for years and appears to have reached a tipping point in most data centers. The change is forcing a physical redesign of the data center’s network architecture.

Fig. 1: Traditional three-tier design; Fig. 2: Spine-and-leaf design

The traditional three-tier network is quickly being replaced by spine-and-leaf networks. Large amounts of data traffic have shifted away from the network core and are now carried by the spine-leaf switches. This any-to-any connectivity requires more speed and density, and it is becoming increasingly complex.
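
To see why the fabric drives fiber counts and density up, consider a minimal leaf-spine sizing sketch. The port counts, uplink speeds and leaf count below are assumptions for illustration, not figures from this article.

    # Minimal leaf-spine sizing sketch (all values are illustrative assumptions).
    SERVER_PORTS_PER_LEAF = 48   # assumed 25G server-facing ports per leaf
    UPLINKS_PER_LEAF = 6         # assumed 100G uplinks per leaf
    LEAVES = 20                  # assumed number of leaf switches

    spines = UPLINKS_PER_LEAF                 # one uplink from each leaf to each spine
    fabric_links = LEAVES * UPLINKS_PER_LEAF  # every leaf connects to every spine
    oversub = (SERVER_PORTS_PER_LEAF * 25) / (UPLINKS_PER_LEAF * 100)

    print(f"spine switches:      {spines}")
    print(f"leaf-to-spine links: {fabric_links}")
    print(f"oversubscription:    {oversub:.1f}:1")

Under those assumed numbers, the fabric alone needs 120 leaf-to-spine links, and every one of them is another trunk through the structured cabling plant. That is how any-to-any connectivity translates directly into more fiber and more density.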

As a result, the physical infrastructure must be able to support higher link speeds and greater fiber density while enabling quick and easy migration to new, more demanding applications. This is the core challenge facing data center teams today.

More solutions, support and decisions

These changes in design and infrastructure have spawned the development of new technologies designed to transmit more data using fewer resources. These include advances in optical modulation schemes, smaller and more efficient connectors, and ultra-low-loss fiber.

Industry standards are quickly evolving as well. As many data centers start to transition from 10G links in the switch layer to 25G and 40G, the standards bodies are already designing specifications for 400G links. Higher speed links drive component costs higher, making high speed migration a business challenge as well as a technical one.

Current progression of Ethernet speeds

Determining the best path forward involves a number of critical decisions; the following are some of the more important ones.

40G or 25G lanes?

Until recently, the accepted migration road map outlined a progression from 10G lanes to 40G to 100G. The approval of the IEEE 802.3by standard is a clear sign that the industry has shifted to 25G. This is largely because the newer 25G lanes, which offer an easy migration to 50G (2x25G) and 100G (4x25G), make better use of current switch hardware, which is built on 25G lanes. The extreme demand for more data is driving some data centers to jump directly from 10G to 100G switch connections.
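
The arithmetic behind these choices is simply link speed equals lane speed times lane count. A quick sketch of the combinations mentioned above:

    # Link speed = lane speed x number of lanes.
    # These pairings reflect the common Ethernet options discussed above.
    options = [
        ("40G  (4 x 10G lanes)", 10, 4),
        ("50G  (2 x 25G lanes)", 25, 2),
        ("100G (4 x 25G lanes)", 25, 4),
    ]

    for name, lane_gbps, lanes in options:
        print(f"{name}: {lane_gbps * lanes}G total")

A 25G lane reaches 100G with four lanes, while the older 10G lane technology needs ten, which is one reason 25G lanes map more efficiently onto current switch silicon.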

Modulation schemes

New, more efficient modulation schemes are now available to help IT managers increase capacity and bandwidth per fiber. Pulse-amplitude modulation with four amplitude levels (PAM-4) is among the most promising modulation schemes for 100G and beyond. Compared to traditional binary modulation, PAM-4 enables twice the transmission capacity for the same signaling rate. Its simplicity and low power consumption can more than offset the need for higher link budgets.
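
The doubling comes straight from bits per symbol: with four amplitude levels, each symbol carries log2(4) = 2 bits instead of 1. A small illustrative calculation, assuming a 26.5625 GBd signaling rate (a rate typical of 25G-class lanes, used here only as an example):

    import math

    # Data rate = signaling rate (baud) x bits per symbol,
    # where bits per symbol = log2(number of amplitude levels).
    BAUD = 26.5625e9  # assumed signaling rate, typical of 25G-class lanes

    for name, levels in (("NRZ (binary)", 2), ("PAM-4", 4)):
        bits_per_symbol = math.log2(levels)
        print(f"{name}: {BAUD * bits_per_symbol / 1e9:.1f} Gbps per lane")

The same symbol rate delivers about 53 Gbps per lane with PAM-4 versus about 26.6 Gbps with binary modulation.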

Meanwhile, early adoption of shortwave division multiplexing (SWDM) is enabling data center teams to increase the number of lanes, or wavelengths, transmitted on each fiber. SWDM supports higher link speeds while reducing the number of overall fibers needed. The result is better ROI for multimode networks.
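
A rough sketch of the fiber savings, assuming four SWDM wavelengths per fiber and 25G per wavelength (both figures are assumptions for illustration):

    # Fibers needed for one 100G link, with and without SWDM.
    # Assumptions: 25G per lane, 4 wavelengths per fiber, duplex operation.
    LINK_GBPS = 100
    LANE_GBPS = 25
    WAVELENGTHS_PER_FIBER = 4  # assumed SWDM-style multiplexing

    lanes = LINK_GBPS // LANE_GBPS
    parallel_fibers = lanes * 2                                # one Tx + one Rx fiber per lane
    fibers_per_direction = -(-lanes // WAVELENGTHS_PER_FIBER)  # ceiling division
    swdm_fibers = fibers_per_direction * 2                     # duplex pair

    print(f"parallel {LINK_GBPS}G: {parallel_fibers} fibers")
    print(f"SWDM     {LINK_GBPS}G: {swdm_fibers} fibers")

Under those assumptions, a 100G link drops from eight fibers to a single duplex pair.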

Duplex or parallel transmission?

In the past, parallel transmission has proven to be the easiest way to support higher lane speeds, including 40G and 100G. Therefore, it is typically first to market when faster speeds are introduced. While duplex solutions may be slower to market, they can reduce overall cost and complexity in the network—eventually making them competitive with parallel technologies. This pattern should hold true for 200G and 400G Ethernet. Parallel technologies should be the early favorite, especially given the trend toward MPO connectors. With emerging modulation schemes like PAM-4 and SWDM, cost-effective duplex applications will play a key role in the near future, as well.
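
For a rough sense of how that trade-off scales, consider a hypothetical row of 48 100G links. The link count and the eight-fiber parallel assumption are illustrative, not figures from this article.

    # Trunk fiber counts for parallel vs duplex optics (illustrative assumptions).
    LINKS = 48            # assumed number of 100G links in a row
    FIBERS_PARALLEL = 8   # e.g., 4 Tx + 4 Rx fibers per parallel link
    FIBERS_DUPLEX = 2     # 1 Tx + 1 Rx fiber per duplex link

    for name, per_link in (("parallel", FIBERS_PARALLEL), ("duplex", FIBERS_DUPLEX)):
        total = LINKS * per_link
        mpo12_trunks = -(-total // 12)  # ceiling: 12-fiber MPO trunks needed
        print(f"{name}: {total} fibers ({mpo12_trunks} x 12-fiber MPO trunks)")

Under those assumptions, the parallel design needs 384 fibers to the row versus 96 for duplex, which is where the cost and complexity savings come from.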

Singlemode, multimode or wideband multimode?

The cost of pluggable singlemode (SM) optics continues to limit the use of singlemode networks in enterprise data centers. At the same time, multimode networks offer an attractive balance of performance, density and cost in the near term, but it’s unclear how much headroom for growth they provide. Wideband multimode fiber (WBMMF), however, may serve as a feasible bridge between multimode and singlemode. Recently approved under ANSI/TIA-492AAAE, WBMMF enhances the ability of SWDM to increase usable bandwidth by at least a factor of four. Backward compatible with OM3 and OM4 fiber, it can help extend the life of legacy multimode applications.

Fiber/bandwidth comparison

Preterminated or field-terminated cables?

As fiber densities increase, this decision is becoming more obvious. By some estimates, preterminated cables translate to a 90 percent time savings versus a field-terminated system and are about 50 percent faster when it comes to network maintenance.[i] Among preterminated solutions, MPO/MTP fiber is fast becoming the de facto choice for single- and multimode connectivity due to its simplicity, speed and high density.
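
As a simple illustration of what those percentages mean on the floor, assume a hypothetical 40-hour field-termination job (the baseline is an assumption, not a figure from the cited source):

    # Illustrative time comparison based on the percentages cited above.
    FIELD_HOURS = 40.0        # hypothetical field-termination baseline
    INSTALL_SAVINGS = 0.90    # "90 percent time savings"
    MAINT_SPEEDUP = 0.50      # "about 50 percent faster" maintenance

    preterm_hours = FIELD_HOURS * (1 - INSTALL_SAVINGS)
    print(f"field-terminated install: {FIELD_HOURS:.0f} hours")
    print(f"preterminated install:    {preterm_hours:.0f} hours")
    print(f"maintenance tasks:        about {MAINT_SPEEDUP:.0%} faster preterminated")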

Keys to success

While each migration path will be different, the successful ones will have common characteristics that enable the ongoing evolution of the data center:

  • Modular, plug-and-play components that enable quick and easy upgrades with minimal redesign and disruption
  • Ultra-low-loss performance that can support the increased signal-to-noise ratio high-speed optics require
  • Automated infrastructure management (AIM) that provides real-time monitoring and control of the physical layer

A closing thought

A successful strategy to manage high speed migration starts with understanding the environment you support. What is the application trajectory of the enterprise? How does that affect the velocity of change and scaling requirements in the data center? What is the total cost of ownership for the various migration scenarios being considered? Deploying an infrastructure that supports a broad variety of future paths is an advantage, given how rapidly network technologies are evolving.


And remember: You don’t have to go it alone. The amount of research and decisions involved can be mind-numbing. There are a variety of knowledgeable resources available—vendors, integrators, consultants. Take advantage of their experience, technical expertise and perspective. Together, you’ll have a better chance of developing a long-term migration strategy designed to keep your data center adaptable, capable and efficient. No matter how fast things change.

James Young is global enterprise data center lead at CommScope. As lead of CommScope’s Global Data Center solutions team, he heads a broad-based group that delivers engineered solutions for data center facilities and data networking systems worldwide.

[i] “PAM-4: A Key Solution for Next-Generation Short-Haul Optical Fiber Links,” NeoPhotonics blog, May 11, 2015.

