Navigating the Future: Upgrading Networks in Data Centers for 400G  

April 26, 2024
Nicholas Cole, Data Center Solution Manager at EXFO, explains why the journey towards 400G and beyond is not merely about keeping pace but also about ensuring that every step forward is underpinned by reliable, efficient and forward-looking testing practices.

The journey toward 400G and beyond  

The surge in data creation spurred by AI, 5G and connected devices is exponentially increasing the complexity and volume of data processed by data centers. According to McKinsey, around 463 exabytes (i.e., 463,000,000 terabytes) of data will be produced globally every day in 2025. An estimated 19 billion devices are connected to the internet, a figure expected to skyrocket to 135 billion by 2030.

This enormous volume of data equates to enormously fast rates of transmission. As transmission rates climb through 400G and the impending 800G toward 1.6T, data centers are pressed to adapt and upgrade their infrastructure, quickly yet reliably. However, this transition is not just about swapping out old equipment for new; it involves a meticulous compatibility and interoperability process to ensure that all components work seamlessly together. The need for speed goes hand in hand with the need for testing and monitoring solutions that can adapt or scale with upcoming waves of technology upgrades.

The key takeaway here is that the journey towards 400G and beyond is not merely about keeping pace but also about ensuring that every step forward is underpinned by reliable, efficient and forward-looking testing practices.

Compatibility and interoperability: the core challenges 

One of the primary challenges in upgrading data centers is ensuring that all components are compatible and can operate in harmony. This is where the testing of fiber optic cabling and transceivers becomes critical. Different manufacturers may adhere to different standards, and without proper testing, this can lead to compatibility issues. For instance, a transceiver from one vendor might not work as expected with a switch from another, leading to data transmission failures or suboptimal performance. 
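
To make this concrete, here is a minimal sketch of the kind of pre-commissioning check a test tool performs: reading a transceiver's digital diagnostics (DDM/DOM) and flagging any parameter outside its acceptance window. The threshold values and field names are illustrative assumptions, not vendor specifications; in practice, the limits come from the module datasheet and the readings from its management interface (e.g., SFF-8636 or CMIS fields).

```python
# Minimal sketch: screening a transceiver's digital diagnostics (DDM/DOM)
# before commissioning a link. All thresholds below are assumed,
# illustrative values; real limits come from the module datasheet.

from dataclasses import dataclass


@dataclass
class DdmReading:
    rx_power_dbm: float   # received optical power
    tx_power_dbm: float   # transmitted optical power
    temperature_c: float  # module temperature


# Hypothetical acceptance window for a 400G module (assumed values).
LIMITS = {
    "rx_power_dbm": (-6.0, 3.5),
    "tx_power_dbm": (-3.3, 4.0),
    "temperature_c": (0.0, 70.0),
}


def check_module(reading: DdmReading) -> list[str]:
    """Return the list of out-of-range parameters (empty list = pass)."""
    failures = []
    for field, (low, high) in LIMITS.items():
        value = getattr(reading, field)
        if not low <= value <= high:
            failures.append(f"{field}={value} outside [{low}, {high}]")
    return failures


if __name__ == "__main__":
    reading = DdmReading(rx_power_dbm=-7.2, tx_power_dbm=1.1, temperature_c=41.0)
    failures = check_module(reading)
    print("PASS" if not failures else f"FAIL: {failures}")
```

A check like this, run against both ends of a link, catches a marginal module before it causes the kind of intermittent failures that are costly to chase after commissioning.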

Furthermore, as data centers upgrade to 400G and beyond, they often integrate new technologies with existing systems. This integration can be fraught with challenges, as legacy systems might not be equipped to handle the speeds or signal types of newer technologies. Testing ensures that new components can interoperate with old ones without degrading performance, and it identifies issues before the network is commissioned, saving time and cost.

 Testing in a fiber-dense future to accommodate AI/ML 

As data centers gear up to accommodate higher transmission rates and AI/ML, the depth and density of fiber within these facilities are increasing tremendously. This shift also transforms the role of testing, from a nice-to-have step to a critical requirement for guaranteeing network uptime.

The challenge extends to technicians, who must manage, store and share the vast quantities of test results generated while connecting the various components within and between data centers. The ability to quickly access measurement data is paramount during upgrades and fault resolution, facilitating informed decision-making and promoting a smarter approach to network construction and management.
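
As an illustration, below is a minimal sketch of how such results might be structured for centralized storage, assuming a generic JSON export; the field names are illustrative, not any particular vendor's schema.

```python
# Minimal sketch: tagging each fiber test result with the metadata needed
# to retrieve it later (site, panel, port, test type), then serializing
# for upload. Field names are illustrative assumptions, not a vendor schema.

import json
from datetime import datetime, timezone


def make_record(site, panel, port, test_type, result, measured_db):
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "site": site,
        "panel": panel,
        "port": port,
        "test_type": test_type,   # e.g., "OTDR", "OLTS", "inspection"
        "result": result,         # "PASS" or "FAIL"
        "measured_db": measured_db,
    }


records = [
    make_record("DC-EAST", "ODF-03", 12, "OLTS", "PASS", 1.02),
    make_record("DC-EAST", "ODF-03", 13, "OLTS", "FAIL", 2.41),
]

# One structured document per job keeps every result searchable by any field.
print(json.dumps(records, indent=2))
```

However the records are ultimately stored, the principle is the same: results tagged consistently at capture time can be found instantly when a fault needs tracing months later.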

Testing efficiency: best practices and solutions  

Connected test systems and cloud-based platforms are unlocking new levels of efficiency for testing in data center environments. These innovations offer real-time performance data and analytics, enabling network executives and managers to gain unprecedented visibility into, and control over, construction and upgrade projects, track job progress across field teams and accelerate the process without compromising quality.

For technicians, versatile and easy-to-use test tools (e.g., high-speed testers, OTDRs, OLTS, inspection scopes) enable them to test both physical cabling and network transmission using automatically pre-configured settings that yield clear pass/fail results. A connected test ecosystem provides the capability to centralize and organize those results via the cloud. Altogether, this approach facilitates first-time-right system acceptance in alignment with rigorous industry standards such as those set by IEC, TIA and IEEE.  
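
For example, the pass/fail verdict for cabling certification typically rests on a channel loss budget. The sketch below uses the commonly cited TIA-568-style allowances of 0.75 dB per mated connector pair and 0.3 dB per splice; the default attenuation coefficient and the example values are assumptions, to be replaced by the figures in the standard edition and application actually in force.

```python
# Minimal sketch: automated pass/fail check of measured insertion loss
# against a TIA-568-style channel loss budget. The per-component
# allowances are commonly cited maxima; verify against the applicable
# standard. The 0.4 dB/km default is an assumed single-mode coefficient.

def loss_budget_db(length_km: float, connectors: int, splices: int,
                   fiber_atten_db_per_km: float = 0.4) -> float:
    """Maximum allowed insertion loss for the channel, in dB."""
    return (length_km * fiber_atten_db_per_km
            + connectors * 0.75   # per mated connector pair
            + splices * 0.3)      # per splice


def check_link(measured_loss_db: float, length_km: float,
               connectors: int, splices: int) -> str:
    budget = loss_budget_db(length_km, connectors, splices)
    verdict = "PASS" if measured_loss_db <= budget else "FAIL"
    return f"{verdict}: measured {measured_loss_db:.2f} dB vs budget {budget:.2f} dB"


# Example: a 150 m link with two mated connector pairs and no splices.
print(check_link(measured_loss_db=1.10, length_km=0.150, connectors=2, splices=0))
```

Automating this verdict in the tester itself is what turns acceptance testing from an expert judgment call into a repeatable, first-time-right step.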

This foresight in connectivity, design and test capabilities future-proofs test equipment, ensuring that data centers are well equipped to handle the transition to higher transmission rates.

Conclusion 

The upgrade of data centers to higher speeds like 400G and beyond is a complex and critical endeavor that underpins the digital capabilities of modern enterprises. Testing these components for compatibility and interoperability is not just a technical necessity; it is an economic and operational imperative. Incompatible or underperforming components can lead to downtime, which can be extremely costly for enterprises. By ensuring compatibility and interoperability through testing, organizations can avoid these costs and ensure that their data center upgrades lead to the intended improvements in performance and capacity whilst reducing waste. 

About the Author

Nicholas Cole

Nicholas Cole is Data Center Solution Manager for EXFO. He is a member of the British Standards Institution for fiber optics and works with fiber optic test equipment and cabling systems in a variety of roles across sales, marketing, and product management. Contact EXFO to learn more about their testing, monitoring, and analytics services. 

 
