The Key to Ensuring Uptime and Performance in Data Centers

March 8, 2024
Nicholas Cole, Data Center Solution Manager at EXFO, explains the importance of fiber-optic testing in maintaining data center performance.

Data centers are the backbone of many of today’s critical applications, from autonomous vehicles and remote surgery to financial services and government intelligence operations. They play a key role for applications built on artificial intelligence (AI), augmented reality/virtual reality (AR/VR), and big data, all of which require high bandwidth, low latency, and highly reliable connections.

These data centers in turn rely on their network infrastructure. But how is the infrastructure managed? What happens when performance drops and systems become sluggish? Or worse, when the alarms start to sound and the lights stop blinking? These problems are just a typical day in the life of a data center network technician.  

Challenges in maintaining high-performance data centers

Network infrastructure issues can be broadly grouped into two categories: components and links. Links are, of course, made up of components, but link-level issues can also stem from design choices and the needs of different networking technologies. The key challenge for engineers and technicians is working out how best to arrange and manage network infrastructure to maximize compute capabilities. Let’s look at some common issues, working our way back from the most important asset: the server.

Servers connect to access or leaf switches through copper- or optical-based transceivers and equipment cords. Optical fiber connectivity is often preferred for connections above 10G, and the most common issue here is contamination from human handling. Fiber is made from silica glass, and when a connector endface is dirty, light cannot pass effectively, leading to link failures. Fiber inspection scopes with interchangeable tips are therefore critical for checking connector endface quality on equipment cords and transceivers, which often contain dual fibers (TX/RX).

Optical transceivers and active optical cables (AOCs) can also be a source of failure, especially when component specifications are relaxed to drive volume-based cost reductions. Network testers that verify both electrical and optical properties can weed out problems with the I/O interface, optical TX and RX power, bit-error rate, skew, electrical power consumption, and temperature.
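As a rough illustration of this kind of parameter screening, the sketch below compares measured transceiver values against pass/fail limits. The threshold numbers and field names here are illustrative assumptions, not values from any transceiver specification or test-set API.

```python
# Hypothetical pass/fail screen for transceiver measurements.
# All limit values below are illustrative, not from any real spec sheet.

def screen_transceiver(measured: dict, limits: dict) -> list:
    """Return a list of failed checks; an empty list means pass."""
    failures = []
    if measured["tx_power_dbm"] < limits["min_tx_power_dbm"]:
        failures.append("TX power below minimum")
    if measured["rx_power_dbm"] < limits["min_rx_power_dbm"]:
        failures.append("RX power below receiver sensitivity")
    if measured["bit_error_rate"] > limits["max_ber"]:
        failures.append("bit-error rate above limit")
    if measured["temperature_c"] > limits["max_temp_c"]:
        failures.append("module temperature too high")
    return failures

limits = {"min_tx_power_dbm": -8.4, "min_rx_power_dbm": -10.0,
          "max_ber": 1e-12, "max_temp_c": 70.0}
sample = {"tx_power_dbm": -6.2, "rx_power_dbm": -11.3,
          "bit_error_rate": 5e-13, "temperature_c": 48.0}
print(screen_transceiver(sample, limits))
```

In practice, the limits would come from the transceiver’s MSA specification and the tester would report the measured values directly; the point is simply that each parameter is a hard go/no-go gate.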

As we move up the network into the switching fabric, links become more complex: fiber cabling can pass through multiple patching racks within and between data halls, sometimes spanning multiple buildings. These high-fiber-count cables often follow structured cabling design principles and may be terminated in multi-fiber connectors such as MPO. Transceiver optics here may use parallel fibers to carry high data rates of 100/200/400G across 8 or more fibers. This presents issues with fiber polarity, and with loss and length limitations imposed by those high data rates and modulation schemes.
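The polarity issue comes down to how fiber positions map from one end of an MPO trunk to the other. TIA-568 defines three connectivity methods (Types A, B, and C), and a minimal sketch of their position mappings looks like this; the function and variable names are my own, chosen for illustration.

```python
def mpo_fiber_map(polarity: str, fibers: int = 12) -> list:
    """Map each input fiber position (1-based) to its far-end position
    for TIA-568 Type A, B, or C MPO trunk polarity."""
    positions = list(range(1, fibers + 1))
    if polarity == "A":          # straight-through: 1->1, 2->2, ...
        return positions
    if polarity == "B":          # fully reversed: 1->12, 2->11, ...
        return positions[::-1]
    if polarity == "C":          # pairwise flip: 1<->2, 3<->4, ...
        out = []
        for i in range(0, fibers, 2):
            out += [positions[i + 1], positions[i]]
        return out
    raise ValueError("unknown polarity type: %s" % polarity)

print(mpo_fiber_map("B", 12))
```

An OLTS with polarity mapping effectively verifies that the installed trunk matches the intended mapping end to end, including any patch cords in the path.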

Locating issues on less complex cabling routes within buildings or campus environments can be done with optical loss test sets (OLTS) offering polarity mapping, power meter, and visual fault location functions, with support for simplex, duplex, or MPO connector interfaces. On more complex routes, optical fiber multimeters (OFMs) and optical time domain reflectometers (OTDRs) can identify and locate issues such as bad connections, splices, bends, or breaks.
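Measured loss from an OLTS is judged against a calculated link budget. The standard budget arithmetic simply sums per-kilometer fiber attenuation plus per-connector and per-splice allowances; the sketch below uses illustrative component losses, while real pass/fail limits come from the applicable cabling standard and the application’s loss budget.

```python
def link_loss_budget(length_km: float, fiber_loss_db_per_km: float,
                     n_connectors: int, connector_loss_db: float,
                     n_splices: int, splice_loss_db: float) -> float:
    """Worst-case insertion loss for a fiber link:
    fiber attenuation + mated-connector losses + splice losses."""
    return (length_km * fiber_loss_db_per_km
            + n_connectors * connector_loss_db
            + n_splices * splice_loss_db)

# Illustrative single-mode link: 0.5 km of fiber,
# 4 mated connector pairs, 2 fusion splices.
budget = link_loss_budget(0.5, 0.4, 4, 0.75, 2, 0.3)
print("%.2f dB" % budget)
```

If the OLTS reading exceeds this figure, an OTDR trace can then pinpoint which connector, splice, or fiber section is consuming more than its allowance.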

Data center interconnects (DCI) present more advanced issues as optical and Ethernet technologies grow more complex. Coherent optics have displaced traditional transport systems in many metro and edge applications, which requires technicians to understand the optical line systems used for dense wavelength-division multiplexing (DWDM) across pairs of fibers. Troubleshooting is aided by Ethernet testers, optical spectrum analyzers (OSAs), high-performance OTDRs, and dispersion testers when the optical fiber path is suspect.
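The DWDM wavelengths an OSA displays sit on the ITU-T G.694.1 frequency grid, anchored at 193.1 THz with channels at fixed spacings (for example, 100 GHz or 50 GHz). A small sketch of the grid arithmetic, with the function name my own:

```python
def dwdm_channel_frequency_thz(n: int, grid_ghz: float = 100.0) -> float:
    """ITU-T G.694.1 fixed grid: 193.1 THz anchor plus n grid steps
    (n may be negative for channels below the anchor)."""
    return 193.1 + n * grid_ghz / 1000.0

# Channel 5 steps above the anchor on a 100 GHz grid
print(dwdm_channel_frequency_thz(5))
```

Knowing which grid position a channel should occupy makes it straightforward to spot a mistuned or drifting transmitter on an OSA trace.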

Conclusion

The role of fiber-optic testing in maintaining data center performance cannot be overstated. As reliance on data centers continues to grow, so does the importance of ensuring their reliability and efficiency. By investing in high-quality testing equipment and adhering to a rigorous maintenance protocol, data center operators can ensure uptime, optimal performance, and ultimately, customer satisfaction in an increasingly data-driven world.

About the Author

Nicholas Cole

Nicholas Cole is Data Center Solution Manager for EXFO. He is a member of the British Standards Institution for fiber optics and works with fiber optic test equipment and cabling systems in a variety of roles across sales, marketing, and product management. Contact EXFO to learn more about their testing, monitoring, and analytics services. 
