Trends in Structured Cabling for High Speeds (400G/800G) in AI Data Centers

The growth of Artificial Intelligence (AI) and Machine Learning (ML) workloads has transformed physical data center infrastructure from a passive support layer into a critical performance component. The migration to 400G and 800G networks is no longer a long-term option but an immediate necessity to avoid bottlenecks in High-Performance Computing (HPC) clusters. This shift requires a complete reassessment of structured cabling, prioritizing density, optical efficiency, and migration paths that can support exponential growth in data traffic.

The Impact of AI on Network Architecture and Bandwidth Requirements

Generative AI has changed the traffic profile within data centers. Unlike traditional applications, where "north-south" traffic (user-to-server) was predominant, AI models rely heavily on "east-west" traffic (server-to-server). Thousands of GPUs must communicate almost instantaneously to exchange the parameters of large language models during training.

To sustain this communication, 100G interfaces have become insufficient. The transition to 400G is already well established among hyperscalers, while 800G is beginning to appear in infrastructures seeking leadership in AI processing. The challenge for structured cabling is to maintain signal integrity at such high transmission rates, where excess return loss or attenuation can compromise the training of an entire model.

Evolution of Optical Fibers: Singlemode vs. Multimode

Historically, enterprise data centers used multimode fibers (OM3, OM4, OM5) due to the lower cost of short-range transceivers. However, the 400G and 800G landscape is tipping the scales toward singlemode fiber (OS2).

  • Multimode Fibers (OM4/OM5): Still have a place in very short connections within the rack (ToR - Top of Rack). With SWDM4 (Shortwave Wavelength Division Multiplexing) technology it is possible to reach high speeds, but modal dispersion limits make the cost per gigabit challenging over longer distances (see the reach comparison after this list).
  • Singlemode Fibers (OS2): Have become the gold standard for 400G and above. They offer virtually unlimited bandwidth and are essential for Wavelength Division Multiplexing (WDM) technologies. With the falling cost of singlemode transceivers, migrating to this infrastructure offers a much longer project lifespan.
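
To make the distance trade-off concrete, the sketch below compares typical published reaches for common 400G transceiver classes over multimode and singlemode fiber. The values are illustrative round numbers, not figures from this article; always confirm reach against the specific transceiver datasheet and fiber grade in use.

```python
# Illustrative reach comparison for common 400G transceiver classes.
# Values are typical published reaches (assumptions, not from this article);
# confirm against the actual transceiver datasheet and fiber grade.
REACH_METERS = {
    "400GBASE-SR8 (multimode, OM4)": 100,
    "400GBASE-DR4 (singlemode, OS2)": 500,
    "400GBASE-FR4 (singlemode, OS2)": 2_000,
    "400GBASE-LR4 (singlemode, OS2)": 10_000,
}

for interface, reach_m in REACH_METERS.items():
    print(f"{interface:32s} up to {reach_m:>6,} m")
```

The pattern is what matters here: multimode remains competitive inside the row, while singlemode keeps the same cabling viable across the hall and for future speed steps.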

Connectivity Technologies: MPO/MTP and the Decline of Duplex LC

In 400G and 800G networks, traditional connection via duplex LC connectors can no longer deliver the required channel density. The solution lies in parallel connectivity using MPO (Multi-fiber Push-On) or MTP connectors.

  • MPO-12 and MPO-24: These were the initial parallel-optic standards, but they leave fibers idle in certain 400G configurations (which often use only 8 or 16 fibers).
  • MPO-16 and MPO-32: Emerging as the favorites for the 800G ecosystem. An MPO-16 dedicates eight fibers to transmission and eight to reception, allowing 100% utilization of the fiber infrastructure in 800G applications based on 100G lanes (see the utilization sketch after this list); the MPO-32 doubles that capacity within a single ferrule.
  • Very Small Form Factor (VSFF) Connectors: Connectors such as SN and MDC allow for doubling or tripling port density in patch panels, which is vital when rack space is occupied by power-hungry, high-heat GPU servers.
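
A quick way to see why connector choice matters is to compute fiber utilization: the fibers an application actually lights divided by the fibers the ferrule provides. The sketch below does this for parallel-optic applications that use 8 or 16 fibers; the application-to-connector pairings are illustrative assumptions, not recommendations for any specific product.

```python
# Fiber utilization = fibers used by the application / fibers in the ferrule.
# Application fiber counts below are illustrative assumptions.
CONNECTOR_FIBERS = {"MPO-12": 12, "MPO-16": 16, "MPO-24": 24, "MPO-32": 32}
APPLICATION_FIBERS = {
    "400G-DR4 (4 x 100G lanes)": 8,    # 4 Tx + 4 Rx fibers
    "800G-DR8 (8 x 100G lanes)": 16,   # 8 Tx + 8 Rx fibers
}

for app, fibers_used in APPLICATION_FIBERS.items():
    for connector, fibers_total in CONNECTOR_FIBERS.items():
        if fibers_total < fibers_used:
            continue  # ferrule too small for this application
        utilization = fibers_used / fibers_total
        print(f"{app} over {connector}: {utilization:.0%} of fibers lit")
```

Note that 800G-DR8 lights every fiber of an MPO-16, which is the 100% utilization referred to in the list above.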

Managing Insertion Loss and Tight Link Budgets

One of the biggest technical hurdles for infrastructure engineers working at 800G is the link budget. As speed increases, tolerance for signal loss drops drastically.

In a 10G network, it was possible to have several connection points (Cross-connects) without major issues. In 400G/800G, every MPO connection adds loss that can render the link unviable. Therefore, the current trend is the use of "Ultra Low Loss" (ULL) components. Precision in fiber polishing and rigorous connector cleaning have moved from best practices to mandatory requirements for network survival.
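
As a rough illustration of how quickly the budget is consumed, the sketch below sums connector and fiber losses for a short singlemode channel and compares the remaining headroom with standard versus Ultra Low Loss MPO pairs. The loss values and the 3.0 dB channel budget are assumed typical figures, not numbers from this article or any specific standard; use the values from your transceiver and component datasheets.

```python
# Sketch of a channel insertion-loss budget (all figures are assumptions).
CHANNEL_BUDGET_DB = 3.0        # assumed maximum channel insertion loss
FIBER_LOSS_DB_PER_KM = 0.4     # typical OS2 attenuation around 1310 nm
MPO_LOSS_DB = {"standard": 0.35, "ultra_low_loss": 0.15}  # per mated pair

def channel_loss(length_m: float, mated_pairs: int, grade: str) -> float:
    """Total insertion loss for a simple fiber-plus-connectors channel."""
    fiber_loss = (length_m / 1000.0) * FIBER_LOSS_DB_PER_KM
    connector_loss = mated_pairs * MPO_LOSS_DB[grade]
    return fiber_loss + connector_loss

for grade in MPO_LOSS_DB:
    loss = channel_loss(length_m=150, mated_pairs=6, grade=grade)
    headroom = CHANNEL_BUDGET_DB - loss
    print(f"{grade:15s} loss = {loss:.2f} dB, headroom = {headroom:.2f} dB")
```

In this example the standard-grade channel still closes, but with less than 1 dB of headroom there is little margin for dirty end-faces or ageing, which is why ULL components and rigorous cleaning have become mandatory.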

Cooling and the Role of Cabling in Thermal Efficiency

It may seem counterintuitive, but structured cabling design directly influences a data center's PUE (Power Usage Effectiveness). AI clusters generate unprecedented heat. Bulky, poorly organized cables block airflow in cold aisles or server exhausts.

The trend is toward using reduced-diameter cables and patch panel systems that allow for cleaner organization. Furthermore, with the adoption of Liquid Cooling for GPUs, cabling must be planned so as not to interfere with coolant piping and manifolds within the rack.
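
PUE is simply total facility power divided by IT load, so anything that lets the cooling plant work less, including better airflow from tidier cabling, shows up directly in the ratio. The figures below are invented purely to illustrate the arithmetic.

```python
# PUE = total facility power / IT equipment power.
# The numbers below are invented to illustrate the arithmetic only.
def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    return (it_kw + cooling_kw + other_kw) / it_kw

it_load_kw = 1_000.0
before = pue(it_load_kw, cooling_kw=450.0, other_kw=100.0)  # congested airflow
after = pue(it_load_kw, cooling_kw=380.0, other_kw=100.0)   # improved airflow
print(f"PUE before: {before:.2f}  after: {after:.2f}")
```

Even a modest reduction in cooling power moves the ratio noticeably, which is why airflow-friendly cabling appears in PUE discussions at all.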

Migration Planning: Protecting the Investment

For the infrastructure managers in DCW Brasil's specialized audience, planning must focus on modularity. Installing an infrastructure that supports 400G today but allows an upgrade to 800G or 1.6T simply by swapping transceivers and patch cords is the key to a sustainable ROI.

This involves:

  • Adopting High Fiber Count Backbones: Installing trunk cables with hundreds of singlemode fibers (a capacity-planning sketch follows this list).
  • Modular Patching Systems: Using cassettes that can be replaced without disturbing the trunk cabling.
  • Real-Time Monitoring: Implementing Data Center Infrastructure Management (DCIM) with intelligence at the physical layer to identify connection failures instantly.
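
As a simple illustration of the high fiber count point, the sketch below estimates how many parallel-optic links a trunk can carry today at 400G and after an upgrade to 800G, assuming 8 and 16 fibers per link respectively. The trunk size and per-link fiber counts are assumptions chosen for the example.

```python
# Rough trunk capacity planning (fiber counts per link are assumptions).
TRUNK_FIBERS = 288                       # example high-fiber-count backbone
FIBERS_PER_LINK = {"400G-DR4": 8, "800G-DR8": 16}

for generation, fibers in FIBERS_PER_LINK.items():
    links = TRUNK_FIBERS // fibers
    print(f"{generation}: up to {links} links over a {TRUNK_FIBERS}-fiber trunk")
```

The same trunk serves both generations; only the cassettes, patch cords, and transceivers change, which is exactly the migration path described above.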

FAQ - Frequently Asked Questions

1. What is the main difference between MPO-12 and MPO-16 connectors for 800G networks?

The MPO-12 connector uses 12 fibers and is common in 40G and 100G networks. The MPO-16 was designed for newer high-speed generations such as 400G and 800G: its 16 fibers map directly onto eight 100G lanes (eight fibers transmitting, eight receiving), eliminating idle fibers and optimizing optical infrastructure use in parallel transmission systems.

2. Why is singlemode fiber replacing multimode in new AI Data Center projects?

While multimode fiber is cheaper for short distances, singlemode fiber (OS2) offers significantly higher bandwidth and lower attenuation, which is crucial for 400G and 800G interfaces. Additionally, singlemode technology supports longer distances and future evolutions (such as 1.6T) more efficiently than multimode.

3. What are VSFF connectors and how do they help in mission-critical environments?

VSFF stands for Very Small Form Factor. Examples include SN and MDC connectors. They are much smaller than the traditional duplex LC connector, allowing for significantly higher port density in panels. This is essential in dense AI data centers where rack space is scarce and the need for optical connections is massive.

4. How does structured cabling impact Data Center Power Usage Effectiveness (PUE)?

Poorly planned cabling creates physical barriers that prevent proper air circulation for equipment cooling. By using reduced-diameter cables and organized management systems, airflow is optimized, reducing the load on cooling systems and, consequently, improving the site's energy efficiency rating.

5. What are the primary technical standards for 400G/800G cabling design?

Projects should follow international standards such as ISO/IEC 11801, together with the IEEE specifications 802.3bs (200G/400G Ethernet), 802.3ck (100G-per-lane electrical interfaces), and 802.3df (800G Ethernet). In Brazil, ABNT NBR 14565 is the reference for structured cabling systems in commercial buildings and data centers.