800G is Coming Sooner Than You Think: Here’s What Data Center Operators Need to Know
By Cindy Ryborz
Published: October 27, 2022
The increased need for home offices, streaming services for games, music and movies, and the rise of data-intensive applications such as machine learning and artificial intelligence (AI) are just a few of the many factors driving bandwidth demand. These developments pose challenges for hyperscalers as well as enterprise and colocation data centers: alongside the demand for more capacity and lower latency, operators must factor climate targets and other sustainability considerations into their buildouts.
While 400G Ethernet optical transceivers are used predominantly in hyperscale data centers, and many enterprise businesses still operate at 40G or 100G, data center connectivity is already moving towards 800G. The question is not whether data center operators need to upgrade to meet the increasing demand for bandwidth, but when and how.
The good news is that with a flexible infrastructure, it is possible to upgrade from 100G to 400G to 800G with surprisingly few changes. Here’s what data center operators need to know.
Network design is becoming increasingly complex
Higher data rates increase the complexity of the available solutions and offerings. For network operators, it can be a challenge to keep track of the options and choose the right technology and network components for their needs. The need for more bandwidth in network expansions often conflicts with a lack of space for additional racks and frames, or with the costs that additional space incurs.
Network equipment suppliers are therefore constantly working on new solutions that enable more density within the same space and keep the network design scalable while remaining as simple as possible.
One way to cut through the complexity is to make more efficient use of existing switch architectures (high-radix ASICs). For example, 32-port switches offer up to 12,800 Gb/s of bandwidth (32 x 400G), and 800G versions reach up to 25,600 Gb/s. These high-speed ports can easily be broken out into smaller bandwidths, enabling more energy-efficient operation while increasing port density (32 x 400G = 128 x 100G).
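As a back-of-the-envelope illustration, the Python sketch below works through the capacity and breakout arithmetic above. The port counts and data rates are the example values from the text, not the specifications of any particular switch.

```python
# Aggregate switch capacity and port density after breakout.
# Example values from the text; real switches vary by ASIC generation.

def aggregate_capacity_gbps(ports: int, port_speed_gbps: int) -> int:
    """Total bandwidth offered by the front-panel ports."""
    return ports * port_speed_gbps

def breakout_port_count(ports: int, port_speed_gbps: int, breakout_speed_gbps: int) -> int:
    """How many lower-speed logical ports a full breakout yields."""
    return ports * (port_speed_gbps // breakout_speed_gbps)

print(aggregate_capacity_gbps(32, 400))    # 12800 Gb/s (32 x 400G)
print(aggregate_capacity_gbps(32, 800))    # 25600 Gb/s (32 x 800G)
print(breakout_port_count(32, 400, 100))   # 128 x 100G from 32 x 400G
```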
It is not necessarily a matter of fully utilizing 800G on every port, but of supporting the bandwidth requirements of the end devices. Examples include spine-leaf connections running at 4 x 200G, or leaf-server connections where a 400G port is operated as 8 x 50G ports, which also makes the network considerably more energy efficient. A variety of cabling solutions and new transceiver interfaces exist to achieve this.
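The breakout options available on a given port follow from its electrical lane layout. The sketch below assumes the common case of eight lanes per port (8 x 50G for a 400G port, 8 x 100G for an 800G port); actual support depends on the switch, ASIC and transceiver.

```python
# Typical breakout options per port, assuming 8 electrical lanes per port.
LANES_PER_PORT = 8

def breakout_options(port_speed_gbps: int) -> list[str]:
    lane_speed = port_speed_gbps // LANES_PER_PORT
    options = []
    for group in (1, 2, 4, 8):                 # lanes bundled per logical port
        logical_ports = LANES_PER_PORT // group
        options.append(f"{logical_ports} x {group * lane_speed}G")
    return options

print(breakout_options(400))  # ['8 x 50G', '4 x 100G', '2 x 200G', '1 x 400G']
print(breakout_options(800))  # ['8 x 100G', '4 x 200G', '2 x 400G', '1 x 800G']
```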
LC duplex and MPO/MTP® connectors (12 fibers) are the familiar interfaces for transmission speeds of 10, 40 and 100G. For higher data rates of 400G, 800G and beyond, additional connector types such as MDC, SN and CS (very-small-form-factor connectors) have been introduced, as well as MPO/MTP connectors with 16 fibers in a single row or 24 fibers (two rows of 12 fibers).
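The fiber counts behind these interfaces are summarized in the sketch below; the pairing with parallel applications is a typical example, not an exhaustive list.

```python
# Fibers per connector interface (duplex connectors carry 2 fibers: 1 Tx + 1 Rx).
FIBERS_PER_CONNECTOR = {
    "LC duplex": 2,
    "MDC": 2,          # very-small-form-factor duplex
    "SN": 2,           # very-small-form-factor duplex
    "CS": 2,           # very-small-form-factor duplex
    "MPO/MTP-12": 12,  # single row of 12 fibers
    "MPO/MTP-16": 16,  # single row of 16 fibers
    "MPO/MTP-24": 24,  # two rows of 12 fibers
}

# An -R8 parallel application (e.g. an 8 x 100G breakout) needs 8 Tx + 8 Rx
# fibers, which is what the 16-fiber MPO/MTP variant provides.
print(FIBERS_PER_CONNECTOR["MPO/MTP-16"] // 2)  # 8 transmit/receive pairs
```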
Port breakout applications can increase sustainability
In addition to improving the utilization of high-speed ports and the associated port density, port breakout applications can also reduce the power consumption of network components and transceivers.
The power consumption of a 100G duplex transceiver for a QSFP-DD is about 4.5 watts, while a 400G parallel optical transceiver operated in breakout mode as 4 x 100G ports consumes only about 3 watts per port. This equates to savings of roughly 30 percent per port, not counting the additional savings in air conditioning/cooling and switch chassis power consumption, or the accompanying space savings.
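A quick calculation with the figures quoted above shows where the saving comes from; the wattages are the approximate values from the text, and actual transceiver power varies by vendor and optic type.

```python
# Per-port power: individual 100G duplex transceivers vs. a 400G parallel
# transceiver run in 4 x 100G breakout mode (approximate values from the text).
power_100g_duplex_w = 4.5    # one 100G duplex transceiver
power_400g_breakout_w = 3.0  # per 100G port of a 400G transceiver in breakout

saving = 1 - power_400g_breakout_w / power_100g_duplex_w
print(f"{saving:.0%} less power per 100G port")   # ~33% less

# Across a 32-port switch broken out to 128 x 100G:
ports = 128
print(f"{ports * (power_100g_duplex_w - power_400g_breakout_w):.0f} W saved")  # 192 W
```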
Effects on the network infrastructure
In addition to selecting a granular, scalable backbone, it is also important to plan sufficient fiber reserves so that future upgrades and expansions can be implemented with the least possible change effort.
With sufficient fiber reserve planned, network adjustments can be made by replacing only a few components: an upgrade from 10G to 40/100G or 400/800G, for example, can be achieved by swapping the MPO/MTP-to-LC modules and LC duplex patch cords for MTP adapter panels and MTP patch cords, without any changes to the backbone (fiber plant).
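The sketch below summarizes schematically which cabling layers change in such an upgrade, assuming a pre-installed MTP trunk backbone; the component names are generic, not a product-specific bill of materials.

```python
# Which layers change when moving from 10G duplex links to 40/100G or
# 400/800G parallel links over an existing MTP trunk backbone (schematic).
MIGRATION = {
    "backbone trunk (MTP, fiber plant)": "unchanged",
    "MPO/MTP-to-LC breakout modules":    "replaced by MTP adapter panels",
    "LC duplex patch cords":             "replaced by MTP patch cords",
    "transceivers / switch ports":       "upgraded to the new data rate",
}

for component, action in MIGRATION.items():
    print(f"{component:36s} -> {action}")
```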
Backbone or trunk cabling scales well when the lowest common multiple of the supported applications serves as its basis. For duplex applications, this classically corresponds to a factor of 4, i.e. Base-8 cabling, onto which -R4 or -R8 transceiver models can be mapped. This type of cabling therefore supports both current technologies and future developments.
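As a small illustration of why a Base-8 building block scales, the check below confirms that an 8-fiber trunk divides cleanly into duplex links, maps directly onto -R4 parallel optics, and serves -R8 optics in pairs. The transceiver examples in the comments are typical cases, and this is a sketch rather than a design rule for any specific system.

```python
# Why Base-8 works as a common building block: an 8-fiber trunk divides
# cleanly into the fiber counts that duplex and parallel applications need.
BASE = 8  # fibers per trunk building block

APPLICATIONS = {
    "duplex (e.g. 100G-FR, 2 fibers)": 2,
    "-R4 parallel (e.g. 400G-DR4, 8 fibers)": 8,
    "-R8 parallel (e.g. 800G-DR8, 16 fibers)": 16,
}

for name, fibers in APPLICATIONS.items():
    if fibers <= BASE:
        print(f"{name}: {BASE // fibers} link(s) per Base-8 trunk")
    else:
        print(f"{name}: needs {fibers // BASE} Base-8 trunks per link")
```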
Modular fiber housings also allow different technologies to be mixed and new connector interfaces (very-small-form-factor connectors) to be integrated in a few simple steps. Termination options are already available today in 8-, 12-, 24- and 36-fiber modules. The use of bend-insensitive fiber also helps to make the cabling infrastructure durable, reliable and fail-safe.
Being prepared for 800G pays off
Data rates of 400G or 800G are still a long way off for most enterprise data center operators, but bandwidth demand is growing, and fast. Sales of 400G and 800G transceivers are already on the rise, and it pays to prepare now rather than having to upgrade later under time pressure. With just a few changes, data center operators can make their facilities ready for 400G and 800G and be optimally prepared for the future.
To learn more, watch our webinar, The Road to 800G.