InfiniBand Cables Primer Overview
The InfiniBand Trade Association was formed in 1999 to solve two fundamental data center issues:

- The gap between CPU computing power and network speeds continued to widen—“no Moore’s Law for I/O.”
- Proliferation of network types—Ethernet, Fibre Channel, and proprietary high-performance computing (HPC) interconnects—consumed many I/O slots per server and significant rack space for switching.
The pursuit of ‘one network for messaging, storage, and HPC’ that could serve both small and very large data centers led to these InfiniBand design points:
- Native support for server-to-server memory access—remote direct memory access (RDMA); see the sketch after this list.
- Lossless switched network—no packet drops.
- Cut-through switching—a packet may begin transmission on an outbound switch port while it is still being received on an inbound port.
- Operating system (OS) bypass and zero memory copies within the server.
- Complete Transport Layer offload by the I/O adapter.
- Scalable Link Layer (cable) performance with backwards compatibility.
- A strategic vision for network performance growth.
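
The RDMA and OS-bypass design points are exposed to applications through the verbs API. The following is a minimal illustrative sketch, not part of the primer: it assumes the rdma-core libibverbs library, opens the first RDMA-capable adapter, and registers a buffer, producing the keys a peer would use for direct remote access. Error handling is abbreviated.

```c
/* Illustrative sketch only: open an InfiniBand device and register a
 * memory region with libibverbs (rdma-core). Registration pins the
 * buffer and returns keys that let the adapter move data directly,
 * bypassing the OS on the data path.
 * Build (assumption): gcc rdma_sketch.c -libverbs
 */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);          /* protection domain */

    size_t len = 4096;
    void *buf = malloc(len);

    /* Register the buffer; lkey/rkey are what a peer needs for RDMA access. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);
    printf("device %s: lkey=0x%x rkey=0x%x\n",
           ibv_get_device_name(devs[0]), mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```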
Cables are a key element in InfiniBand performance, scalability, and future-proofing. To achieve scalable performance, InfiniBand implements a multilane cable architecture, striping a serial data stream across N parallel physical lanes running at the same signaling rate. Figure 2 shows three link widths of 1, 4, and 12 parallel lanes, referred to as 1X, 4X, and 12X.
Figure 2. 1X, 4X, and 12X link widths
In copper cables, each lane uses four conductors—two differential pairs, one pair for transmitting and one for receiving. Optical cables use two fibers per lane, one for transmitting and the other for receiving.
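
To make the lane-striping arithmetic concrete, the short sketch below (illustrative only; the 25 Gbps lane rate is just an example value) prints the aggregate signaling rate and the copper-conductor and fiber counts for the 1X, 4X, and 12X widths shown in Figure 2.

```c
/* Illustrative sketch: aggregate signaling rate for the 1X, 4X, and 12X
 * link widths, plus the copper-conductor and fiber counts described
 * above. The lane rate is a parameter; 25 Gbps is an example value.
 */
#include <stdio.h>

int main(void)
{
    const double lane_rate_gbps = 25.0;          /* example lane rate */
    const int widths[] = { 1, 4, 12 };           /* 1X, 4X, 12X */

    for (int i = 0; i < 3; i++) {
        int lanes = widths[i];
        printf("%2dX: %6.1f Gbps aggregate, %2d copper conductors, %2d fibers\n",
               lanes, lanes * lane_rate_gbps,
               lanes * 4,   /* 2 differential pairs (TX + RX) per lane */
               lanes * 2);  /* 1 TX fiber + 1 RX fiber per lane */
    }
    return 0;
}
```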
Since its inception, InfiniBand performance has increased by a factor of 25, from the original single data rate (SDR) specification to the current high data rate (HDR) generation. Table 1 summarizes the past, present, and planned future of InfiniBand generations.
Table 1. InfiniBand generations
| Name | Signaling Rate per Lane (Gbps) | Effective Bandwidth for 4X Link (Gbps)¹ | Connector | Year |
|---|---|---|---|---|
| SDR | 2.5 | 8 | CX4 | 2003 |
| DDR | 5 | 16 | CX4 | 2006 |
| QDR | 10 | 32 | QSFP | 2008 |
| FDR10 | 10 | 39 | QSFP | 2013 |
| FDR | 14 | 54 | QSFP | 2013 |
| EDR | 25 | 97 | QSFP28 | 2015 |
| HDR | 50 | 200 | QSFP56 | 2019 |
| NDR | 100 | 400 | OSFP | 2021 |
| XDR | 250 | 800 | TBD | 2023² |

¹ Accounting for bit encoding overhead.
² XDR is planned after 2023.
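
Footnote 1 can be made concrete with a small calculation. The sketch below is illustrative: it assumes 8b/10b encoding for SDR through QDR and 64b/66b encoding from FDR10 onward, which reproduces the effective-bandwidth column for those generations; HDR and NDR appear in Table 1 at their nominal 4X rates.

```c
/* Illustrative sketch of footnote 1: how the "effective bandwidth for a
 * 4X link" column follows from the per-lane signaling rate and the bit
 * encoding (assumed here: 8b/10b for SDR-QDR, 64b/66b for FDR10-EDR).
 */
#include <stdio.h>

struct gen { const char *name; double lane_gbps; double enc; };

int main(void)
{
    const struct gen gens[] = {
        { "SDR",    2.5,  8.0 / 10.0 },   /* 8b/10b encoding  */
        { "DDR",    5.0,  8.0 / 10.0 },
        { "QDR",   10.0,  8.0 / 10.0 },
        { "FDR10", 10.0, 64.0 / 66.0 },   /* 64b/66b encoding */
        { "FDR",   14.0, 64.0 / 66.0 },
        { "EDR",   25.0, 64.0 / 66.0 },
    };

    for (size_t i = 0; i < sizeof(gens) / sizeof(gens[0]); i++)
        printf("%-6s 4X effective ~ %.0f Gbps\n",
               gens[i].name, 4 * gens[i].lane_gbps * gens[i].enc);
    return 0;
}
```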
Until the advent of 200 Gbps InfiniBand (HDR), 4X cables were the dominant width. HDR introduced a variant called HDR100 that uses two lanes at 50 Gbps to deliver a total of 100 Gbps, and a similar two-lane variant of NDR, NDR200, is planned. Because each 4X port can be split into two two-lane ports, these formats effectively double switch port density, reducing overall costs.
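
As a rough illustration of why the two-lane formats double switch density, the sketch below splits each 4X HDR port into two HDR100 ports; the 40-port switch radix is an example value, not something specified in the primer.

```c
/* Illustrative sketch: splitting a 4X HDR switch port (4 x 50 Gbps)
 * into two HDR100 ports (2 x 50 Gbps each). The 40-port switch size
 * is an example value.
 */
#include <stdio.h>

int main(void)
{
    const int switch_ports = 40;        /* example 4X HDR switch radix */
    const int lanes_per_port = 4;
    const double lane_gbps = 50.0;      /* HDR lane rate */

    double hdr_port_gbps    = lanes_per_port * lane_gbps;       /* 200 Gbps */
    double hdr100_port_gbps = (lanes_per_port / 2) * lane_gbps; /* 100 Gbps */

    printf("HDR:    %d ports x %.0f Gbps\n", switch_ports, hdr_port_gbps);
    printf("HDR100: %d ports x %.0f Gbps (same switch silicon)\n",
           switch_ports * 2, hdr100_port_gbps);
    return 0;
}
```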