Supported Interfaces
This section describes the interfaces supported by the ConnectX-8 SuperNIC. Each numbered interface referenced in the figures is described in the following table, with a link to detailed information.
The figures below are for illustration purposes only and might not reflect the current revision of the SuperNIC.
| ConnectX-8 Model | Front View | Back View |
|---|---|---|
| C8180 SuperNICs | ![]() | ![]() |
| C8240 SuperNICs | ![]() ![]() | ![]() |
| Item | Interface | Description |
|---|---|---|
| | ConnectX-8 IC | ConnectX-8 Integrated Circuit |
| 1 | Host Interface | PCIe Gen6 through x16 edge connector |
| 2 | Networking Interfaces | Network traffic is transmitted through the networking connectors. The networking connectors allow for the use of modules, and optical and passive cable interconnect solutions. |
| 3 | Networking Ports LEDs | Two I/O LEDs per port indicate speed and link status. |
| 4 | MCIO Connector | One MCIO connector is populated to allow connectivity to an additional PCIe x16 interface. |
| 5 | Sideband Management Interface | Allows for BMC connectivity for remote management. |
ConnectX-8 IC
The ConnectX-8 family of IC devices delivers InfiniBand and Ethernet connectivity paired with best-in-class hardware capabilities that accelerate and secure cloud and data-center workloads.
Host Interface
The ConnectX-8 SuperNIC supports PCI Express Gen6 (Gen5 and Gen4 compatible) through an x16 edge connector. The host interface features include:
- PCIe Gen6 or Gen5 (up to x32 PCIe lanes)
- NVIDIA Multi-Host™ (up to 4 hosts)
- PCIe switch downstream port containment (DPC); applies to 900-9X81E-00EX-DT0 only
- MSI/MSI-X
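After installation, the negotiated PCIe link can be checked from the host. The following is a minimal sketch, assuming a Linux host that exposes the device through sysfs; the PCI address is a placeholder and must be replaced with the address reported for the SuperNIC (for example, by lspci).

```python
from pathlib import Path

# Placeholder PCI address; replace with the address reported for the
# ConnectX-8 SuperNIC on your system (for example, by lspci).
PCI_ADDR = "0000:3b:00.0"

def pcie_link_status(pci_addr: str) -> dict:
    """Read the negotiated and maximum PCIe link parameters from sysfs."""
    dev = Path("/sys/bus/pci/devices") / pci_addr
    attrs = ("current_link_speed", "current_link_width",
             "max_link_speed", "max_link_width")
    return {attr: (dev / attr).read_text().strip() for attr in attrs}

if __name__ == "__main__":
    for name, value in pcie_link_status(PCI_ADDR).items():
        print(f"{name}: {value}")
    # A Gen6 x16 link reports a current speed of 64.0 GT/s and a width of 16.
```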
Networking Interfaces
Ethernet and InfiniBand traffic is transmitted through the networking connectors (QSFP112 or OSFP) on the SuperNIC.
The SuperNIC includes special circuits that protect it and the server from ESD shocks when copper cables are plugged in.
| Protocol | Specifications |
|---|---|
| Ethernet | The network ports comply with the IEEE 802.3 Ethernet standards listed in Features and Benefits. |
| InfiniBand | The network ports comply with the InfiniBand Architecture Specification, Release 1.7. |
Networking Ports LEDs Specifications
For the description of the networking ports LEDs, refer to the table below, depending on the ConnectX-8 SuperNIC you have purchased.

| SKU | LEDs Scheme |
|---|---|
| C8240 SuperNICs | Scheme 1: One Bi-Color LED |
| C8180 SuperNICs | Scheme 2: Two LEDs |
Scheme 1: One Bi-Color LED
There is one bi-color (Yellow and Green) I/O LED per port that indicates port speed and link status.

| State | Bi-Color LED (Yellow/Green) |
|---|---|
| Beacon command for locating the SuperNIC | 1Hz blinking Yellow |
| Error | 4Hz blinking Yellow, indicating an error with the link |
| Physical Activity | The Green LED blinks |
| Link Up | The Green LED is solid |
| Physical Up (IB Only) | The Yellow LED is solid |
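For troubleshooting notes or scripts, the table above can be reduced to a simple lookup. The sketch below is illustrative only: the observation strings are hypothetical labels for what a technician sees on the port LED, not values reported by any NVIDIA tool.

```python
# Hypothetical mapping of observed Scheme 1 LED behavior to port state,
# following the table above. The keys are informal labels, not values
# reported by any NVIDIA tool.
SCHEME1_LED_STATES = {
    "yellow blinking 1hz": "Beacon command for locating the SuperNIC",
    "yellow blinking 4hz": "Error (link error)",
    "green blinking":      "Physical Activity",
    "green solid":         "Link Up",
    "yellow solid":        "Physical Up (IB only)",
}

def decode_scheme1(observation: str) -> str:
    """Return the Scheme 1 port state for an observed LED behavior."""
    return SCHEME1_LED_STATES.get(observation.lower(), "Unknown LED behavior")

print(decode_scheme1("Green solid"))  # -> Link Up
```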
Scheme 2: Two LEDs
There are two I/O LEDs per port that indicate port speed and link status.
- LED1 is a bi-color LED (Yellow and Green)
- LED2 is a single-color LED (Green)

| State | Bi-Color LED (Yellow/Green) | Single Color LED (Green) |
|---|---|---|
| Beacon command for locating the SuperNIC | 1Hz blinking Yellow | OFF |
| Error | 4Hz blinking Yellow, indicating an error with the link | ON |
| Physical Activity | The Green LED blinks | Blinking |
| Link Up | At full port speed: the Green LED is solid. At less than full port speed: the Yellow LED is solid | ON |
MCIO Connector
The MCIO (Multi-Channel I/O) connector in ConnectX-8 SuperNICs is a high-speed interface that provides efficient, scalable, and flexible connectivity for various data center applications. This connector supports multiple lanes of high-bandwidth data transfer, enabling faster and more efficient communication between the network card and the system or other connected components.
The 124-pin MCIO connector (Amphenol, SFF-TA-1016) allows connectivity to an additional PCIe x16 interface or to DSP devices (NVMe SSDs) via an MCIO cable. For pinouts, refer to the MCIO Interface.

Sideband Management Interface
The sideband management interface in ConnectX-8 SuperNICs enhances remote manageability, diagnostics, and maintenance capabilities, critical for high-availability environments like data centers and cloud infrastructure.
The sideband management interface (a 30-pin IPEX connector) in ConnectX-8 SuperNICs enables out-of-band management, allowing administrators to monitor and control the network device independently of regular data traffic. It supports remote monitoring, even when the host system is unresponsive, by integrating with Baseboard Management Controllers (BMC) for tasks like firmware updates, diagnostics, and health monitoring. This interface ensures continuous management of the NIC's performance, security, and status without disrupting network operations, making it vital for maintaining uptime in data centers and cloud environments.

The table below specifies the maximum sideband trace length on the board for each board type. Take this length into account in your design.
| SKUs | Maximum Trace Length on the Board |
|---|---|
| C8240 SuperNIC: 900-9X81Q-00CN-ST0 | 140mm (5.51 inch) |
| C8180 SuperNICs: 900-9X81E-00EX-ST0, 900-9X81E-00EX-DT0 | 75mm (2.95 inch) |
ConnectX-8 Cable Extender Debugging Kit
An optional accessory is available for debugging purposes: the ConnectX-8 Cable Extender board provides access to the MTUSB, PPS, NCSI, EN_INB_REC, and FNP interfaces.
The kit includes the extender board and a 200mm IPEX cable (micro-coax, pin-to-pin, lock-to-lock) that connects to the 30-pin connector on the ConnectX-8 SuperNIC.
| OPN | Description |
|---|---|
| 930-9XCBL-000A-000 | NVIDIA ConnectX-8 200mm Cable Extender for Low-Speed Signals Over 30p Debug Connector |
