Supported Interfaces
This section describes the ConnectX-7 supported interfaces. The figures below show the component side of the NVIDIA ConnectX-7 adapter card. Each numbered interface referenced in the figures is described in the following table with a link to detailed information.
The figures below are for illustration purposes only and might not reflect the current revision of the adapter card.
Figure: Single port OSFP Cards (component side)
Figure: Dual port QSFP112 Cards (component side)
| Item | Interface | Description |
|------|-----------|-------------|
| 1 | ConnectX-7 IC | ConnectX-7 Integrated Circuit |
| 2 | PCI Express Interface | PCIe Gen 5.0 through x16 edge connector |
| 3 | Networking Interfaces | Network traffic is transmitted through the adapter card QSFP112/OSFP connectors. The networking connectors allow for the use of modules, and optical and passive cable interconnect solutions. |
| 4 | Networking Ports LEDs Interface | Two I/O LEDs per port to indicate speed and link status |
| 5 | FRU EEPROM | FRU EEPROM capacity is 16KB |
| 6 | SMBus Interface | Allows BMC connectivity using MCTP over SMBus or MCTP over PCIe protocols |
| 7 | Voltage Regulators | Voltage supply pins that feed onboard regulators |
| 8 | CPLD | Controls the networking port logic LED (LED0) and implements the OCP 3.0 host scan chain |
| 9 | Heatsink | Dissipates the heat |
ConnectX-7 IC
The ConnectX-7 family of adapter IC devices delivers two ports of NDR200/200GbE or a single port of NDR/400GbE connectivity, paired with best-in-class hardware capabilities that accelerate and secure cloud and data-center workloads.
NVIDIA Multi-Host™ Support
In addition to bringing exceptionally high bandwidth to the data center, the ConnectX-7 device makes this speed available across the data center through its NVIDIA Multi-Host feature.
Using its 16-lane PCI Express interface, a single ConnectX-7 device can provide 400GbE interconnect for up to four independent hosts without any performance degradation.
The figure below shows a ConnectX-7 device with NVIDIA Multi-Host connected on one side to four separate hosts, each through a PCIe x4 interface, and on the other side to a switch.
The following bifurcation options are available for the adapter's x16 PCIe interface:
x1 PCIe x16, x1 PCIe x8, x1 PCIe x4
x2 PCIe x8, x2 PCIe x4, x2 PCIe x2, x2 PCIe x1
x4 PCIe x4, x4 PCIe x2, x4 PCIe x1
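For a quick sanity check from a Linux host, the link width and speed that were actually negotiated can be read from sysfs. The sketch below is illustrative only; the PCI address is a placeholder and should be replaced with the one reported by lspci for the ConnectX-7 device.

```python
# Minimal sketch: read the negotiated PCIe link width and speed of the
# adapter from Linux sysfs. The BDF below is a placeholder; use the
# address reported by `lspci` for your ConnectX-7 device.
from pathlib import Path

BDF = "0000:3b:00.0"  # hypothetical PCI bus/device/function
dev = Path("/sys/bus/pci/devices") / BDF

width = (dev / "current_link_width").read_text().strip()
speed = (dev / "current_link_speed").read_text().strip()
print(f"{BDF}: x{width} @ {speed}")
```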
Multi-host-capable cards also support Socket Direct applications or work as regular single-host cards, depending on the type of server they are plugged into, assuming the server complies with the OCP 3.0 spec.
According to the OCP 3.0 spec, the adapter card advertises its capability through the PRSNTB[3:0]# pins. The server determines the configuration through the BIF[2:0]# pins, which it drives to the adapter card.
The NVIDIA OCP 3.0 card has internal logic that uses the BIF[2:0]# data to determine the correct operating mode to boot in. The combination of the PRSNTB[3:0]# and BIF[2:0]# pins deterministically sets the PCIe lane width for a given combination of OCP 3.0 card and baseboard. The logic and the decoding table can be found in the OCP 3.0 spec (Chapter 3.5, PCIe Bifurcation Mechanism).
For example, the NVIDIA OCP 3.0 multi-host adapter drives 0100 on PRSNTB[3:0]# to the server. The server then selects the PCIe mode by driving BIF[2:0]#, as shown in the following table:
| Server Drives BIF[2:0]# | Adapter PCIe Mode |
|-------------------------|-------------------|
| 000 | Single-Host Mode: x1 PCIe x16 |
| 001 | Socket Direct Mode: x2 PCIe x8 |
| 010 | Socket Direct Mode: x4 PCIe x4 |
| 101 | Multi-Host Mode: x2 PCIe x8 |
| 110 | Multi-Host Mode: x4 PCIe x4 |
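The decode described above can be summarized in a few lines of code. The sketch below simply mirrors the table for a card that advertises 0100 on PRSNTB[3:0]#; it is an illustration of the mapping, not the adapter's internal CPLD logic, and unlisted BIF[2:0]# values are treated as reserved.

```python
# Illustrative decode of the BIF[2:0]# value driven by the server for an
# OCP 3.0 multi-host adapter advertising 0100 on PRSNTB[3:0]#.
# This mirrors the table above; unlisted values are treated as reserved.
BIF_DECODE = {
    0b000: "Single-Host Mode: x1 PCIe x16",
    0b001: "Socket Direct Mode: x2 PCIe x8",
    0b010: "Socket Direct Mode: x4 PCIe x4",
    0b101: "Multi-Host Mode: x2 PCIe x8",
    0b110: "Multi-Host Mode: x4 PCIe x4",
}

def decode_bif(bif: int) -> str:
    """Return the adapter PCIe mode for a 3-bit BIF[2:0]# value."""
    return BIF_DECODE.get(bif & 0b111, "Reserved")

print(decode_bif(0b110))  # Multi-Host Mode: x4 PCIe x4
```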
PCI Express Interface
The table below describes the supported PCIe interface in ConnectX-7 OCP 3.0 adapter cards.
PCIe Gen 5.0 compliant; 4.0, 3.0, 2.0 and 1.1 compatible
2.5, 5.0, 8.0, 16.0 and 32.0 GT/s link rate, x16
Support for PCIe bifurcation: Auto-negotiates to x16, x8, x4, x2, or x1
NVIDIA Multi-Host™ supports connection of up to 4x hosts
Transaction layer packet (TLP) processing hints (TPH)
PCIe switch Downstream Port Containment (DPC)
Advanced error reporting (AER)
Access Control Service (ACS) for peer-to-peer secure communication
Process Address Space ID (PASID)
Address translation services (ATS)
Support for MSI/MSI-X mechanisms
Support for SR-IOV
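As an example of using one of these capabilities from the host, SR-IOV virtual functions can be created through the standard Linux sysfs interface once SR-IOV has been enabled in the adapter firmware and the system BIOS. The sketch below is a minimal illustration; the PCI address and VF count are placeholders.

```python
# Minimal sketch: create SR-IOV virtual functions on the adapter via the
# standard Linux sysfs interface (requires root). The BDF and VF count are
# placeholders; SR-IOV must be enabled in firmware and BIOS beforehand.
from pathlib import Path

BDF = "0000:3b:00.0"   # hypothetical PCI address of the physical function
NUM_VFS = 4            # number of virtual functions to request

dev = Path("/sys/bus/pci/devices") / BDF
total = int((dev / "sriov_totalvfs").read_text())
if NUM_VFS > total:
    raise SystemExit(f"device supports at most {total} VFs")

(dev / "sriov_numvfs").write_text(str(NUM_VFS))
print(f"requested {NUM_VFS} VFs on {BDF}")
```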
Networking Interfaces
The adapter card includes special circuits to protect the card/server from ESD shocks when plugging copper cables.
| Protocol | Specifications |
|----------|----------------|
| Ethernet | The network ports are compliant with the IEEE 802.3 Ethernet standards listed in Features and Benefits. Ethernet traffic is transmitted through the networking connectors on the adapter card. |
| InfiniBand | The network ports are compliant with the InfiniBand Architecture Specification, Release 1.5. InfiniBand traffic is transmitted through the cards' networking connectors. |
Networking Ports LEDs Specifications
There are two I/O LEDs per port to indicate port speed and link status.
LED0 is a bi-color LED (Yellow and Green)
LED2 is a single-color LED (Green)
Figure: Single port OSFP Cards (port LEDs)
Figure: Dual port QSFP112 Cards (port LEDs)
| State | Bi-Color LED (Yellow/Green) | Single-Color LED (Green) |
|-------|------------------------------|--------------------------|
| Beacon command for locating the adapter card | 1Hz blinking Yellow | OFF |
| Error | 4Hz blinking Yellow indicates an error with the link | ON |
| Link Up | At full port speed: solid Green. At less than full port speed: solid Yellow | ON |
| Physical Activity | Blinking Green | Blinking |
FRU EEPROM
The FRU EEPROM allows the baseboard to identify different types of OCP 3.0 cards. It is accessible through SMCLK and SMDATA; its address is defined according to SLOT_ID0 and SLOT_ID1, and its capacity is 16KB.
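For illustration, the FRU data can be read from a host that exposes the SMBus as a Linux I2C bus. The sketch below uses the smbus2 package; the bus number and EEPROM address are assumptions, since the actual address depends on the SLOT_ID0/SLOT_ID1 strapping of the slot.

```python
# Minimal sketch: read the first bytes of the FRU EEPROM over SMBus/I2C
# using the smbus2 package. The bus number and device address are
# assumptions; the real address is set by SLOT_ID0/SLOT_ID1.
from smbus2 import SMBus, i2c_msg

I2C_BUS = 1          # hypothetical host SMBus/I2C bus number
EEPROM_ADDR = 0x50   # hypothetical 7-bit EEPROM address
OFFSET = 0x0000      # start of the FRU data area
LENGTH = 32          # bytes to read

with SMBus(I2C_BUS) as bus:
    # Write a 16-bit offset, then read back LENGTH bytes with a repeated
    # start, as is typical for large (16KB) EEPROM devices.
    set_offset = i2c_msg.write(EEPROM_ADDR, [OFFSET >> 8, OFFSET & 0xFF])
    read_back = i2c_msg.read(EEPROM_ADDR, LENGTH)
    bus.i2c_rdwr(set_offset, read_back)

print(bytes(read_back).hex())
```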
SMBus Interface
ConnectX-7 technology maintains support for manageability through a BMC. The ConnectX-7 OCP 3.0 adapter can be connected to a BMC using MCTP over SMBus or MCTP over PCIe protocols, as with a standard NVIDIA OCP 3.0 adapter. To configure the adapter for the specific manageability solution in use by the server, please contact NVIDIA Support.
Voltage Regulators
The adapter card incorporates a CPLD device that implements the OCP 3.0 host scan chain and controls the networking port logic LED (LED0). It draws its power supply from the 3.3V_EDGE and 12V_EDGE rails.