NVIDIA ConnectX-5 InfiniBand/Ethernet Adapter Cards User Manual


This is the User Guide for InfiniBand/Ethernet adapter cards based on the ConnectX®-5 integrated circuit device. These adapters provide the highest-performing and most flexible interconnect solution for PCI Express Gen 3.0/4.0 servers used in Enterprise Data Centers, High-Performance Computing, and Embedded environments.

The following table provides the ordering part number, port speed, number of ports, and PCI Express speed. Each adapter comes with two bracket heights - short and tall.

ConnectX-5 Ex InfiniBand/Ethernet Adapter Cards

Part Number:
Data Rate: InfiniBand: up to EDR (100 Gb/s); Ethernet: 10/25/40/50/100 Gb/s
Network Connector Type: Dual-port QSFP28
PCI Express Connectors: PCIe Gen 3.0/4.0 x16 (a), (b); SerDes @ 8.0 GT/s / 16.0 GT/s
Dimensions: 2.71 in. x 5.6 in. (68.90 mm x 142.24 mm), low profile
RoHS: RoHS Compliant
Adapter IC Part Number:
Device ID (decimal): 4121 for Physical Function (PF) and 4122 for Virtual Function (VF)

a. The PCIe 4.0 x16 bus can supply a maximum bandwidth of 256 Gb/s (= 16 lanes * 16 GT/s, including encoding overhead), and can support 200 Gb/s when both network ports of the card run at 100 Gb/s.
b. This card has been tested and certified with PCIe 3.0 servers. The PCIe 4.0 interface will be tested when servers with Gen 4.0 support become available.
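
The 256 Gb/s figure in footnote (a) follows directly from the lane count and per-lane rate; after the 128b/130b encoding used by PCIe Gen 3.0/4.0, the usable figure is slightly lower, which is still ample for two ports at 100 Gb/s. A minimal sketch of the arithmetic (not part of any NVIDIA tooling):

    # Back-of-the-envelope PCIe Gen 4.0 x16 bandwidth check.
    LANES = 16
    GEN4_GT_S = 16.0         # GT/s per lane for PCIe Gen 4.0
    ENCODING = 128 / 130     # 128b/130b line encoding (Gen 3.0/4.0)

    raw_gbps = LANES * GEN4_GT_S        # 256 Gb/s, including encoding overhead
    usable_gbps = raw_gbps * ENCODING   # ~252 Gb/s after encoding

    print(f"raw: {raw_gbps:.0f} Gb/s, usable: {usable_gbps:.1f} Gb/s")
    # Two network ports at 100 Gb/s each (200 Gb/s total) fit within this budget.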

ConnectX-5 InfiniBand/Ethernet Adapter Cards

Part Number:
Data Rate: InfiniBand: up to EDR (100 Gb/s); Ethernet: 10/25/40/50/100 Gb/s
Network Connector Type: Single-port QSFP28 or dual-port QSFP28, depending on the card model
PCI Express Connectors: PCIe Gen 3.0 x16; SerDes @ 8.0 GT/s
Dimensions: 2.71 in. x 5.6 in. (68.90 mm x 142.24 mm), low profile
RoHS: RoHS Compliant
Adapter IC Part Number:
Device ID (decimal): 4119 for Physical Function (PF) and 4120 for Virtual Function (VF)
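
When a deployed card must be identified programmatically, the decimal device IDs listed above can be matched against sysfs on Linux, together with the Mellanox/NVIDIA networking PCI vendor ID 0x15b3. The sketch below is illustrative only; the hex constants are simply the decimal IDs from the two tables:

    # Sketch: locate ConnectX-5 physical and virtual functions on Linux.
    from pathlib import Path

    CONNECTX5_IDS = {
        0x1017: "ConnectX-5 PF (4119)",
        0x1018: "ConnectX-5 VF (4120)",
        0x1019: "ConnectX-5 Ex PF (4121)",
        0x101A: "ConnectX-5 Ex VF (4122)",
    }

    for dev in Path("/sys/bus/pci/devices").iterdir():
        vendor = int((dev / "vendor").read_text(), 16)
        device = int((dev / "device").read_text(), 16)
        if vendor == 0x15B3 and device in CONNECTX5_IDS:
            print(dev.name, CONNECTX5_IDS[device])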

For more detailed information, see Specifications.


This section describes hardware features and capabilities. Please refer to the relevant driver and/or firmware release notes for feature availability.



PCI Express (PCIe)

Uses PCIe Gen 3.0 (8 GT/s) and Gen 4.0 (16 GT/s) through an x16 edge connector. Backward compatible with Gen 1.1 and Gen 2.0.
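
To confirm which rate and width the adapter actually negotiated, Linux exposes the current link parameters in sysfs. A minimal sketch, assuming a hypothetical PCI address (find yours with lspci):

    # Sketch: read the negotiated PCIe link rate and width from sysfs.
    from pathlib import Path

    dev = Path("/sys/bus/pci/devices/0000:03:00.0")  # hypothetical address
    speed = (dev / "current_link_speed").read_text().strip()  # e.g. "16.0 GT/s PCIe"
    width = (dev / "current_link_width").read_text().strip()  # e.g. "16"
    print(f"negotiated link: {speed} x{width}")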

EDR InfiniBand

A standard InfiniBand data rate, where each lane of a 4X port runs at a bit rate of 25.78125 Gb/s with 64b/66b encoding, resulting in an effective bandwidth of 100 Gb/s.
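
The effective bandwidth follows directly from the lane count, per-lane bit rate, and line encoding. A minimal sketch of the arithmetic:

    # Sketch: EDR effective bandwidth from the figures above.
    LANES = 4
    LANE_RATE_GB_S = 25.78125   # bit rate per lane
    ENCODING = 64 / 66          # 64b/66b line encoding

    effective_gbps = LANES * LANE_RATE_GB_S * ENCODING
    print(f"effective bandwidth: {effective_gbps:.0f} Gb/s")  # 100 Gb/s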

100Gb/s InfiniBand/Ethernet Adapter

ConnectX-5 is the highest-throughput InfiniBand/Ethernet adapter, supporting EDR 100Gb/s InfiniBand and 100Gb/s Ethernet, and enabling any standard networking, clustering, or storage protocol to operate seamlessly over any converged network leveraging a consolidated software stack.

InfiniBand Architecture Specification v1.3 compliant

ConnectX-5 delivers low latency, high bandwidth, and computing efficiency for performance-driven server and storage clustering applications. ConnectX-5 is InfiniBand Architecture Specification v1.3 compliant.

Up to 100 Gigabit Ethernet

NVIDIA adapters comply with the following IEEE 802.3 standards:

  • 100GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE

  • IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet

  • IEEE 802.3by, Ethernet Consortium 25/50 Gigabit Ethernet, supporting all FEC modes

  • IEEE 802.3ba 40 Gigabit Ethernet

  • IEEE 802.3ae 10 Gigabit Ethernet

  • IEEE 802.3ap based auto-negotiation and KR startup

  • Proprietary Ethernet protocols (20/40GBASE-R2, 50GBASE-R4)

  • IEEE 802.3ad, 802.1AX Link Aggregation

  • IEEE 802.1Q, 802.1P VLAN tags and priority

  • IEEE 802.1Qau (QCN) - Congestion Notification

  • IEEE 802.1Qaz (ETS)

  • IEEE 802.1Qbb (PFC)

  • IEEE 802.1Qbg

  • IEEE 1588v2

  • Jumbo frame support (9.6KB); see the MTU sketch after this list

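Enabling jumbo frames is a host-side MTU setting on the adapter's network interface; 9000 bytes is a common choice, within the 9.6KB the adapter supports. A minimal sketch, assuming a hypothetical interface name and root privileges:

    # Sketch: raise the interface MTU to use jumbo frames (Linux, iproute2).
    import subprocess

    IFACE = "enp3s0f0"  # hypothetical interface name
    subprocess.run(["ip", "link", "set", "dev", IFACE, "mtu", "9000"], check=True)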

Memory Components

  • SPI Quad - includes a 128Mbit SPI Quad Flash device (W25Q128FVSIG by Winbond).

  • FRU EEPROM - stores the parameters and personality of the card. The EEPROM capacity is 128Kbit. The FRU I2C address is 0x50, accessible through the PCIe SMBus. Note: address 0x58 is reserved. A read sketch follows this list.
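
The FRU contents can in principle be read from any controller on the same SMBus segment, using the 0x50 address noted above. The sketch below assumes the third-party smbus2 Python package, a hypothetical I2C bus number (which bus exposes the adapter's SMBus varies by platform, and on many servers it is reachable only from the BMC), and 16-bit byte addressing, since a 128Kbit part holds 16KB:

    # Sketch: read the first 16 bytes of the FRU EEPROM at I2C address 0x50.
    from smbus2 import SMBus, i2c_msg

    BUS = 1        # hypothetical I2C bus number
    ADDR = 0x50    # FRU EEPROM address from the list above
    OFFSET = 0x0000
    LENGTH = 16

    with SMBus(BUS) as bus:
        set_offset = i2c_msg.write(ADDR, [OFFSET >> 8, OFFSET & 0xFF])
        read_back = i2c_msg.read(ADDR, LENGTH)
        bus.i2c_rdwr(set_offset, read_back)   # combined write-then-read
        print(bytes(list(read_back)).hex(" "))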

Overlay Networks

In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-5 effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and decapsulate the overlay protocol.
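
On Linux, one way to confirm that the tunnel offload path is active is to inspect the kernel's offload feature flags for the adapter's interface. A minimal sketch, assuming a hypothetical interface name:

    # Sketch: check whether UDP tunnel (e.g. VXLAN) segmentation offload is on.
    import subprocess

    IFACE = "enp3s0f0"  # hypothetical interface name
    out = subprocess.run(["ethtool", "-k", IFACE],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "tx-udp_tnl-segmentation" in line:
            print(line.strip())  # e.g. "tx-udp_tnl-segmentation: on"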

RDMA and RDMA over Converged Ethernet (RoCE)

ConnectX-5, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low-latency and high performance over InfiniBand and Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as ConnectX-5 advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.

NVIDIA PeerDirect™

PeerDirect™ communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-5 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.

CPU Offload

Adapter functionality that reduces CPU overhead, leaving more CPU cycles available for computation tasks.

Open VSwitch (OVS) offload using ASAP2

  • Flexible match-action flow tables

  • Tunneling encapsulation/decapsulation
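
On Linux, OVS hardware offload via ASAP2 is commonly prepared by switching the NIC's embedded switch (e-switch) into switchdev mode and enabling TC flower hardware offload on the uplink. A minimal sketch of those two steps, assuming hypothetical device names and root privileges; consult the driver documentation for the authoritative procedure:

    # Sketch: typical host-side steps before enabling OVS hardware offload.
    import subprocess

    PCI_DEV = "pci/0000:03:00.0"  # hypothetical PCI address of the PF
    IFACE = "enp3s0f0"            # hypothetical uplink interface name

    # Move the e-switch to switchdev mode so flows can be offloaded.
    subprocess.run(["devlink", "dev", "eswitch", "set", PCI_DEV,
                    "mode", "switchdev"], check=True)
    # Enable TC flower hardware offload on the uplink.
    subprocess.run(["ethtool", "-K", IFACE, "hw-tc-offload", "on"], check=True)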

Quality of Service (QoS)

Support for port-based Quality of Service, enabling a range of per-application latency and SLA requirements.

Hardware-based I/O Virtualization

ConnectX-5 provides dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server.

Storage Acceleration

A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage InfiniBand RDMA for high-performance storage access.

  • NVMe over Fabric offloads for the target machine

  • Erasure Coding

  • T10-DIF Signature Handover


SR-IOV

ConnectX-5 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server.
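
On Linux, VFs are typically instantiated through the standard sysfs SR-IOV interface; each VF then enumerates with the VF device ID listed in the tables above. A minimal sketch, assuming a hypothetical PCI address, root privileges, and SR-IOV enabled in firmware and BIOS:

    # Sketch: create SR-IOV virtual functions via the standard sysfs interface.
    from pathlib import Path

    pf = Path("/sys/bus/pci/devices/0000:03:00.0")  # hypothetical PF address
    print("max VFs:", (pf / "sriov_totalvfs").read_text().strip())
    (pf / "sriov_numvfs").write_text("4")  # instantiate 4 VFs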

High-Performance Accelerations

  • Tag Matching and Rendezvous Offloads

  • Adaptive Routing on Reliable Transport

  • Burst Buffer Offloads for Background Checkpointing


UEFI

UEFI is a standard firmware interface designed to replace BIOS. The NVIDIA UEFI network driver allows boot over the network via PXE (Preboot eXecution Environment). It enables remote boot over InfiniBand or Ethernet, or boot over iSCSI (Bo-iSCSI) in UEFI mode, and supports the Secure Boot standard. The UEFI network driver gives IT managers the flexibility to deploy servers with a single adapter card into InfiniBand or Ethernet networks while also enabling booting from LAN or remote storage targets. In addition to boot capabilities, the NVIDIA UEFI network driver provides firmware management and diagnostic protocols compliant with the UEFI specification.

For further information, refer to the NVIDIA PreBoot Drivers User Manual.
Supported in MCX556A-ECUT.

Operating Systems/Distributions

  • RHEL/CentOS

  • Windows

  • FreeBSD

  • VMware

  • OpenFabrics Enterprise Distribution (OFED)

  • OpenFabrics Windows Distribution (WinOF-2)

Connectivity

  • Interoperable with 1/10/25/40/50/100 Gb/s Ethernet switches

  • Passive copper cable with ESD protection

  • Powered connectors for optical and active cable support

© Copyright 2023, NVIDIA. Last updated on May 22, 2023.