NVIDIA ConnectX-7 Cards for OCP Spec 3.0 User Manual

Introduction

This is the User Guide for InfiniBand/VPI adapter cards based on the ConnectX-7 integrated circuit device for OCP Spec 3.0. These adapters provide the highest-performing and most flexible interconnect solution for OCP 3.0 servers used in enterprise data centers and high-performance computing environments. ConnectX-7 network adapters are offered in two form factors, each in various flavors: stand-up PCIe cards and Open Compute Project (OCP) Spec 3.0 cards. This user manual covers the OCP 3.0 cards; for the low-profile PCIe stand-up cards, please refer to the ConnectX-7 PCIe Stand-up Cards User Manual.

Providing up to two ports of NDR200/200GbE connectivity or a single port of NDR/400GbE connectivity, together with PCIe Gen 5.0 x16 host connectivity, ConnectX-7 is a member of NVIDIA's world-class, award-winning ConnectX family of network adapters. Continuing NVIDIA's consistent innovation in networking, ConnectX-7 provides agility and efficiency at every scale, delivering cutting-edge performance and security for uncompromising data centers.

In addition to the Small Form Factor (SFF), ConnectX-7 cards for OCP 3.0 are also available in the newly added Tall-SFF (TSFF) spec form factor, whose added card height allows better thermal performance. Contact NVIDIA for more details.

PCI Express slot: PCIe Gen 5.0/4.0 (32 GT/s / 16 GT/s) through the x16 edge connector (x16 configuration).

System Power Supply: Refer to Specifications.

Operating System:

  • In-box drivers for major operating systems:

    • Linux: RHEL, Ubuntu

    • Windows

  • Virtualization and containers

    • VMware ESXi (SR-IOV)

    • Kubernetes

  • OpenFabrics Enterprise Distribution (OFED)

  • OpenFabrics Windows Distribution (WinOF-2)

Connectivity:

  • Interoperable with 1/10/25/40/50/100/200/400 Gb/s Ethernet switches and SDR/DDR/EDR/HDR100/HDR/NDR200/NDR InfiniBand switches

  • Passive copper cable with ESD protection

  • Powered connectors for optical and active cable support

Package contents:

  • Cards (Qty 1): ConnectX-7 adapter card for OCP 3.0

  • Accessories (Qty 1): Thumbscrew (Pull Tab) Bracket

For more detailed information, see Specifications.

Warning

This section describes hardware features and capabilities. Please refer to the relevant driver and/or firmware release notes for feature availability.


InfiniBand Architecture Specification v1.5 compliant

ConnectX-7 delivers low latency, high bandwidth, and computing efficiency for high-performance computing (HPC), artificial intelligence (AI), and hyperscale cloud data center applications. ConnectX-7 is InfiniBand Architecture Specification v1.5 compliant.

InfiniBand Network Protocols and Rates (per-port line rates in Gb/s, for 4-lane (4x) and 2-lane (2x) port configurations):

  • NDR/NDR200 (IBTA Vol2 1.5): 425 (4x) / 212.5 (2x); PAM4, 256b/257b encoding and RS-FEC

  • HDR/HDR100 (IBTA Vol2 1.4): 212.5 (4x) / 106.25 (2x); PAM4, 256b/257b encoding and RS-FEC

  • EDR (IBTA Vol2 1.3.1): 103.125 (4x) / 51.5625 (2x); NRZ, 64b/66b encoding

  • FDR (IBTA Vol2 1.2): 56.25 (4x) / N/A (2x); NRZ, 64b/66b encoding
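As a quick cross-check of the rates above, each per-port value is simply the per-lane line rate multiplied by the lane count. The short Python sketch below is illustrative only; the per-lane rates are derived from the table (port rate divided by four lanes), and the effective-data comments reflect the encoding/FEC overhead noted above rather than values stated in this manual.

```python
# Illustrative arithmetic: reproduce the per-port line rates in the table
# above from per-lane line rates (port rate / 4, in Gb/s).
LANE_LINE_RATE_GBPS = {
    "NDR": 106.25,    # PAM4; ~100 Gb/s of data per lane after 256b/257b + RS-FEC
    "HDR": 53.125,    # PAM4; ~50 Gb/s of data per lane
    "EDR": 25.78125,  # NRZ;  25 Gb/s of data per lane after 64b/66b
    "FDR": 14.0625,   # NRZ;  64b/66b encoding
}

def port_line_rate(protocol: str, lanes: int) -> float:
    """Aggregate line rate (Gb/s) of a port made of `lanes` lanes."""
    return LANE_LINE_RATE_GBPS[protocol] * lanes

print(port_line_rate("NDR", 4))   # 425.0   -> NDR, 4x port
print(port_line_rate("NDR", 2))   # 212.5   -> NDR200, 2x port
print(port_line_rate("HDR", 4))   # 212.5   -> HDR, 4x port
print(port_line_rate("HDR", 2))   # 106.25  -> HDR100, 2x port
```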

Up to 400Gb/s Ethernet

ConnectX-7 adapter cards comply with the following IEEE 802.3 standards:

400GbE / 200GbE / 100GbE / 50GbE / 40GbE / 25GbE / 10GbE

Protocols and MAC rates:

  • IEEE 802.3ck: 100/200/400 Gigabit Ethernet (including ETC enhancement)

  • IEEE 802.3cd, IEEE 802.3bs, IEEE 802.3cm, IEEE 802.3cn, IEEE 802.3cu: 50/100/200/400 Gigabit Ethernet (including ETC enhancement)

  • IEEE 802.3bj, IEEE 802.3bm: 100 Gigabit Ethernet

  • IEEE 802.3by, Ethernet Consortium: 25/50 Gigabit Ethernet

  • IEEE 802.3ba: 40 Gigabit Ethernet

  • IEEE 802.3ae: 10 Gigabit Ethernet

  • IEEE 802.3cb: 2.5/5 Gigabit Ethernet (for 2.5 GbE, only 2.5 x 1000BASE-X is supported)

  • IEEE 802.3ap: auto-negotiation and KR startup

  • IEEE 802.3ad, IEEE 802.1AX: Link Aggregation

  • IEEE 802.1Q, IEEE 802.1P: VLAN tags and priority

  • IEEE 802.1Qau (QCN): Congestion Notification

  • IEEE 802.1Qaz (ETS)

  • IEEE 802.1Qbb (PFC)

  • IEEE 802.1Qbg

  • IEEE 1588v2

  • IEEE 802.1AE (MACsec)

  • Jumbo frame support (9.6KB) (see the sketch below)
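Since jumbo frames appear in the list above, the minimal sketch below shows one common way to use them on a Linux host: raising the interface MTU with iproute2. The interface name and the 9000-byte MTU are placeholders, and the exact maximum frame size that can be configured depends on the driver and firmware in use.

```python
# Minimal sketch: enable jumbo frames on a Linux interface via iproute2.
# "enp8s0f0" and the 9000-byte MTU are illustrative placeholders.
import subprocess

IFACE = "enp8s0f0"   # hypothetical ConnectX-7 netdev name
MTU = 9000           # jumbo MTU; the card supports frames up to 9.6KB

subprocess.run(["ip", "link", "set", "dev", IFACE, "mtu", str(MTU)], check=True)

# Verify the change.
subprocess.run(["ip", "-d", "link", "show", "dev", IFACE], check=True)
```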

Memory Components

  • SPI Flash - the card includes a 256Mbit SPI Quad Flash device.

  • FRU EEPROM - stores the parameters and personality of the card. The EEPROM capacity is 128Kbit (16KB). The FRU I2C address is 0x50 and is accessible through the PCIe SMBus; address 0x58 is reserved. (See the read example after this list.)
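Purely for illustration, the sketch below reads the first bytes of the FRU EEPROM at address 0x50, assuming a Linux host where the relevant SMBus segment is exposed through i2c-dev and the smbus2 package is installed; on many platforms this EEPROM is reached from the BMC rather than the host, and the bus number used here is a placeholder. A two-byte memory offset is written first because the device is 16KB.

```python
# Hedged sketch: dump the first 32 bytes of the FRU EEPROM (I2C address 0x50).
# Assumes a Linux host exposing the bus via i2c-dev and the `smbus2` package;
# the bus number below is a placeholder for whichever segment reaches the card.
from smbus2 import SMBus, i2c_msg

I2C_BUS = 5          # placeholder bus number
FRU_ADDR = 0x50      # FRU EEPROM address from the list above (0x58 is reserved)

with SMBus(I2C_BUS) as bus:
    # 16KB EEPROMs use a two-byte memory offset: write the offset, then read.
    set_offset = i2c_msg.write(FRU_ADDR, [0x00, 0x00])
    read_data = i2c_msg.read(FRU_ADDR, 32)
    bus.i2c_rdwr(set_offset, read_data)
    print(" ".join(f"{b:02x}" for b in list(read_data)))
```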

Overlay Networks

In order to better scale their networks, datacenter operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-7 effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol.
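As a concrete, hypothetical example of the kind of overlay these engines handle, the sketch below creates a VXLAN tunnel endpoint on Linux with iproute2. The interface name, VNI, and addresses are placeholders, and whether encapsulation/de-capsulation is actually offloaded depends on the driver and firmware configuration in use.

```python
# Hedged sketch: create a VXLAN tunnel endpoint with iproute2. Names, VNI and
# addresses are placeholders; encap/decap is offloaded to the NIC when the
# driver and firmware support it for this configuration.
import subprocess

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

UPLINK = "enp8s0f0"        # hypothetical ConnectX-7 uplink
VNI = 100                  # example VXLAN network identifier
REMOTE = "192.0.2.2"       # example remote VTEP (documentation address range)

run("ip", "link", "add", "vxlan100", "type", "vxlan",
    "id", str(VNI), "dev", UPLINK, "remote", REMOTE, "dstport", "4789")
run("ip", "link", "set", "vxlan100", "up")

# Optional: list which tunnel-related offloads the uplink advertises.
run("ethtool", "-k", UPLINK)
```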

Quality of Service (QoS)

Support for port-based Quality of Service enabling various application requirements for latency and SLA.

Hardware-based I/O Virtualization

ConnectX-7 provides dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server.

Storage Acceleration

A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage:

  • RDMA for high-performance storage access

  • NVMe over Fabric offloads for the target machine

  • NVMe over TCP acceleration
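For example, a host can consume an NVMe over Fabrics target over RDMA with the standard nvme-cli tooling. The sketch below is illustrative only: the subsystem NQN, target address, and service ID are placeholders, and the target-side offloads mentioned above are configured on the target, which is not shown here.

```python
# Hedged sketch: connect to an NVMe over Fabrics target over RDMA using
# nvme-cli. The NQN, address and service ID are illustrative placeholders.
import subprocess

TARGET_NQN = "nqn.2023-01.io.example:subsys0"  # placeholder subsystem NQN
TARGET_ADDR = "192.0.2.10"                     # placeholder target IP
TARGET_SVC = "4420"                            # conventional NVMe-oF port

subprocess.run([
    "nvme", "connect",
    "-t", "rdma",          # RDMA transport (RoCE / InfiniBand)
    "-n", TARGET_NQN,
    "-a", TARGET_ADDR,
    "-s", TARGET_SVC,
], check=True)

# The new namespaces show up as regular block devices:
subprocess.run(["nvme", "list"], check=True)
```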

SR-IOV

ConnectX-7 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VM) within the server.
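As an illustration of how SR-IOV is typically consumed on Linux, the sketch below enables a few virtual functions through the standard sysfs interface. The PCI address and VF count are placeholders, and SR-IOV must already be enabled in the adapter firmware and platform BIOS.

```python
# Hedged sketch: enable SR-IOV virtual functions via the standard Linux sysfs
# interface. The PCI address and VF count are illustrative placeholders.
from pathlib import Path

PF_PCI_ADDR = "0000:08:00.0"   # hypothetical ConnectX-7 physical function
NUM_VFS = 4                    # number of virtual functions to expose

dev_dir = Path(f"/sys/bus/pci/devices/{PF_PCI_ADDR}")

print("max supported VFs:", (dev_dir / "sriov_totalvfs").read_text().strip())

sriov = dev_dir / "sriov_numvfs"
sriov.write_text("0")            # reset any existing VFs first
sriov.write_text(str(NUM_VFS))   # then create the requested VFs
```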

High-Performance Accelerations

  • Collective operations offloads

  • Vector collective operations offloads

  • MPI tag matching

  • MPI_Alltoall offloads

  • Rendezvous protocol offload

RDMA Message Rate

330-370 million messages per second.

Secure Boot

The secure boot process assures booting of authentic firmware/software that is intended to run on ConnectX-7. This is achieved with cryptographic primitives based on asymmetric cryptography. ConnectX-7 supports several cryptographic functions in its hardware Root-of-Trust (RoT), whose key is stored in on-chip fuses.

Secure Firmware Update

The secure firmware update feature enables a device to verify digital signatures of new firmware binaries, ensuring that only officially approved versions can be installed from the host, the network, or a Board Management Controller (BMC). The firmware of devices with "secure firmware update" functionality (secure FW) restricts access to specific commands and registers that can be used to modify the firmware binary image on the flash, as well as commands that can jeopardize security in general.

For further information, refer to the MFT User Manual.
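The MFT User Manual is the authoritative reference for the update workflow. Purely as an illustration, the sketch below queries the device and then burns a signed image with the MFT flint tool; the MST device path and image file name are placeholders. On secure-FW devices, the burn is rejected by the device unless the image signature verifies.

```python
# Hedged sketch: query and update firmware with the MFT `flint` tool. The MST
# device path and image file are placeholders; see the MFT User Manual for the
# supported workflow. On secure-FW devices, unapproved images are rejected by
# the device itself.
import subprocess

MST_DEVICE = "/dev/mst/mt4129_pciconf0"   # placeholder MST device path
FW_IMAGE = "fw-ConnectX7.signed.bin"      # placeholder signed firmware image

# Show the currently installed firmware and device identity.
subprocess.run(["flint", "-d", MST_DEVICE, "query"], check=True)

# Burn the new (signed) image; signature verification happens on the device.
subprocess.run(["flint", "-d", MST_DEVICE, "-i", FW_IMAGE, "burn"], check=True)
```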

Advanced storage capabilities

Block-level encryption and checksum offloads.

Host Management

ConnectX-7 maintains support for host manageability through a BMC. The adapter can be connected to a BMC using MCTP over SMBus or MCTP over PCIe protocols, as with a standard NVIDIA PCIe stand-up adapter card. To configure the adapter for the specific manageability solution in use by the server, please contact NVIDIA Support.

  • Protocols: PLDM, NCSI

  • Transport layer: RBT, MCTP over SMBus, and MCTP over PCIe

  • Physical layer: SMBus 2.0 / I2C interface for device control and configuration, PCIe

  • PLDM for Monitor and Control DSP0248

  • PLDM for Firmware Update DSP0267

  • IEEE 1149.6

  • Secured FW update

  • FW Recovery

  • NIC reset

  • Monitoring and control

  • Network port settings

  • Boot setting

RDMA and RDMA over Converged Ethernet (RoCE)

ConnectX-7, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low latency and high performance over InfiniBand and Ethernet networks. Leveraging Data Center Bridging (DCB) capabilities as well as ConnectX-7 advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
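To make the InfiniBand-versus-RoCE distinction concrete, the sketch below walks the standard Linux RDMA sysfs tree and prints, for each device port, whether its link layer is InfiniBand or Ethernet (RoCE). Device names such as mlx5_0 are whatever the driver registers on a given host; this is a read-only diagnostic sketch, not an NVIDIA tool.

```python
# Sketch: list RDMA devices and report, per port, whether the link layer is
# InfiniBand or Ethernet (i.e. RoCE). Uses the standard Linux RDMA sysfs tree.
from pathlib import Path

IB_SYSFS = Path("/sys/class/infiniband")

devices = sorted(IB_SYSFS.iterdir()) if IB_SYSFS.exists() else []
for dev in devices:
    for port in sorted((dev / "ports").iterdir()):
        link_layer = (port / "link_layer").read_text().strip()
        state = (port / "state").read_text().strip()
        rate = (port / "rate").read_text().strip()
        print(f"{dev.name} port {port.name}: {link_layer}, {state}, {rate}")
```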

NVIDIA PeerDirect™

PeerDirect™ communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-7 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.

CPU Offload

Adapter offload functionality reduces CPU overhead, leaving more CPU cycles available for computation tasks:

  • Flexible match-action flow tables

  • Open vSwitch (OVS) offload using ASAP2® (see the sketch after this list)

  • Tunneling encapsulation/decapsulation
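Following up on the OVS offload item above, the sketch below shows one common way this is wired up on Linux: the embedded switch is put into switchdev mode with devlink, and OVS hardware offload is enabled. The PCI address is a placeholder, and the exact prerequisites (driver, firmware, and OVS versions) are outside the scope of this sketch.

```python
# Hedged sketch: enable kernel OVS hardware offload by switching the embedded
# switch (eswitch) to "switchdev" mode and turning on OVS hw-offload.
# The PCI address is an illustrative placeholder.
import subprocess

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

PF_PCI_ADDR = "0000:08:00.0"   # hypothetical ConnectX-7 physical function

# 1. Move the embedded switch to switchdev mode.
run("devlink", "dev", "eswitch", "set", f"pci/{PF_PCI_ADDR}", "mode", "switchdev")

# 2. Tell Open vSwitch to offload datapath flows to the NIC
#    (an OVS restart may be required for this to take effect).
run("ovs-vsctl", "set", "Open_vSwitch", ".", "other_config:hw-offload=true")

# 3. Confirm the settings took effect.
run("devlink", "dev", "eswitch", "show", f"pci/{PF_PCI_ADDR}")
run("ovs-vsctl", "get", "Open_vSwitch", ".", "other_config")
```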

NVIDIA Multi-Host

NVIDIA® Multi-Host technology enables next-generation Cloud, Web 2.0, and high-performance data centers to design and build new scale-out heterogeneous compute and storage racks with direct connectivity between multiple hosts and the centralized network controller. This enables direct data access with the lowest latency, significantly improving densities and maximizing data transfer rates. For more information, please visit NVIDIA Multi-Host Solutions.

© Copyright 2023, NVIDIA. Last updated on Dec 5, 2023.