Introduction
The NVIDIA® ConnectX®-8 SuperNIC™ is optimized to supercharge hyperscale AI computing workloads. With support for both InfiniBand and Ethernet networking at up to 800 gigabits per second (Gb/s), the ConnectX-8 SuperNIC delivers high-speed, efficient network connectivity, significantly enhancing system performance for AI factories and cloud data center environments.
ConnectX-8 SuperNICs are offered in two form factors, each in several configurations: stand-up PCIe cards and Open Compute Project (OCP) Spec 3.0 cards. This user manual covers the OCP 3.0 cards; for the low-profile PCIe stand-up cards, please refer to the ConnectX-8 SuperNIC User Manual.
| Item | Description |
|---|---|
| PCI Express Slot | In PCIe x16 configuration: PCIe Gen6 @ 64 GT/s through the x16 edge connector |
| System Power Supply | Refer to Specifications |
| Operating System | |
| Connectivity | |
| Category | Qty | Item |
|---|---|---|
| Cards | 1 | ConnectX-8 SuperNIC for OCP 3.0 |
| Accessories | 1 | Thumbscrew (Pull Tab) or Internal Lock bracket |
Make sure to use a PCIe slot capable of supplying the required power and airflow to the ConnectX-8 SuperNICs as stated in the Specifications chapter.
This section describes hardware features and capabilities. Please refer to the relevant driver and firmware release notes for feature availability.
| Feature | Description |
|---|---|
| PCI Express (PCIe) | |
| InfiniBand Architecture Specification v1.7 compliant | ConnectX-8 delivers low latency, high bandwidth, and computing efficiency for high-performance computing (HPC), artificial intelligence (AI), and hyperscale cloud data center applications. ConnectX-8 is InfiniBand Architecture Specification v1.7 compliant. |
| Up to 400 Gigabit Ethernet | ConnectX-8 SuperNICs comply with the following IEEE 802.3 standards: 400GbE / 200GbE / 100GbE / 25GbE / 10GbE |
| Memory Components | |
| Overlay Networks | In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-8 effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol (see the VXLAN header sketch after this table). |
| Quality of Service (QoS) | Support for port-based Quality of Service, enabling various application requirements for latency and SLA. |
| Hardware-based I/O Virtualization | ConnectX-8 provides dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server. |
| SR-IOV | ConnectX-8 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server (see the VF provisioning sketch after this table). |
| High-Performance Accelerations | |
| Secure Boot | The secure boot process assures that only authentic firmware/software intended to run on ConnectX-8 is booted. This is achieved with cryptographic primitives based on asymmetric cryptography. ConnectX-8 supports several cryptographic functions in its hardware Root-of-Trust (RoT), whose key is stored in on-chip fuses (see the signature verification sketch after this table). |
| Secure Firmware Update | The secure firmware update feature enables a device to verify digital signatures of new firmware binaries to ensure that only officially approved versions can be installed from the host, the network, or a Board Management Controller (BMC). The firmware of devices with "secure firmware update" functionality (secure FW) restricts access to specific commands and registers that can be used to modify the firmware binary image on the flash, as well as commands that can jeopardize security in general. For further information, refer to the MFT User Manual. |
| Host Management | ConnectX-8 technology maintains support for host manageability through a BMC. The ConnectX-8 adapter can be connected to a BMC using MCTP over SMBus or MCTP over PCIe protocols, as if it were a standard NVIDIA PCIe stand-up SuperNIC. To configure the adapter for the specific manageability solution in use by the server, please contact NVIDIA Support. |
| RDMA and RDMA over Converged Ethernet (RoCE) | ConnectX-8, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low-latency, high-performance networking over InfiniBand and Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as ConnectX-8 advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks (see the verbs sketch after this table). |
| NVIDIA PeerDirect™ | PeerDirect™ communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-8 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes. |
| CPU Offload | Adapter functionality that reduces CPU overhead, leaving more CPU cycles available for computation tasks. |
| Cryptography Accelerations | ConnectX-8 supports IPsec, MACsec, and PSP cryptography acceleration. ConnectX-8 SuperNIC hardware-based accelerations offload the crypto operations and free up the CPU, reducing latency and enabling scalable crypto solutions. |
| NVIDIA Multi-Host | NVIDIA® Multi-Host technology enables next-generation cloud, Web 2.0, and high-performance data centers to design and build new scale-out heterogeneous compute and storage racks with direct connectivity between multiple hosts and the centralized network controller. This enables direct data access with the lowest latency, significantly improving densities and maximizing data transfer rates. For more information, please visit NVIDIA Multi-Host Solutions. |
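To make the overlay-network offload concrete, the following minimal C sketch builds the 8-byte VXLAN header defined in RFC 7348, the per-packet work that the ConnectX-8 engines perform in hardware instead of the host CPU. This is not NVIDIA code; the function name `vxlan_encap` is illustrative.

```c
/* Minimal sketch of VXLAN encapsulation (RFC 7348). Field layout follows
 * the RFC; this only illustrates what the hardware offload adds/strips. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>

/* 8-byte VXLAN header: flags byte (I bit), reserved bits, 24-bit VNI. */
struct vxlan_hdr {
    uint32_t flags_reserved;  /* bit 0x08 of the first byte = valid-VNI flag */
    uint32_t vni_reserved;    /* VNI in the upper 24 bits */
};

static void vxlan_encap(uint8_t *out, const uint8_t *inner_frame,
                        size_t len, uint32_t vni)
{
    struct vxlan_hdr hdr;
    hdr.flags_reserved = htonl(0x08000000);     /* I flag set, rest reserved */
    hdr.vni_reserved   = htonl(vni << 8);       /* VNI in bits 31..8 */
    memcpy(out, &hdr, sizeof hdr);              /* prepend the VXLAN header */
    memcpy(out + sizeof hdr, inner_frame, len); /* then the inner Ethernet frame */
}

int main(void)
{
    uint8_t inner[64] = {0};  /* placeholder inner Ethernet frame */
    uint8_t packet[sizeof(struct vxlan_hdr) + sizeof inner];
    vxlan_encap(packet, inner, sizeof inner, 5001); /* VNI 5001 */
    printf("encapsulated %zu bytes\n", sizeof packet);
    return 0;
}
```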
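For SR-IOV, virtual functions (VFs) are typically provisioned through the standard Linux sysfs interface rather than a vendor-specific API. The sketch below is a generic Linux example, not ConnectX-8-specific; the interface name `eth2` and the VF count of 4 are assumptions to be replaced with your own values, and root privileges are required.

```c
/* Illustrative sketch: provisioning SR-IOV VFs via the Linux sysfs
 * interface. Substitute the ConnectX-8 netdev name on your system. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Hypothetical interface name; check `ls /sys/class/net` on your host. */
    const char *path = "/sys/class/net/eth2/device/sriov_numvfs";
    FILE *f = fopen(path, "w");
    if (!f) {
        perror("open sriov_numvfs");
        return EXIT_FAILURE;
    }
    /* Writing N creates N VFs; write 0 first to tear down existing VFs
     * before requesting a different nonzero count. */
    if (fprintf(f, "4\n") < 0)
        perror("write sriov_numvfs");
    fclose(f);
    return 0;
}
```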
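The secure-boot signature check itself happens inside the device's hardware RoT against the fused key. Purely to illustrate the underlying asymmetric-cryptography principle, here is a conceptual host-side sketch using OpenSSL's EVP API; it does not reflect NVIDIA's actual implementation, and `verify_image` is an illustrative name.

```c
/* Conceptual sketch of asymmetric signature verification, the principle
 * behind secure boot. ConnectX-8 performs this in its hardware RoT. */
#include <openssl/evp.h>
#include <openssl/pem.h>
#include <stdio.h>

/* Returns 1 if `sig` is a valid SHA-256 signature of `msg` under the
 * public key stored in the PEM file `pubkey_pem`. */
int verify_image(const char *pubkey_pem,
                 const unsigned char *msg, size_t msg_len,
                 const unsigned char *sig, size_t sig_len)
{
    FILE *f = fopen(pubkey_pem, "r");
    if (!f) return 0;
    EVP_PKEY *pkey = PEM_read_PUBKEY(f, NULL, NULL, NULL);
    fclose(f);
    if (!pkey) return 0;

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    int ok = ctx
          && EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, pkey) == 1
          && EVP_DigestVerify(ctx, sig, sig_len, msg, msg_len) == 1;

    EVP_MD_CTX_free(ctx);
    EVP_PKEY_free(pkey);
    return ok;  /* boot proceeds only if the image verifies */
}
```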
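As a taste of how applications consume RDMA on InfiniBand/RoCE adapters, the following minimal sketch uses the open-source libibverbs API (generic verbs, not ConnectX-8-specific): it opens the first RDMA device, allocates a protection domain, and registers a buffer for zero-copy access, which are the first steps of any verbs application.

```c
/* Minimal RDMA (verbs) sketch. Link with -libverbs. Error handling is
 * trimmed; a real application would also create CQs and QPs. */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return EXIT_FAILURE;
    }
    printf("using device: %s\n", ibv_get_device_name(devs[0]));

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) {
        fprintf(stderr, "failed to open device\n");
        return EXIT_FAILURE;
    }
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register a buffer the HCA may read/write directly (zero-copy RDMA). */
    char *buf = calloc(1, 4096);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    printf("registered MR: lkey=0x%x rkey=0x%x\n", mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```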