About This Manual
This User Manual describes NVIDIA® ConnectX®-5 and ConnectX®-5 Ex VPI adapter cards for Open Compute Project (OCP) Spec 2.0. It describes the board's interfaces and specifications, the software and firmware required to operate the board, and relevant documentation.
Ordering Part Numbers
The table below provides the ordering part numbers (OPN) for the available ConnectX-5 VPI adapter cards for OCP Spec 2.0.
| IC Model | OPN | Marketing Description |
|---|---|---|
| ConnectX-5 | MCX545A-ECAN | ConnectX®-5 VPI network interface card for OCP, with host management, EDR IB (100Gb/s) and 100GbE, single-port QSFP28, PCIe Gen3.0 x16, no bracket |
| ConnectX-5 | MCX545B-ECAN | ConnectX®-5 VPI network interface card for OCP, with host management, EDR IB (100Gb/s) and 100GbE, single-port QSFP28, PCIe Gen3.0 x16, no bracket, Type-1 heat sink |
| ConnectX-5 | MCX545M-ECAN | ConnectX®-5 VPI network interface card for OCP with Multi-Host, with host management, EDR IB (100Gb/s) and 100GbE, single-port QSFP28, PCIe Gen3.0 x16, no bracket |
| ConnectX-5 Ex | MCX546A-EDAN | ConnectX®-5 Ex VPI network interface card for OCP, with host management, EDR IB (100Gb/s) and 100GbE, dual-port belly-to-belly QSFP28, PCIe Gen4.0 x16, no bracket |
Intended Audience
This manual is intended for the installer and user of these cards. It assumes basic familiarity with InfiniBand and Ethernet network and architecture specifications.
Technical Support
Customers who purchased NVIDIA products directly from NVIDIA are invited to contact us through the following methods:
Customers who purchased NVIDIA Global Support Services should see their contract for details regarding Technical Support.
Customers who purchased NVIDIA products through an NVIDIA-approved reseller should first seek assistance through their reseller.
Related Documentation
| Document | Description |
|---|---|
| MLNX_OFED for Linux User Manual and Release Notes | User Manual describing OFED features, performance, InfiniBand diagnostics, tools content, and configuration. See MLNX_OFED for Linux Documentation. |
| WinOF-2 for Windows User Manual and Release Notes | User Manual describing WinOF-2 features, performance, Ethernet diagnostics, tools content, and configuration. See WinOF-2 for Windows Documentation. |
| NVIDIA VMware for Ethernet User Manual | User Manual and Release Notes describing the various components of the NVIDIA ConnectX® NATIVE ESXi stack. See VMware® ESXi Drivers Documentation. |
| NVIDIA Firmware Utility (mlxup) User Manual and Release Notes | NVIDIA firmware update and query utility used to update the firmware. Refer to Firmware Utility (mlxup) Documentation. |
| NVIDIA Firmware Tools (MFT) User Manual | User Manual describing the set of MFT firmware management tools for a single node. See MFT User Manual. |
| InfiniBand Architecture Specification Release 1.2.1, Vol. 2 - Release 1.3 | InfiniBand Specifications |
| IEEE Std 802.3 Specification | IEEE Ethernet Specifications |
| PCI Express Specifications | Industry-standard PCI Express Base and Card Electromechanical Specifications. Refer to PCI-SIG Specifications. |
| Open Compute Project 2.0 Specification | https://www.opencompute.org/ |
| LinkX Interconnect Solutions | NVIDIA LinkX InfiniBand cables and transceivers are designed to maximize the performance of High-Performance Computing networks, which require high-bandwidth, low-latency connections between compute nodes and switch nodes. NVIDIA offers one of the industry's broadest portfolios of QDR/FDR10 (40Gb/s), FDR (56Gb/s), EDR/HDR100 (100Gb/s), and HDR (200Gb/s) cables, including Direct Attach Copper cables (DACs), copper splitter cables, Active Optical Cables (AOCs), and transceivers, in a wide range of lengths from 0.5m to 10km. In addition to meeting IBTA standards, NVIDIA tests every product in an end-to-end environment, ensuring a bit error rate of less than 1E-15. Read more at LinkX Cables and Transceivers. |
When discussing memory sizes, MB and MBytes are used in this document to mean size in megabytes. The use of Mb or Mbits (lowercase b) indicates size in megabits. In this document, PCIe is used to mean PCI Express.
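As a quick illustration of the bytes-versus-bits convention above, the following minimal Python sketch (the helper name is hypothetical, not part of any NVIDIA tool) shows the factor-of-eight relationship between MBytes and Mbits:

```python
def mbytes_to_mbits(mbytes: float) -> float:
    """Convert a size in MBytes (MB) to Mbits (Mb): 1 Byte = 8 bits."""
    return mbytes * 8.0

# 16 MBytes of data corresponds to 128 Mbits on the wire.
print(mbytes_to_mbits(16))  # 128.0
```

This is why link speeds quoted in Gb/s (bits) are eight times larger than the equivalent data rate expressed in bytes per second.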