About This Manual
This User Manual describes NVIDIA® ConnectX®-6 InfiniBand/Ethernet adapter cards for Open Compute Project (OCP) Spec 3.0. It details the board's interfaces and specifications, the software and firmware required to operate the board, and related documentation.
Ordering Part Numbers
The table below provides the ordering part numbers (OPN) for the available ConnectX-6 InfiniBand/Ethernet adapter cards for OCP Spec 3.0.
| NVIDIA SKU | Legacy OPN | Marketing Description | Lifecycle |
|---|---|---|---|
| 900-9X657-0058-SB0 | MCX653436A-HDAB | ConnectX®-6 InfiniBand/Ethernet adapter card, 200Gb/s (HDR IB and 200GbE) for OCP 3.0, with host management, Dual-port QSFP56, PCIe 4.0 x16, Thumbscrew Bracket | Mass Production |
EOL'ed (End of Life) Ordering Part Numbers

| NVIDIA SKU | Legacy OPN | Marketing Description |
|---|---|---|
| 900-9X657-0018-MI0 | MCX653435M-HDAI | ConnectX®-6 InfiniBand/Ethernet adapter card, 200Gb/s (HDR IB and 200GbE) for OCP 3.0, with host management, Single-port QSFP56, PCIe 4.0 x16, Internal Lock |
| 900-9X657-0058-SI2 | MCX653436A-HDAI | ConnectX®-6 InfiniBand/Ethernet adapter card, 200Gb/s (HDR IB and 200GbE) for OCP 3.0, with host management, Dual-port QSFP56, PCIe 4.0 x16, Internal Lock |
| 900-9X657-0018-SI0 | MCX653435A-HDAI | ConnectX®-6 InfiniBand/Ethernet adapter card, 200Gb/s (HDR IB and 200GbE) for OCP 3.0, with host management, Single-port QSFP56, PCIe 4.0 x16, Internal Lock |
| 900-9X657-0016-SI0 | MCX653435A-EDAI | ConnectX®-6 InfiniBand/Ethernet adapter card, 100Gb/s (HDR100, EDR IB and 100GbE) for OCP 3.0, with host management, Single-port QSFP56, PCIe 3.0/4.0 x16, Internal Lock |
| 900-9X657-0018-SE0 | MCX653435A-HDAE | ConnectX®-6 InfiniBand/Ethernet adapter card, 200Gb/s (HDR IB and 200GbE) for OCP 3.0, with host management, Single-port QSFP56, PCIe 4.0 x16, Ejector Latch |
Intended Audience
This manual is intended for the installer and user of these cards. It assumes basic familiarity with InfiniBand and Ethernet networks and their architecture specifications.
Technical Support
Customers who purchased NVIDIA products directly from NVIDIA are invited to contact us through the following methods:
URL: https://www.nvidia.com > Support
E-mail: enterprisesupport@nvidia.com
Customers who purchased NVIDIA Global Support Services should see their contract for details regarding Technical Support.
Customers who purchased NVIDIA products through an NVIDIA-approved reseller should first seek assistance through their reseller.
Related Documentation
| Document | Description |
|---|---|
| MLNX_OFED for Linux User Manual and Release Notes | User Manual describing OFED features, performance, InfiniBand diagnostics, tools content, and configuration. See MLNX_OFED for Linux Documentation. |
| WinOF-2 for Windows User Manual and Release Notes | User Manual describing WinOF-2 features, performance, Ethernet diagnostics, tools content, and configuration. See WinOF-2 for Windows Documentation. |
| NVIDIA VMware for Ethernet User Manual | User Manual and release notes describing the various components of the NVIDIA ConnectX® NATIVE ESXi stack. See VMware® ESXi Drivers Documentation. |
| NVIDIA Firmware Utility (mlxup) User Manual and Release Notes | Utility for querying device information and updating NVIDIA firmware. Refer to Firmware Utility (mlxup) Documentation. |
| NVIDIA Firmware Tools (MFT) User Manual | User Manual describing the set of MFT firmware management tools for a single node. See MFT User Manual. |
| InfiniBand Architecture Specification, Release 1.2.1, Vol 2 - Release 1.3 | InfiniBand specification published by the InfiniBand Trade Association (IBTA). |
| IEEE Std 802.3 Specification | IEEE Ethernet specification. |
| PCI Express Specifications | Industry-standard PCI Express Base and Card Electromechanical Specifications. Refer to PCI-SIG Specifications. |
| LinkX Interconnect Solutions | LinkX InfiniBand cables and transceivers are designed to maximize the performance of high-performance computing networks, which require high-bandwidth, low-latency connections between compute nodes and switch nodes. NVIDIA offers one of the industry’s broadest portfolios of QDR/FDR10 (40Gb/s), FDR (56Gb/s), EDR/HDR100 (100Gb/s), HDR (200Gb/s), and NDR (400Gb/s) cables, including Direct Attach Copper cables (DACs), copper splitter cables, Active Optical Cables (AOCs), and transceivers, in a wide range of lengths from 0.5m to 10km. In addition to meeting IBTA standards, NVIDIA tests every product in an end-to-end environment, ensuring a bit error rate of less than 1E-15. Read more at LinkX Cables and Transceivers. |
| Open Compute Project 3.0 Specifications | Open Compute Project NIC 3.0 design specification. |
Document Conventions
When discussing memory sizes, MB and MBytes are used in this document to mean size in megabytes. The use of Mb or Mbits (small b) indicates size in megabits. In this document, PCIe is used to mean PCI Express.
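To make the bits-versus-bytes convention concrete, the minimal Python sketch below converts the 200Gb/s line rate quoted for these cards into gigabytes per second. The script and its function name are illustrative only; the one assumption it encodes is the standard relationship of 1 byte = 8 bits.

```python
# Illustrative sketch of the Mb/Gb (bits) vs. MB/GB (bytes) convention used
# in this manual. Assumes only the standard 1 byte = 8 bits relationship;
# the function name and example values are hypothetical.

BITS_PER_BYTE = 8

def gigabits_to_gigabytes(gigabits: float) -> float:
    """Convert a size or rate in gigabits to gigabytes."""
    return gigabits / BITS_PER_BYTE

if __name__ == "__main__":
    line_rate_gbps = 200  # Gb/s, as in "200Gb/s (HDR IB and 200GbE)"
    print(f"{line_rate_gbps} Gb/s = {gigabits_to_gigabytes(line_rate_gbps)} GB/s")
    # Output: 200 Gb/s = 25.0 GB/s
```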