About This Manual
This is the User Guide for NVIDIA® Ethernet adapter cards based on the ConnectX®-5 integrated circuit device for Open Compute Project (OCP) Spec 3.0. These adapter cards provide a high-performing, low-latency, and flexible interconnect solution for PCI Express Gen 3.0/4.0 servers used in Enterprise Data Centers and High-Performance Computing environments.
Ordering Part Numbers
The table below provides the ordering part numbers (OPN) for the available ConnectX-5 Ex and ConnectX-5 Ethernet adapter cards for OCP Spec 3.0.
| IC in Use | OPN | Marketing Description |
|---|---|---|
| ConnectX®-5 | MCX562A-ACAI | ConnectX®-5 EN network interface card for OCP 3.0, with host management, 25GbE Dual-port SFP28, PCIe 3.0 x16, Internal Lock bracket |
| ConnectX®-5 | MCX562A-ACAB | ConnectX®-5 EN network interface card for OCP 3.0, with host management, 25GbE Dual-port SFP28, PCIe 3.0 x16, Thumbscrew (Pull Tab) bracket |
| ConnectX®-5 | MCX566A-CCAI | ConnectX®-5 EN network interface card for OCP 3.0, with host management, 100GbE Dual-port QSFP28, PCIe 3.0 x16, Internal Lock bracket |
| ConnectX®-5 | MCX565A-CCAB | ConnectX®-5 EN network interface card for OCP 3.0, with host management, 100GbE Single-port QSFP28, PCIe 3.0 x16, Thumbscrew (Pull Tab) bracket |
| ConnectX®-5 Ex | MCX566M-GDAI | ConnectX®-5 Ex EN network interface card for OCP 3.0, with Multi-Host or Socket Direct and host management, 50GbE Dual-port QSFP28, PCIe 4.0/3.0 x16, Internal Lock bracket |
| ConnectX®-5 Ex | MCX565M-CDAI | ConnectX®-5 Ex EN network interface card for OCP 3.0, with Multi-Host or Socket Direct and host management, 100GbE Single-port QSFP28, PCIe 4.0 x16, Internal Lock bracket |
| ConnectX®-5 Ex | MCX565M-CDAB | ConnectX®-5 Ex EN network interface card for OCP 3.0, with Multi-Host or Socket Direct and host management, 100GbE Single-port QSFP28, PCIe 4.0 x16, Thumbscrew (Pull Tab) bracket |
| ConnectX®-5 Ex | MCX566A-CDAI | ConnectX®-5 Ex EN network interface card for OCP 3.0, with host management, 100GbE Dual-port QSFP28, PCIe 4.0 x16, Internal Lock bracket |
| ConnectX®-5 Ex | MCX566A-CDAB | ConnectX®-5 Ex EN network interface card for OCP 3.0, with host management, 100GbE Dual-port QSFP28, PCIe 4.0 x16, Thumbscrew (Pull Tab) bracket |
Intended Audience
This manual is intended for the installer and user of these cards. It assumes basic familiarity with Ethernet networks and architecture specifications.
Technical Support
Customers who purchased NVIDIA products directly from NVIDIA are invited to contact us through the following methods:
E-mail: Networking-support@nvidia.com
Tel: +1 408.916.0055; Toll-free (USA only): 86-Mellanox (866-355-2669)
Customers who purchased NVIDIA Global Support Services should refer to their contract for details regarding technical support.
Customers who purchased NVIDIA products through an NVIDIA-approved reseller should first seek assistance through their reseller.
Related Documentation
| Document | Description |
|---|---|
| NVIDIA OFED for Linux User Manual and Release Notes | User Manual describing OFED features, performance, diagnostic tools, content, and configuration. See NVIDIA OFED for Linux Documentation. |
| WinOF-2 for Windows User Manual and Release Notes | User Manual describing WinOF-2 features, performance, Ethernet diagnostic tools, content, and configuration. See WinOF-2 for Windows Documentation. |
| NVIDIA VMware for Ethernet User Manual and Release Notes | User Manual describing the various components of the NVIDIA ConnectX® NATIVE ESXi stack. See http://www.nvidia.com > Products > Software > Ethernet Drivers > VMware Driver. |
| NVIDIA Firmware Utility (mlxup) User Manual and Release Notes | NVIDIA firmware update and query utility used to update the firmware. See http://www.nvidia.com > Products > Software > Firmware Tools > mlxup Firmware Utility. |
| NVIDIA Firmware Tools (MFT) User Manual | User Manual describing the set of MFT firmware management tools for a single node. See MFT User Manual. |
| IEEE Std 802.3 Specification | IEEE Ethernet specification, available at http://standards.ieee.org/ |
| PCI Express Specifications | Industry-standard PCI Express Base and Card Electromechanical Specifications, available at https://pcisig.com/specifications |
| Open Compute Project 3.0 Specification | https://www.opencompute.org/ |
| NVIDIA LinkX Interconnect Solutions | NVIDIA LinkX InfiniBand cables and transceivers are designed to maximize the performance of High-Performance Computing networks, which require high-bandwidth, low-latency connections between compute nodes and switch nodes. NVIDIA offers one of the industry's broadest portfolios of QDR/FDR10 (40Gb/s), FDR (56Gb/s), EDR/HDR100 (100Gb/s), and HDR (200Gb/s) cables, including Direct Attach Copper cables (DACs), copper splitter cables, Active Optical Cables (AOCs), and transceivers in a wide range of lengths from 0.5m to 10km. In addition to meeting IBTA standards, NVIDIA tests every product in an end-to-end environment, ensuring a Bit Error Rate of less than 1E-15. Read more at https://www.nvidia.com/products/interconnect/infiniband-overview.php |
Document Conventions
When discussing memory sizes, MB and MBytes are used in this document to mean size in megabytes. The use of Mb or Mbits (lowercase b) indicates size in megabits. In this document, PCIe is used to mean PCI Express.
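To make the megabyte/megabit distinction concrete, the following minimal Python sketch converts between the two units using 1 byte = 8 bits. It is illustrative only; the function names and values are examples and are not part of this manual.

```python
# Illustrative only: convert between megabytes (MB) and megabits (Mb),
# assuming 1 byte = 8 bits. Example values are hypothetical.

def mbytes_to_mbits(mbytes: float) -> float:
    """Convert a size in megabytes (MB) to megabits (Mb)."""
    return mbytes * 8

def mbits_to_mbytes(mbits: float) -> float:
    """Convert a size in megabits (Mb) to megabytes (MB)."""
    return mbits / 8

if __name__ == "__main__":
    print(mbytes_to_mbits(100))  # 100 MB -> 800 Mb
    print(mbits_to_mbytes(800))  # 800 Mb -> 100 MB
```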