About This Manual
This User Manual describes the NVIDIA® Mellanox® ConnectX®-5 VPI Socket Direct adapter card for dual-socket servers. The kit includes an adapter card with dual QSFP28 ports and a PCI Express x8 edge connector, an auxiliary PCIe connection card with a PCI Express x8 edge connector, and a Slim-Line SAS cable that connects the two cards. The User Manual describes the adapter card's interfaces and specifications, the software and firmware required to operate the card, and relevant documentation.
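Because the Socket Direct design splits the card's PCIe lanes across two x8 interfaces, the adapter is typically enumerated as two PCIe devices, one behind each CPU socket. The following is a minimal sketch, assuming a Linux host with lspci installed, for checking that both interfaces enumerate (15b3 is the Mellanox PCI vendor ID; the sysfs paths are standard Linux):

```shell
# List all Mellanox PCIe devices (vendor ID 15b3); a Socket Direct card
# should appear as two ConnectX-5 entries, one per PCIe x8 interface.
lspci -d 15b3: -nn

# Print the NUMA node behind each Mellanox device to verify that the two
# interfaces attach to different CPU sockets.
for dev in /sys/bus/pci/devices/*; do
    if [ "$(cat "$dev/vendor")" = "0x15b3" ]; then
        echo "$dev -> NUMA node $(cat "$dev/numa_node")"
    fi
done
```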
Ordering Part Numbers
The table below provides the ordering part numbers (OPN) for the available ConnectX-5 VPI Socket Direct adapter cards.
|OPN|Description|
|---|---|
|MCX556M-ECAT-S25|ConnectX®-5 VPI adapter card with Socket Direct supporting dual-socket servers, EDR IB (100Gb/s) and 100GbE, dual-port QSFP28, 2x PCIe 3.0 x8, 25cm harness, tall bracket|
|MCX556M-ECAT-S35A|ConnectX®-5 VPI adapter card with Socket Direct supporting dual-socket servers, EDR IB (100Gb/s) and 100GbE, dual-port QSFP28, 2x PCIe 3.0 x8, 35cm harness, active auxiliary PCIe connection card, tall bracket|
Intended Audience
This manual is intended for the installer and user of these cards. It assumes basic familiarity with InfiniBand and Ethernet networks and architecture specifications.
Technical Support
Customers who purchased Mellanox products directly from Mellanox are invited to contact us through the following methods:
Customers who purchased Mellanox M-1 Global Support Services should see their contract for details regarding Technical Support.
Customers who purchased Mellanox products through a Mellanox-approved reseller should first seek assistance through their reseller.
Related Documentation
|Document|Description|
|---|---|
|Mellanox OFED for Linux User Manual and Release Notes|User Manual describing OFED features, performance, InfiniBand diagnostics, tools content, and configuration. See Mellanox OFED for Linux Documentation.|
|WinOF-2 for Windows User Manual and Release Notes|User Manual describing WinOF-2 features, performance, Ethernet diagnostics, tools content, and configuration. See WinOF-2 for Windows Documentation.|
|Mellanox VMware for Ethernet User Manual and Release Notes|User Manual describing the components of the Mellanox ConnectX® NATIVE ESXi stack. See http://www.mellanox.com > Products > Software > Ethernet Drivers > VMware Driver.|
|Mellanox Firmware Utility (mlxup) User Manual and Release Notes|Mellanox firmware update and query utility used to update the firmware; see the example after this table. Also see http://www.mellanox.com > Products > Software > Firmware Tools > mlxup Firmware Utility.|
|Mellanox Firmware Tools (MFT) User Manual|User Manual describing the set of MFT firmware management tools for a single node. See MFT User Manual.|
|IEEE Std 802.3 Specification|IEEE Ethernet specification, available at http://standards.ieee.org/|
|PCI Express Specifications|Industry-standard PCI Express Base and Card Electromechanical Specifications, available at https://pcisig.com/specifications|
|Mellanox LinkX Interconnect Solutions|Mellanox LinkX InfiniBand cables and transceivers are designed to maximize the performance of High-Performance Computing networks, which require high-bandwidth, low-latency connections between compute nodes and switch nodes. Mellanox offers one of the industry's broadest portfolios of QDR/FDR10 (40Gb/s), FDR (56Gb/s), EDR/HDR100 (100Gb/s), and HDR (200Gb/s) cables, including Direct Attach Copper cables (DACs), copper splitter cables, Active Optical Cables (AOCs), and transceivers, in lengths from 0.5m to 10km. In addition to meeting IBTA standards, Mellanox tests every product in an end-to-end environment, ensuring a Bit Error Rate of less than 1E-15. Read more at https://www.mellanox.com/products/interconnect/infiniband-overview.php|
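As a quick, non-authoritative illustration of the firmware tools referenced above, the sketch below queries and updates the adapter firmware with mlxup, then performs the equivalent query with MFT's flint. The MST device name is illustrative; use the name reported by mst status on your system, and consult the mlxup and MFT user manuals for authoritative usage.

```shell
# Query all detected Mellanox adapters and any available firmware updates.
mlxup --query

# Update firmware on all supported devices (mlxup prompts before burning).
mlxup

# Equivalent query with the MFT tools: start the mst driver, list the
# MST devices, then query the adapter firmware with flint.
# The device path below is illustrative; use the one shown by "mst status".
mst start
mst status
flint -d /dev/mst/mt4119_pciconf0 query
```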
Document Conventions
When discussing memory sizes, MB and MBytes are used in this document to mean size in megabytes; Mb and Mbits (lowercase b) indicate size in megabits. For example, a 100Gb/s link corresponds to 12.5 GBytes/s. IB is used in this document to mean InfiniBand, and PCIe to mean PCI Express.