This is the User Guide for the NVIDIA External Multi-Host Adapter Kit for OCP 3.0, which provides connectivity for up to 4 standard servers.
Caution: Powering up the MiniSAS auxiliary cards before the OCP 3.0 multi-host board is powered up may prevent the card from powering up.
NVIDIA Multi-Host technology enables connecting up to 4 compute/storage hosts to a single OCP 3.0 multi-host adapter. The deployment of multi-host platforms significantly reduces the overall number of data-center network connections, enabling greater infrastructure efficiency and simplicity, with CAPEX and OPEX cost savings. NVIDIA Multi-Host technology is built into ConnectX SmartNICs and BlueField DPUs and is ideal for high-performance, compute-intensive data-center environments delivering cloud, web 2.0, and telecom services.
The externally connected multi-host solution leverages the same multi-host technology built into the network adapter ASIC. External Mini-SAS harnesses provide the connectivity between each host and the network card. The solution is ideally positioned for highly dense 4-node 2U chassis systems, enabling 25Gb/s connectivity per connected node, as illustrated in the figure below.
The externally connected multi-host solution offers a superior price/performance ratio for both new and existing scale-out computing fabrics, while dramatically reducing the total cost of ownership in the following areas:
Network adapters: a single network adapter serves up to 4 nodes.
Network switch ports: a single switch port serves up to 4 nodes.
Active cabling: a single cable serves up to 4 nodes.
Rack space, power, and cooling, owing to the overall reduction in network connections in the data center.
The figure below is for illustration purposes only. The OCP 3.0 card is not included in the package.
Minimum of 120W external ATX power supply (not included in the package contents)
Critical requirement: at least one external fan (not included in the package contents).
OpenFabrics Enterprise Distribution (OFED)
Prior to unpacking your product, it is important to make sure your system meets all the requirements listed above for a smooth installation. Be sure to inspect each piece of equipment shipped in the packing box. If anything is missing or damaged, contact your reseller.
1x Multi-Host OCP 3.0 adapter
4x MiniSAS Auxiliary boards
4x MiniSAS HD cables
USB Type-A to Mini-USB Type-B cable
(a) The I/O panel is included in the package and can be used when inserting the product into a server. Currently, this option is not supported.
OCP 3.0 Multi-Host Kit
Network Connector Type
Supports any OCP 3.0 adapter card
Multi-Host Board: 6.69in x 6.69in (170mm x 170mm)
For more detailed information, see Specifications.
This section describes hardware features and capabilities. Please refer to the relevant driver and/or firmware release notes for feature availability.
For virtualization scenarios, please contact NVIDIA support.
PCI Express (PCIe)
Uses the following PCIe interfaces:
NC-SI over RMII
A Network Controller Sideband Interface (NC-SI) is a combination of logical and physical paths that interconnect the Management Controller and Network Controller(s) for the purpose of transferring management communication traffic among them. NC-SI includes commands and associated responses which the Management Controller uses to control the status and operation of the Network Controller(s). NC-SI also includes a mechanism for transporting management traffic and asynchronous notifications. Connect an Ethernet cable from the management console to connector J104 of the OCP 3.0 host adapter.
Wake-on-LAN (WoL) is a feature that allows a network administrator to remotely power on a server (or all 4 connected servers together), or have it awakened by a network message. The feature is applicable only when the OCP 3.0 card supports it.
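As an illustration of the WoL mechanism, the following sketch builds and sends a standard magic packet (6 bytes of 0xFF followed by the target MAC address repeated 16 times). The MAC address, broadcast address, and UDP port shown are placeholders, not values specific to this kit.

```python
import socket

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by
    the target MAC address repeated 16 times, over UDP broadcast."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

# Hypothetical MAC address of the adapter port to be woken.
send_wol("00:11:22:33:44:55")
```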
In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. The OCP Multi-Host card effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol.
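To illustrate what these offload engines encapsulate and de-capsulate, the sketch below constructs the 8-byte VXLAN header defined in RFC 7348 and prepends it to an inner Ethernet frame. This is a protocol illustration only; the VNI and payload are arbitrary values, and the card performs this work in hardware.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): flags byte 0x08 (valid VNI),
    3 reserved bytes, 24-bit VNI, 1 reserved byte."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", 0x08 << 24, vni << 8)

def encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the VXLAN header to an inner Ethernet frame; on the wire this
    payload is then carried inside an outer UDP/IP/Ethernet packet."""
    return vxlan_header(vni) + inner_frame

# Arbitrary VNI and a dummy 14-byte inner Ethernet header, for illustration only.
print(encapsulate(5001, b"\x00" * 14).hex())
```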
RDMA and RDMA over Converged Ethernet (RoCE)
The OCP Multi-Host card, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low-latency and high performance over InfiniBand and Ethernet networks. Leveraging data center bridging (DCB) capabilities, as well as the OCP Multi-Host card's advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
PeerDirect™ communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. The OCP Multi-Host card's advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
Adapter functionality that reduces CPU overhead, leaving more CPU cycles available for computation tasks.
Open vSwitch (OVS) offload using ASAP2
• Flexible match-action flow tables
Quality of Service (QoS)
Support for port-based Quality of Service, enabling various application requirements for latency and SLA.
A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage InfiniBand RDMA for high-performance storage access.
The OCP Multi-Host card's SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server.
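As a minimal sketch of how SR-IOV virtual functions are typically requested from a Linux host, the example below writes to the standard sysfs sriov_numvfs attribute. The interface name and VF count are assumptions for illustration; consult the driver documentation for the supported procedure on your system.

```python
from pathlib import Path

def enable_vfs(iface: str, num_vfs: int) -> None:
    """Request SR-IOV virtual functions via the standard Linux sysfs attribute
    (requires root, and SR-IOV enabled in the adapter firmware/system BIOS)."""
    sysfs = Path(f"/sys/class/net/{iface}/device/sriov_numvfs")
    # The kernel rejects changing a non-zero VF count directly, so reset first.
    sysfs.write_text("0")
    sysfs.write_text(str(num_vfs))

# "ens1f0np0" is a hypothetical interface name; substitute your own.
enable_vfs("ens1f0np0", 4)
```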
• Tag Matching and Rendezvous Offloads