Created on Jun 4, 2019
Updated on Sep 13, 2021
Introduction
This How To describes how to install and test the NVIDIA ConnectX-5/ConnectX-6 NATIVE ESXi Driver for VMware vSphere ESXi 6.7/7.0 on a single host, and how to perform the basic initial configuration steps needed to enable the driver using the ESXi command-line interface (CLI).
References
- NVIDIA OFED (MLNX_OFED) ESXi Ethernet Driver
- How-to: Firmware update for NVIDIA ConnectX-5/6 adapter on VMware ESXi 6.5 and above
- VMware vSphere Documentation
- vSphere Command-Line Interface Concepts and Examples
Hardware and Software Requirements
1. A server platform with an adapter card based on one of the NVIDIA ConnectX®-5 or ConnectX®-6 PCI Express devices (Ethernet, VPI).
2. Installer Privileges: The installation requires administrator privileges on the target machine.
3. Device ID: For the latest list of device IDs, please visit the NVIDIA website. A quick way to read the device IDs on the host itself is shown in the example below.
4. Supported NICs / Firmware: The recommended firmware versions can be found here.
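If you are not sure which device IDs your installed adapters report, they can be read directly on the ESXi host with the standard esxcli hardware inventory command (shown here only as an illustrative check; the relevant entries are the ones whose vendor name is Mellanox Technologies):

~ esxcli hardware pci list

Locate the Mellanox Technologies entries in the output and note their vendor and device ID fields.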
Driver Installation (CLI)
1. Enable SSH access to the ESXi server.
2. Log in to the ESXi vSphere Command-Line Interface with root permissions.
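If SSH is not enabled yet, it can be turned on from the DCUI or the vSphere Client; one possible way to do it from the host console is with vim-cmd (the host address below is a placeholder):

~ vim-cmd hostsvc/enable_ssh
~ vim-cmd hostsvc/start_ssh

Then log in from a management workstation as root:

ssh root@<esxi-host>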
3. Verify that the host is equipped with an NVIDIA ConnectX (Mellanox) adapter.
~ lspci | grep Mellanox
0000:39:00.0 Ethernet controller: Mellanox Technologies ConnectX-6 Dx EN NIC; 100GbE; dual-port QSFP56; PCIe4.0 x16; (MCX623106AC-CDA) [vmnic0]
0000:39:00.1 Ethernet controller: Mellanox Technologies ConnectX-6 Dx EN NIC; 100GbE; dual-port QSFP56; PCIe4.0 x16; (MCX623106AC-CDA) [vmnic1]
4. Verify which driver version is currently installed.
ESXi 7.0
~ esxcli software vib list | grep nmlx
nmlx5-core   4.21.71.1-1OEM.702.0.0.17473468   MEL   VMwareCertified   2021-06-07
nmlx5-rdma   4.21.71.1-1OEM.702.0.0.17473468   MEL   VMwareCertified   2021-06-07
ESXi 6.7
~ esxcli software vib list | grep nmlx
nmlx5-core   4.17.9.12-1vmw.670.0.0.8169922   VMW   VMwareCertified   2018-04-25
nmlx5-rdma   4.17.9.12-1vmw.670.0.0.8169922   VMW   VMwareCertified   2018-04-25
5. Download the latest NVIDIA (Mellanox) native ESXi driver bundle from here.
6. Unzip the binary image (.zip file).
7. Use SCP or any other file transfer method to copy the driver to the required ESXi host.
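For example, steps 6 and 7 can be performed from a Linux or macOS management workstation; the archive name and host address below are placeholders:

unzip <downloaded_driver_archive>.zip
scp Mellanox-nmlx5_<version>.zip root@<esxi-host>:/tmp/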
8. Place the ESXi host in Maintenance Mode.
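Maintenance Mode can be entered from the vSphere Client or directly from the ESXi shell, for example:

~ esxcli system maintenanceMode set --enable true
~ esxcli system maintenanceMode get

The second command should report Enabled before you proceed.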
9. Install the driver.
~ esxcli software vib install -d <path>/<bundle_file>
ESXi 7.0
~ esxcli software vib install -d /tmp/Mellanox-nmlx5_4.21.71.101-1OEM.702.0.0.17630552.zip
ESXi 6.7
~ esxcli software vib install -d /tmp/Mellanox-nmlx5_4.17.71.1-1OEM.670.0.0.8169922.zip
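Optionally, the installation can be previewed first with the --dry-run option of esxcli software vib install, which reports what would be installed without changing the host (the path below is a placeholder):

~ esxcli software vib install -d /tmp/<bundle_file> --dry-run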
10. Reboot the server.
~ reboot
11. Verify that the driver modules were installed and loaded successfully.
~ esxcli software vib list | grep nmlx
nmlx5-core   4.21.71.101-1OEM.702.0.0.17630552   MEL   VMwareCertified   2021-09-30
nmlx5-rdma   4.21.71.101-1OEM.702.0.0.17630552   MEL   VMwareCertified   2021-09-30
~ esxcli system module list | grep nmlx5
nmlx5_core   true   true
nmlx5_rdma   true   true
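Additional details about the loaded modules (version, enablement, parameters) can be queried as well, for example:

~ esxcli system module get -m nmlx5_core
~ esxcli system module parameters list -m nmlx5_core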
12. Check physical network interface status.
~ esxcli network nic list
Name    PCI Device    Driver      Admin Status  Link Status  Speed   Duplex  MAC Address        MTU   Description
------  ------------  ----------  ------------  -----------  ------  ------  -----------------  ----  -----------
vmnic0  0000:39:00.0  nmlx5_core  Up            Up           100000  Full    0c:42:a1:24:04:ea  1500  Mellanox Technologies ConnectX-6 Dx EN NIC; 100GbE; dual-port QSFP56; PCIe4.0 x16; (MCX623106AC-CDA)
vmnic1  0000:39:00.1  nmlx5_core  Up            Down         0       Half    0c:42:a1:24:04:eb  1500  Mellanox Technologies ConnectX-6 Dx EN NIC; 100GbE; dual-port QSFP56; PCIe4.0 x16; (MCX623106AC-CDA)
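To confirm the driver and firmware versions that an uplink is actually running, the per-NIC details can also be queried (vmnic0 is used here only as an example):

~ esxcli network nic get -n vmnic0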
13. Take the ESXi host out of Maintenance Mode.
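For example, from the ESXi shell:

~ esxcli system maintenanceMode set --enable false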
Done!
Authors
Boris Kovalev
Boris Kovalev has worked for the past several years as a Solutions Architect, focusing on NVIDIA Networking/Mellanox technology, and is responsible for complex machine learning, Big Data, and advanced VMware-based cloud research and design. Boris previously spent more than 20 years as a senior consultant and solutions architect at multiple companies, most recently at VMware. He has written multiple reference designs covering VMware, machine learning, Kubernetes, and container solutions, which are available at the Mellanox Documents website.