
Introduction

This post describes how to configure SR-IOV (Ethernet) for NVIDIA ConnectX-5/6 adapters using the ESXi 6.7/7.0 native driver.

Note: Setting up a VM is out of the scope of this post.

Single Root IO Virtualization (SR-IOV)

Single Root IO Virtualization (SR-IOV) is a technology that allows a physical PCIe device to present itself multiple times on the PCIe bus. This technology enables multiple virtual instances of the device, each with separate resources. NVIDIA ConnectX-4/ConnectX-5 adapter cards can expose up to 128 virtual instances, called Virtual Functions (VFs). These VFs can then be provisioned separately. Each VF can be seen as an additional device connected to the Physical Function (PF), with which it shares resources.

SR-IOV is commonly used in conjunction with an SR-IOV-enabled hypervisor to provide virtual machines with direct hardware access to network resources, thereby increasing performance.

Overview

SR-IOV configuration includes the following steps:

  1. Enable Virtualization (SR-IOV) in the BIOS (prerequisite).
  2. Enable SR-IOV in the firmware.
  3. Enable SR-IOV in the nmlx5_core native driver.
  4. Map the Virtual Machine (VM) to the relevant port via SR-IOV.

Hardware and Software Requirements

1. A server platform with an adapter card based on an NVIDIA ConnectX-5 or ConnectX-6 device.

2. Installer privileges: the installation requires administrator privileges on the target machine.

3. Device ID: for the latest list of device IDs, please visit the NVIDIA website.

Prerequisites

To set up an SR-IOV environment, the following is required:

  1. Make sure that SR-IOV is enabled in the BIOS of the specific server. Each server has different BIOS configuration options for virtualization. See HowTo Set Dell PowerEdge R730 BIOS parameters to support SR-IOV for a BIOS configuration example.
  2. Install NVIDIA Firmware Tools (MFT) on the ESXi server. Refer to How-to: Install NVIDIA Firmware Tools (MFT) on VMware ESXi 6.7/7.0.
  3. Make sure to have the latest nmlx5_core native driver on the hypervisor. Refer to NVIDIA ConnectX® Ethernet Driver for VMware® ESXi Server and How-to: NVIDIA ConnectX driver upgrade on VMware ESXi 6.5 and above.
  4. Make sure to have a supported firmware version. Refer to NVIDIA ConnectX® Ethernet Driver for VMware® ESXi Server and How-to: Firmware update for NVIDIA ConnectX-5/6 adapter on VMware ESXi 6.5 and above. (A quick verification sketch follows this list.)
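
To sanity-check that the MFT tools and the native driver are actually installed on the host, something like the following can be used (a minimal sketch; exact VIB names may vary by release):

ESXi Console:
# esxcli software vib list | grep -iE "mft|nmst|nmlx5"
# esxcli network nic list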

Setting Up SR-IOV

Enable SR-IOV in the BIOS

Note: Each server has different BIOS configuration options for virtualization. For further information, please refer to the appropriate BIOS User Manual.

1. Enable "SR-IOV" in the system BIOS.

2. Enable "Intel Virtualization Technology".

Enable SR-IOV on the Firmware

1. Enable SSH access to the ESXi server.
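
If the host's local console is available, SSH can also be enabled from there. A minimal sketch using vim-cmd (alternatively, use the vSphere client under Host > Actions > Services):

ESXi Console:
# vim-cmd hostsvc/enable_ssh
# vim-cmd hostsvc/start_ssh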

2. Log into ESXi vSphere Command-Line Interface with root permissions.

3. Run MFT and check the status.

ESXi Console:
# /opt/mellanox/bin/mst start
Module mst is already loaded


# /opt/mellanox/bin/mst status


MST devices: 
------------
mt4125_pciconf7

4. Query the status of the device.

ESXi Console:
# /opt/mellanox/bin/mlxconfig -d mt4125_pciconf7 q

Device #1:
----------

Device type:    ConnectX6DX
Name:           MCX623106AC-CDA_Ax
Description:    ConnectX-6 Dx EN adapter card; 100GbE; Dual-port QSFP56; PCIe 4.0 x16; Crypto and Secure Boot
Device:         mt4125_pciconf7

Configurations:             Next Boot


...
NUM_OF_VFS					0
SRIOV_EN 					False(0)
...

5. Enable SR-IOV and set the desired number of Virtual Functions (VFs).

  • SRIOV_EN=1
  • NUM_OF_VFS=16 ; this example enables 16 VFs.
ESXi Console:
# /opt/mellanox/bin/mlxconfig -d mt4125_pciconf7 s SRIOV_EN=1 NUM_OF_VFS=16

Device #1:
----------

Device type:    ConnectX6DX
Name:           MCX623106AC-CDA_Ax
Description:    ConnectX-6 Dx EN adapter card; 100GbE; Dual-port QSFP56; PCIe 4.0 x16; Crypto and Secure Boot
Device:         mt4125_pciconf7

Configurations:                              Next Boot       New
         SRIOV_EN                            False(0)        True(1)
         NUM_OF_VFS                          0               16

 Apply new Configuration? (y/n) [n] : y
Applying... Done!
-I- Please reboot machine to load new configurations.
Note: mlxconfig must be performed separately for each PCI device (adapter). In contrast, the driver configuration is per module, which means it applies to all adapters installed on the server.
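
On a host with several adapters, the firmware step therefore has to be repeated per MST device. A minimal sketch of a loop over all devices (assumes ESXi's busybox shell; mlxconfig's -y flag answers the confirmation prompt automatically):

ESXi Console:
# for dev in $(/opt/mellanox/bin/mst status | grep pciconf); do
>   /opt/mellanox/bin/mlxconfig -d $dev -y s SRIOV_EN=1 NUM_OF_VFS=16
> done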

6. Enter Maintenance Mode on the ESXi host.

7. Reboot the server.
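
Both steps can also be performed from the shell (a minimal sketch; make sure no running VMs depend on the host first):

ESXi Console:
# esxcli system maintenanceMode set --enable true
# reboot

After the host comes back up, exit maintenance mode with esxcli system maintenanceMode set --enable false.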

Note: At this point, the VFs are not visible via lspci. They will appear only after SR-IOV is enabled in the driver.
ESXi Console:
# lspci -d | grep Mellanox
0000:39:00.0 Ethernet controller: Mellanox Technologies ConnectX-6 Dx EN NIC; 100GbE; dual-port QSFP56; PCIe4.0 x16; (MCX623106AC-CDA) [vmnic0]
0000:39:00.1 Ethernet controller: Mellanox Technologies ConnectX-6 Dx EN NIC; 100GbE; dual-port QSFP56; PCIe4.0 x16; (MCX623106AC-CDA) [vmnic1]
...

8. Exit Maintenance Mode on the ESXi host.

9. Check if SR-IOV is enabled in the firmware.

ESXi Console:
# /opt/mellanox/bin/mlxconfig -d mt4125_pciconf7 q


Device #1:
----------

Device type:    ConnectX6DX
Name:           MCX623106AC-CDA_Ax
Description:    ConnectX-6 Dx EN adapter card; 100GbE; Dual-port QSFP56; PCIe 4.0 x16; Crypto and Secure Boot
Device:         mt4125_pciconf7


Configurations:             Current

...
NUM_OF_VFS                  16
SRIOV_EN                    True(1)
...
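
To filter the two relevant values directly instead of scanning the full output (a minimal sketch):

ESXi Console:
# /opt/mellanox/bin/mlxconfig -d mt4125_pciconf7 q | grep -E "SRIOV_EN|NUM_OF_VFS"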

Enable SR-IOV on the Driver

1. Get the module parameter list as follows:

ESXi Console:
# esxcli system module parameters list -m nmlx5_core

Name     Type           Value   Description
...
max_vfs  array of uint          Number of PCI VFs to initialize
         Values : Array of 'uint' of range 0-128, May be limited by device, 0 - disabled
         Default: 0
...

2. Enable SR-IOV in the driver and set the max_vfs module parameter.

ESXi Console:
# esxcli system module parameters set -m nmlx5_core -p "max_vfs=16,16"

The comma-separated max_vfs values are per Physical Function, so "16,16" enables 16 VFs on each of the two ports. Or, if you have configured PFC:

ESXi Console:
# esxcli system module parameters set -m nmlx5_core -p "pfctx=0x08 pfcrx=0x08 max_vfs=16,16"
Note 1: The number of VFs configured in the firmware (NUM_OF_VFS) must be greater than or equal to the number configured on the driver (max_vfs). In this example, both are set to 16.

Note 2: mlxconfig must be performed separately for each PCI device (adapter). In contrast, the driver configuration is per module, which means it applies to all adapters installed on the server.

Note 3: The max_vfs setting is persistent across reboots.
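
To confirm that the parameter was stored (a minimal sketch):

ESXi Console:
# esxcli system module parameters list -m nmlx5_core | grep max_vfs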

3. Enter Maintenance Mode on the ESXi host.

4. Reboot the server.

5. Exit Maintenance Mode on the ESXi host.

6. Check the PCI bus and verify that you see the VFs (with the same number of VFs on each port).

ESXi Console:
# lspci -d | grep Mellanox

0000:39:00.0 Ethernet controller: Mellanox Technologies ConnectX-6 Dx EN NIC; 100GbE; dual-port QSFP56; PCIe4.0 x16; (MCX623106AC-CDA) [vmnic0]
0000:39:00.1 Ethernet controller: Mellanox Technologies ConnectX-6 Dx EN NIC; 100GbE; dual-port QSFP56; PCIe4.0 x16; (MCX623106AC-CDA) [vmnic1]
0000:39:00.2 Ethernet controller: Mellanox Technologies ConnectX Family nmlx5Gen Virtual Function [PF_0.57.0_VF_0]
0000:39:00.3 Ethernet controller: Mellanox Technologies ConnectX Family nmlx5Gen Virtual Function [PF_0.57.0_VF_1]
0000:39:00.4 Ethernet controller: Mellanox Technologies ConnectX Family nmlx5Gen Virtual Function [PF_0.57.0_VF_2]
0000:39:00.5 Ethernet controller: Mellanox Technologies ConnectX Family nmlx5Gen Virtual Function [PF_0.57.0_VF_3]
0000:39:00.6 Ethernet controller: Mellanox Technologies ConnectX Family nmlx5Gen Virtual Function [PF_0.57.0_VF_4]
0000:39:00.7 Ethernet controller: Mellanox Technologies ConnectX Family nmlx5Gen Virtual Function [PF_0.57.0_VF_5]
0000:39:01.0 Ethernet controller: Mellanox Technologies ConnectX Family nmlx5Gen Virtual Function [PF_0.57.0_VF_6]
0000:39:01.1 Ethernet controller: Mellanox Technologies ConnectX Family nmlx5Gen Virtual Function [PF_0.57.0_VF_7]
0000:39:01.2 Ethernet controller: Mellanox Technologies ConnectX Family nmlx5Gen Virtual Function [PF_0.57.0_VF_8]
0000:39:01.3 Ethernet controller: Mellanox Technologies ConnectX Family nmlx5Gen Virtual Function [PF_0.57.0_VF_9]
0000:39:01.4 Ethernet controller: Mellanox Technologies ConnectX Family nmlx5Gen Virtual Function [PF_0.57.0_VF_10]
0000:39:01.5 Ethernet controller: Mellanox Technologies ConnectX Family nmlx5Gen Virtual Function [PF_0.57.0_VF_11]
0000:39:01.6 Ethernet controller: Mellanox Technologies ConnectX Family nmlx5Gen Virtual Function [PF_0.57.0_VF_12]
0000:39:01.7 Ethernet controller: Mellanox Technologies ConnectX Family nmlx5Gen Virtual Function [PF_0.57.0_VF_13]
0000:39:02.0 Ethernet controller: Mellanox Technologies ConnectX Family nmlx5Gen Virtual Function [PF_0.57.0_VF_14]
0000:39:02.1 Ethernet controller: Mellanox Technologies ConnectX Family nmlx5Gen Virtual Function [PF_0.57.0_VF_15]

At this point, you can see the 16 VFs in addition to the Physical Functions (PFs).
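
The host's SR-IOV state can also be inspected per uplink. A minimal sketch using the esxcli sriovnic namespace (vmnic0 is this example's uplink name):

ESXi Console:
# esxcli network sriovnic list
# esxcli network sriovnic vf list -n vmnic0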

Add Network Adapter to the VM in SR-IOV Mode

Note 1: Make sure the virtual machine hardware version is 10 or above, and upgrade it if needed via the Compatibility section (otherwise SR-IOV will not appear as an option in the network adapter selection).

Note 2: Before you start, power off the VM.

After you enable the Virtual Functions on the host, each of them becomes available as a PCI device.

To assign Virtual Function to a Virtual Machine in the vSphere Web Client:

1. Locate the Virtual Machine in the vSphere Web Client.

  1. Select a data center, folder, cluster, resource pool, or host and click the Related Objects tab.
  2. Click Virtual Machines and select the virtual machine from the list.

2. Power off the Virtual Machine.

3. Select the VM and go to "Edit Settings".

4. Click on Add Network adapter.

5. Under Adapter Type select the SR-IOV passthrough connectivity option.

6. Check the Reserve all guest memory (All locked) checkbox.

I/O memory management unit (IOMMU) must reach all Virtual Machine memory so that the passthrough device can access the memory by using direct memory access (DMA).

7. Expand the New Network section and connect the Virtual Machine to the SRIOV net port group from the combo box at the bottom of the screen.

The virtual NIC does not use this port group for data traffic. The port group is used to extract networking properties, for example VLAN tagging, which are applied to the data traffic.
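
If the SR-IOV port group does not exist yet, it can be created on a standard vSwitch from the shell. A minimal sketch (the names "SRIOV", "vSwitch1", and VLAN ID 100 are placeholders for this example):

ESXi Console:
# esxcli network vswitch standard portgroup add -p SRIOV -v vSwitch1
# esxcli network vswitch standard portgroup set -p SRIOV --vlan-id 100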

MAC Address and MTU Considerations

Note 1: You can keep the automatically generated MAC address (the default), or change it manually.

Note 2: The hypervisor MTU should be greater than or equal to the guest VM's MTU; otherwise, packets may be dropped. You may enable "Set Guest OS MTU change" to allow changing the MTU from the guest. This step is applicable only if the feature is supported by the driver.
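
For example, to raise the MTU on the hypervisor side so that a jumbo-frame guest is not capped (a minimal sketch; "vSwitch1" and MTU 9000 are placeholders):

ESXi Console:
# esxcli network vswitch standard set -v vSwitch1 -m 9000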

8. Power on the VM.

9. Open the VM command line and make sure that the interface is connected (see the guest-side sketch below).

  • On the guest VM, install the NVIDIA driver for the guest OS (MLNX_OFED, WinOF, etc.).
  • Configure the IP address and check network connectivity.
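
A minimal guest-side sketch for a Linux VM (the interface name eth1 and the addresses are placeholders; your guest will differ):

Guest VM Console:
# lspci | grep -i mellanox
# ip link set eth1 up
# ip addr add 192.168.1.10/24 dev eth1
# ping 192.168.1.1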

Troubleshooting

1. The number of VFs configured in the firmware (NUM_OF_VFS) must be greater than or equal to the number configured on the driver (max_vfs). In this example, both are set to 16.

2. mlxconfig must be performed separately for each PCI device (adapter). In contrast, the driver configuration is per module, which means it applies to all adapters installed on the server.

3. Make sure the virtual machine hardware version is 10 or above, and upgrade it if needed via the Compatibility section (otherwise SR-IOV will not appear as an option in the network adapter selection).


Done!

Authors

Boris Kovalev

