Introduction
Mellanox ConnectX®-4/ConnectX-5 NATIVE ESXi is a software stack which operates across all Mellanox network adapter solutions supporting up to 100 Gb/s Ethernet (ETH) and 2.5 or 5.0 GT/s PCI Express 2.0 and 3.0 uplinks to servers.
The following sub-sections briefly describe the various components of the Mellanox ConnectX-4/ConnectX-5 NATIVE ESXi stack.
nmlx5 is the low-level driver implementation for the ConnectX-4/ConnectX-5 adapter cards designed by Mellanox Technologies. ConnectX-4/ConnectX-5 adapter cards can operate as an InfiniBand adapter, or as an Ethernet NIC. The ConnectX-4/ConnectX-5 NATIVE ESXi driver supports Ethernet NIC configurations exclusively.
Software Components
MLNX-NATIVE-ESX-ConnectX-4/ConnectX-5 contains the following software components:
Mellanox Host Channel Adapter Drivers
nmlx5_core (Ethernet): Handles Ethernet-specific functions and plugs into the ESXi uplink layer.
Module Parameters
To set nmlx5_core parameters:
esxcli system module parameters set -m nmlx5_core -p <parameter>=<value>
To show the values of the parameters:
esxcli system module parameters list -m <module name>
For the changes to take effect, reboot the host.
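For example, a minimal sketch of this workflow, assuming you want to turn on the driver's debug prints (the parameter value shown is illustrative):

esxcli system module parameters set -m nmlx5_core -p "enable_nmlx_debug=1"
esxcli system module parameters list -m nmlx5_core

Note that the -p option sets the module's parameter string as a whole, so any other parameters that should remain configured generally need to be included in the same quoted string. Reboot the host afterwards so the change takes effect.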
nmlx5_core Module Parameters
Name: DRSS
Description: Number of hardware queues for Default Queue (DEFQ) RSS. Note: This parameter replaces the previously used "drss" parameter, which is now obsolete.
Values: When this value is != 0, DEFQ RSS is enabled with one RSS uplink queue that manages the DRSS hardware queues (see the example following this table).

Name: ecn
Description: Enables the ECN feature.

Name: enable_nmlx_debug
Description: Enables debug prints for the core module.

Name: max_vfs
Description: A comma-separated list of integer values specifying the number of VFs to open on each port. For example, max_vfs = 1,1,2,2 opens a single VF per port on the first NIC and two VFs per port on the second NIC. The order of the NICs is determined by their PCI SBDF numbers. Note: VF creation is subject to system resource limitations.
Values: N = number of VFs to allocate on each port. Note: The number of values provided in the max_vfs array should not exceed the supported_num_ports module parameter value (see the example following this table).

Name: mst_recovery
Description: Enables recovery mode (only the NMST module is loaded).

Name: pfcrx
Description: Priority-based Flow Control policy on RX.
Values: An 8-bit bitmask in which each bit enables (1) or disables (0) PFC for the corresponding priority [0-7]. Note: The pfcrx and pfctx values must be identical (see the example following this table).

Name: pfctx
Description: Priority-based Flow Control policy on TX.
Values: An 8-bit bitmask in which each bit enables (1) or disables (0) PFC for the corresponding priority [0-7]. Note: The pfcrx and pfctx values must be identical.

Name: RSS
Description: Number of hardware queues for NetQ RSS. Note: This parameter replaces the previously used "rss" parameter, which is now obsolete.
Values: When this value is != 0, NetQ RSS is enabled with one RSS uplink queue that manages the RSS hardware queues.

Name: supported_num_ports
Description: Sets the maximum number of supported ports.
Values: 2-8. Default: 4. Note: Before installing new cards, you must increase the maximum number of supported ports to account for the additional ports.
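As an illustration of the DRSS and RSS parameters above, the following sketch enables DEFQ RSS and NetQ RSS with four hardware queues each; the queue counts are assumptions chosen for the example, not recommended values:

esxcli system module parameters set -m nmlx5_core -p "DRSS=4 RSS=4"
esxcli system module parameters list -m nmlx5_core

As with any module parameter change, a host reboot is required for the values to take effect.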
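The pfcrx/pfctx note can be illustrated with a small sketch that enables PFC on priority 3 only (bit 3 corresponds to the mask 0x08); the choice of priority 3 is an assumption for the example:

esxcli system module parameters set -m nmlx5_core -p "pfcrx=0x08 pfctx=0x08"

Both masks are given the same value in a single call, in line with the requirement that pfcrx and pfctx be identical.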
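For max_vfs and supported_num_ports, a sketch for a system with two dual-port NICs might look as follows; the VF counts are illustrative, and actual VF creation remains subject to system resource limitations:

esxcli system module parameters set -m nmlx5_core -p "max_vfs=2,2,4,4 supported_num_ports=4"

Here four values are passed to max_vfs (two ports on each of two NICs), so supported_num_ports is set to at least 4 in the same parameter string.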