
Mellanox ConnectX®-4/ConnectX®-5 NATIVE ESXi is a software stack which operates across all Mellanox network adapter solutions supporting up to 100Gb/s Ethernet (ETH) and 2.5 or 5.0 GT/s PCI Express 2.0 and 3.0 uplinks to servers.

The following sub-sections briefly describe the various components of the Mellanox ConnectX-4/ConnectX-5 NATIVE ESXi stack.

nmlx5 Driver

nmlx5 is the low-level driver implementation for the ConnectX-4/ConnectX-5 adapter cards designed by Mellanox Technologies. ConnectX-4/ConnectX-5 adapter cards can operate as an InfiniBand adapter or as an Ethernet NIC. The ConnectX-4/ConnectX-5 NATIVE ESXi driver supports Ethernet NIC configurations exclusively.
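To confirm that the nmlx5 driver is loaded and bound to the adapter, you can use standard ESXi Shell commands; the following is an illustrative check, not a required step:

esxcli system module list | grep nmlx5
esxcli network nic list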

Mellanox NATIVE ESXi Package

Software Components

MLNX-NATIVE-ESX-ConnectX-4/ConnectX-5 contains the following software components:

  • Mellanox Host Channel Adapter Drivers
    • nmlx5_core (Ethernet): Handles Ethernet-specific functions and plugs into the ESXi uplink layer

Module Parameters

To set nmlx5_core parameters: 

esxcli system module parameters set -m nmlx5_core -p <parameter>=<value>

To show the values of the parameters:

esxcli system module parameters list -m <module name>

For the changes to take effect, reboot the host.
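For example, the following sketch enables debug prints for the core module (the enable_nmlx_debug parameter is described in the table below), verifies the setting, and reboots the host:

esxcli system module parameters set -m nmlx5_core -p "enable_nmlx_debug=1"
esxcli system module parameters list -m nmlx5_core
reboot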

nmlx5_core Module Parameters 



DRSS

Number of hardware queues for Default Queue (DEFQ) RSS.

Note: This parameter replaces the previously used "drss" parameter, which is now obsolete.

Values:

  • 2-16
  • 0 - disabled

When the value is not 0, DEFQ RSS is enabled with one RSS uplink queue that manages the DRSS hardware queues.

Notes:

  • The value must be a power of 2.
  • The value must not exceed the number of CPU cores.
  • Setting the DRSS value to 16 sets the steering mode to device RSS.

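For example, to enable DEFQ RSS with 4 hardware queues (an illustrative value that satisfies the power-of-2 constraint):

esxcli system module parameters set -m nmlx5_core -p "DRSS=4"
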
ecn

Enables the ECN (Explicit Congestion Notification) feature.

Values:

  • 1 - enabled
  • 0 - disabled (Default)

enable_nmlx_debug

Enables debug prints for the core module.

Values:

  • 1 - enabled
  • 0 - disabled (Default)

max_vfs

An array of comma-separated integer values that specifies the number of VFs (Virtual Functions) to open on each port.

For example, max_vfs=1,1,2,2 opens a single VF per port on the first NIC and 2 VFs per port on the second NIC. The order of the NICs is determined by their PCI SBDF numbers.

Note: VF creation is subject to system resource limitations.

Values:

  • 0 - disabled (Default)
  • N - number of VFs to allocate on each port

Note: The number of values provided in the max_vfs array must not exceed the supported_num_ports module parameter value.
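For example, to apply the configuration described above (one VF per port on the first NIC, 2 VFs per port on the second):

esxcli system module parameters set -m nmlx5_core -p "max_vfs=1,1,2,2"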

mst_recovery

Enables recovery mode (only the NMST module is loaded).

Values:

  • 1 - enabled
  • 0 - disabled (Default)

pfcrx

Priority-based Flow Control (PFC) policy on RX.

Values:

  • 0-255
  • 0 - default

The value is an 8-bit bitmask, where each bit indicates a priority [0-7].

Bit values:

  • 1 - respect incoming PFC pause frames for the specified priority.
  • 0 - ignore incoming pause frames on the specified priority.

Note: The pfcrx and pfctx values must be identical.

pfctx

Priority-based Flow Control (PFC) policy on TX.

Values:

  • 0-255
  • 0 - default

The value is an 8-bit bitmask, where each bit indicates a priority [0-7].

Bit values:

  • 1 - generate pause frames according to the RX buffer threshold on the specified priority.
  • 0 - never generate pause frames on the specified priority.

Note: The pfcrx and pfctx values must be identical.
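For example, to enable PFC on priority 3 only (bit 3 set, i.e. a bitmask value of 8), keeping pfcrx and pfctx identical as required; the priority chosen here is illustrative:

esxcli system module parameters set -m nmlx5_core -p "pfcrx=8 pfctx=8"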

RSS

Number of hardware queues for NetQ RSS.

Note: This parameter replaces the previously used "rss" parameter, which is now obsolete.

Values:

  • 2-8
  • 0 - disabled

When the value is not 0, NetQ RSS is enabled with one RSS uplink queue that manages the RSS hardware queues.

Notes:

  • The value must be a power of 2.
  • The maximum value must be lower than the number of CPU cores.

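For example, to enable NetQ RSS with 4 hardware queues (an illustrative value that is a power of 2 and below the host's core count):

esxcli system module parameters set -m nmlx5_core -p "RSS=4"
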
supported_num_ports

Sets the maximum number of supported ports.

Values:

  • 2-8
  • 4 (Default)

Note: Before installing new cards, you must increase the maximum number of supported ports to include the additional new ports.
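For example, before installing cards that bring the host to 8 Mellanox ports (the count is illustrative):

esxcli system module parameters set -m nmlx5_core -p "supported_num_ports=8"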