NVIDIA Native Drivers for VMware ESXi Inbox Drivers Release Notes

Known Issues

The following is a list of general limitations and known issues of the various components of the Inbox Driver release.

Internal Ref.

Issue

3553625

Description: SR-IOV device passthrough with classic NIC mode is currently not supported.

Workaround: N/A

Keywords: SR-IOV, NIC mode

Adapter Cards / DPU: NVIDIA BlueField-3 / NVIDIA BlueField-2

Available in OS: ESXi 8.0 U3, ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0

Discovered in Version: 4.23.0.36

-

Description: TX packet rate drops when sending CoS (PCP/802.1p) traffic with different VLAN priorities to the same TX queue.

The following message might be shown:

NSX 4959 FABRIC [nsx@6876 comp="nsx-edge" subcomp="datapathd" s2comp="stats" tname="stats17" level="WARN" eventId="vmwNSXTxBufferStatus"] {"event_state":0,"event_external_reason":"TX Buffer overflow","event_src_comp_id":"808fc0bb-7bd0-4087-9560-c4186d44ed87","event_sources":{"interface_name":"fp-eth0","overflow_per":"19.177"}}

Workaround: To resolve the issue, apply the following module parameter configuration to enable the driver to map all priorities to the same traffic class.


[ESXi shell] esxcfg-module -i nmlx5_core
    trust_dscp_same_priority: uint
        Port policy to enable TRUST_DSCP and map all priorities to the same class.
        Values: 0 - DISABLED, 1 - ENABLED
        Default: 0
[ESXi shell] esxcfg-module -s 'trust_dscp_same_priority=1' nmlx5_core
[ESXi shell] reboot

The module parameter is available in the following nmlx5_core driver versions:

  • Starting from ESXi 8.0u2 inbox driver

  • ESXi 7.0u3 driver version 4.22.73.1006+
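
As a sanity check (not part of the official workaround), the configured nmlx5_core module options can be queried with the standard esxcfg-module query option; this is a minimal sketch, and the exact output format may vary by ESXi version:

[ESXi shell] esxcfg-module -g nmlx5_core

The options string should include trust_dscp_same_priority=1 if the setting was applied.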

Keywords: TX packets rate

Adapter Cards / DPU: ConnectX-4 Onwards HCAs

Available in OS: ESXi 8.0 U3, ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0, ESXi 7.0u3, ESXi 7.0u2, ESXi 7.0u1, ESXi 7.0

Discovered in Version: 4.23.0.66

-

Description: The nmlx5_core driver supports a maximum of 8 simultaneously attached uplinks on all ESXi versions up to 8.0u3.

Workaround: N/A

Keywords: Uplinks

Adapter Cards / DPU: ConnectX-4 Onwards HCAs

Available in OS: ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0, ESXi 7.0u3, ESXi 7.0u2, ESXi 7.0u1, ESXi 7.0

Discovered in Version: 4.23.0.66

3609728

Description: DPU mode is not supported in BlueField-3.

Workaround: N/A

Keywords: BlueField-3, DPU mode

Adapter Cards / DPU: NVIDIA BlueField-3 for VMware vSphere Distributed Services Engine

Available in OS: ESXi 8.0 U2P3

Discovered in Version: 4.23.0.66

3432688

Description: The PCI device with device ID 0xc2d1 (BlueField Auxiliary Comm Channel) is used as the communication channel between the DPU and Host and is essential for SmartNIC operation. Therefore, it must not be enabled for passthrough.

Workaround: N/A

Keywords: BlueField, communication channel, DPU, Host, passthrough

Adapter Cards / DPU: NVIDIA BlueField-2 for VMware vSphere Distributed Services Engine

Available in OS: ESXi 8.0 U2

Discovered in Version: 4.23.0.66

3588655

Description: LRO is not supported in UPT.

Workaround: N/A

Keywords: UPT, LRO, Performance

Adapter Cards / DPU: NVIDIA BlueField-2 for VMware vSphere Distributed Services Engine

Available in OS: ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0

Discovered in Version: 4.23.0.66

3343206

Description: Universal Pass Through (UPT) tunneling is supported only when using ESXi 8.0u2 with the following firmware and driver version combination.

Other combinations are not recommended and can lead to driver issues.

  • Firmware: xx.36.1010

  • Driver: 4.24.0.1-1vmw

Workaround: N/A

Keywords: UPT tunneling

Adapter Cards / DPU: ConnectX-4 Onwards HCAs

Available in OS: ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0

Discovered in Version: 4.23.0.66

3442918

Description: There is no VLAN traffic when a VLAN is set in the VM on a VF interface that is attached to the VM via PCI passthrough (not SR-IOV passthrough).

Workaround: N/A

Keywords: VLAN

Adapter Cards / DPU: ConnectX-4 Onwards HCAs

Available in OS: ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0, ESXi 7.0u3, ESXi 7.0u2, ESXi 7.0u1, ESXi 7.0

Discovered in Version: 4.23.0.66

3136502

Description: If the system is low on or out of memory, the driver may get stuck in an endless loop while trying to allocate memory.

Workaround: N/A

Keywords: System memory

Adapter Cards / DPU: ConnectX-6 Onwards HCAs

Available in OS: ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0, ESXi 7.0u3, ESXi 7.0u2, ESXi 7.0u1, ESXi 7.0

Discovered in Version: 4.23.0.66

3436579

Description: Performance degradation might be experienced when running with LRO enabled in the VM due to a vmxnet3 driver bug.

Workaround: To overcome the issue, perform the following:

  1. Update to an OS with kernel 6.3 or newer, which contains the vmxnet3 driver fix 3bced313b9a5 ("vmxnet3: use gro callback when UPT is enabled").

  2. Turn off LRO in the VM (see the guest-side example below).
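
For option 2, LRO can typically be disabled on the vmxnet3 interface inside a Linux guest with ethtool. This is a minimal sketch; the interface name eth0 is an assumption and offload naming may vary by distribution:

[VM guest shell] ethtool -K eth0 lro off
[VM guest shell] ethtool -k eth0 | grep large-receive-offload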

Keywords: LRO, performance, vmxnet3 driver

Adapter Cards / DPU: NVIDIA BlueField-2 for VMware vSphere Distributed Services Engine

Available in OS: ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0, ESXi 7.0u3, ESXi 7.0u2, ESXi 7.0u1, ESXi 7.0

Discovered in Version: 4.23.0.66

3390706

Description: The maximum number of VFs for a PF on ConnectX-4 and ConnectX-5 adapter cards is 126 although the OS reports that the device supports 127 VFs.

Workaround: Set the maximum number of VFs for a PF using the nmlx5_core module parameter max_vfs, or by using the ESXi management tools.
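
For reference, a minimal sketch of capping the VF count through the module parameter, following the same esxcfg-module convention used elsewhere in these notes (the value 126 is taken from the description above; on multi-port cards max_vfs may take a comma-separated per-port list):

[ESXi shell] esxcfg-module -s 'max_vfs=126' nmlx5_core
[ESXi shell] reboot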

Keywords: VF, PF, ConnectX-4, ConnectX-5

Adapter Cards / DPU: ConnectX-4, ConnectX-5

Available in OS: ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0, ESXi 7.0u3, ESXi 7.0u2, ESXi 7.0u1, ESXi 7.0

Discovered in Version: 4.23.0.66

3344305

Description: Enabling both UPT VFs and SR-IOV VFs on the same host results in an IOMMU fault.

Workaround: N/A

Keywords: UPT SR-IOV

Adapter Cards / DPU: NVIDIA BlueField-2 for VMware vSphere Distributed Services Engine

Available in OS: ESXi 8.0

Discovered in Version: 4.23.0.36

2678029

Description: Due to hardware limitations, Model 1 Level 2 and Model 2 for Enhanced Network Stack (ENS) mode in vSphere 8.0 are not supported on ConnectX-5 and ConnectX-6 adapter cards.

Workaround: Use ConnectX-6 Lx, ConnectX-6 Dx, or later cards, which support ENS Model 1 Level 2 and Model 2A.

Keywords: ENS, ConnectX-5/ConnectX-6, Model 1 Level 2 and Model 2A

Adapter Cards / DPU: ConnectX-4 Onwards HCAs

Available in OS: ESXi 7.0u3, ESXi 7.0u2

Discovered in Version: 4.23.0.36

-

Description: Geneve options length support is limited to 56B. Received packets with an options length greater than 56B are dropped.

Workaround: N/A

Keywords: Geneve

Adapter Cards / DPU: NVIDIA BlueField-2 for VMware vSphere Distributed Services Engine, ConnectX-4 Onwards HCAs

Available in OS: ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0, ESXi 7.0u3, ESXi 7.0u2, ESXi 7.0u1, ESXi 7.0

Discovered in Version: 4.23.0.36

-

Description: The hardware can offload only up to 256B of headers.

Workaround: N/A

Keywords: Hardware offload

Adapter Cards / DPU: NVIDIA BlueField-2 for VMware vSphere Distributed Services Engine, ConnectX-4 Onwards HCAs

Available in OS: ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0, ESXi 7.0u3, ESXi 7.0u2, ESXi 7.0u1, ESXi 7.0

Discovered in Version: 4.23.0.36

2204581

Description: A mismatch between the uplink and the VF MTU values may result in CQE with error.

Workaround: Align the uplink and the VF MTU values.
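
For example, one way to keep the values aligned is to set the same MTU on the vSwitch/uplink and inside the guest. This is a sketch only; vSwitch0, the MTU value of 9000, and the guest interface name eth0 are assumptions:

[ESXi shell] esxcfg-vswitch -m 9000 vSwitch0
[VM guest shell] ip link set dev eth0 mtu 9000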

Keywords: CQE, error, model 2

Adapter Cards / DPU: NVIDIA BlueField-2 for VMware vSphere Distributed Services Engine, ConnectX-4 Onwards HCAs

Available in OS: ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0, ESXi 7.0u3, ESXi 7.0u2, ESXi 7.0u1, ESXi 7.0

Discovered in Version: 4.23.0.36

2429623

Description: Enabling the sriov_mc_isolation module parameter may result in multicast and IPv6 traffic loss on vmknic and emulated NICs.

Workaround: Unset the module parameter or set it to 0.
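
A minimal sketch of the second option, following the esxcfg-module convention used earlier in these notes:

[ESXi shell] esxcfg-module -s 'sriov_mc_isolation=0' nmlx5_core
[ESXi shell] reboot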

Keywords: Multicast, IPv6, SR-IOV

Adapter Cards / DPU: NVIDIA BlueField-2 for VMware vSphere Distributed Services Engine, ConnectX-4 Onwards HCAs

Available in OS: ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0, ESXi 7.0u3, ESXi 7.0u2, ESXi 7.0u1, ESXi 7.0

Discovered in Version: 4.23.0.36

2372060

Description: RDMA is not supported in the Hypervisor with ENS model 2.

Workaround: N/A

Keywords: ENS model 2, RDMA

Adapter Cards / DPU: NVIDIA BlueField-2 for VMware vSphere Distributed Services Engine, ConnectX-4 Onwards HCAs

Available in OS: ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0, ESXi 7.0u3, ESXi 7.0u2, ESXi 7.0u1, ESXi 7.0

Discovered in Version: 4.23.0.36

2139469

Description: The "Allow Guest MTU Change" option in vSphere Client is currently not functional. Although guest MTU changes in SR-IOV are allowed, they do not affect the port's MTU, and the guest's MTU remains the same as the PF MTU.

Workaround: N/A

Keywords: MTU, SR-IOV

Adapter Cards / DPU: NVIDIA BlueField-2 for VMware vSphere Distributed Services Engine, ConnectX-4 Onwards HCAs

Available in OS: ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0, ESXi 7.0u3, ESXi 7.0u2, ESXi 7.0u1, ESXi 7.0

Discovered in Version: 4.23.0.36

1340255

Description: ECN statistic counters accumulatorsPeriod and ecnMarkedRocePackets display wrong values and cannot be cleared.

Workaround: N/A

Keywords: nmlx5 ecn nmlxcli

Adapter Cards / DPU: NVIDIA BlueField-2 for VMware vSphere Distributed Services Engine, ConnectX-4 Onwards HCAs

Available in OS: ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0, ESXi 7.0u3, ESXi 7.0u2, ESXi 7.0u1, ESXi 7.0

Discovered in Version: 4.23.0.36

1340275

Description: ECN tunable parameter initialAlphaValue for the Reaction Point protocol cannot be modified.

Workaround: N/A

Keywords: nmlx5 ecn nmlxcli

Adapter Cards / DPU: NVIDIA BlueField-2 for VMware vSphere Distributed Services Engine, ConnectX-4 Onwards HCAs

Available in OS: ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0, ESXi 7.0u3, ESXi 7.0u2, ESXi 7.0u1, ESXi 7.0

Discovered in Version: 4.23.0.36

2430662

Description: The card's link speed remains zero after the port goes down and a reboot is performed.

Workaround: Bring the port down and then up again.
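
One possible way to cycle the port from the ESXi shell (a sketch; vmnic4 is an assumed uplink name):

[ESXi shell] esxcli network nic down -n vmnic4
[ESXi shell] esxcli network nic up -n vmnic4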

Keywords: ConnectX-6 Dx, link speed

Adapter Cards / DPU: NVIDIA BlueField-2 for VMware vSphere Distributed Services Engine, ConnectX-4 Onwards HCAs

Available in OS: ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0, ESXi 7.0u3, ESXi 7.0u2, ESXi 7.0u1, ESXi 7.0

Discovered in Version: 4.23.0.36

1514289

Description: RoCE traffic may fail after vMotion when using namespace.

Workaround: N/A

Keywords: Namespace, RoCE, vMotion

Adapter Cards / DPU: NVIDIA BlueField-2 for VMware vSphere Distributed Services Engine, ConnectX-4 Onwards HCAs

Available in OS: ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0, ESXi 7.0u3, ESXi 7.0u2, ESXi 7.0u1, ESXi 7.0

Discovered in Version: 4.23.0.36

2334405

Description: Legacy SR-IOV is not supported with Model 1.

Workaround: Unset max_vfs, or alternatively move to ENS Model 0 or Model 2.
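
For the first option, max_vfs can be cleared by resetting the nmlx5_core module options. This is a sketch only; note that passing an empty options string clears all previously set options for the module, so any other options must be re-specified:

[ESXi shell] esxcfg-module -s '' nmlx5_core
[ESXi shell] reboot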

Keywords: SR-IOV, ENS

Adapter Cards / DPU: NVIDIA BlueField-2 for VMware vSphere Distributed Services Engine, ConnectX-4 Onwards HCAs

Available in OS: ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0, ESXi 7.0u3, ESXi 7.0u2, ESXi 7.0u1, ESXi 7.0

Discovered in Version: 4.23.0.36

2449578

Description: When in ENS mode, changing the scheduler to HCLK may cause traffic loss.

Workaround: N/A

Keywords: ENS, HCLK scheduler

Adapter Cards / DPU: NVIDIA BlueField-2 for VMware vSphere Distributed Services Engine, ConnectX-4 Onwards HCAs

Available in OS: ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0, ESXi 7.0u3, ESXi 7.0u2, ESXi 7.0u1, ESXi 7.0

Discovered in Version: 4.23.0.36

746100

Description: The 'esxcli mellanox uplink link info -u ' command always reports the 'Auto negotiation' capability as 'true'.

Workaround: N/A

Keywords: 'Auto negotiation' capability

Adapter Cards / DPU: NVIDIA BlueField-2 for VMware vSphere Distributed Services Engine, ConnectX-4 Onwards HCAs

Available in OS: ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0, ESXi 7.0u3, ESXi 7.0u2, ESXi 7.0u1, ESXi 7.0

Discovered in Version: 4.23.0.36

1068621

Description: SMP MADs (ibnetdiscover, sminfo, iblinkinfo, smpdump, ibqueryerr, ibdiagnet and smpquery) are not supported on the VFs.

Workaround: N/A

Keywords: SMP MADs

Adapter Cards / DPU: NVIDIA BlueField-2 for VMware vSphere Distributed Services Engine, ConnectX-4 Onwards HCAs

Available in OS: ESXi 8.0 U2, ESXi 8.0 U1, ESXi 8.0, ESXi 7.0u3, ESXi 7.0u2, ESXi 7.0u1, ESXi 7.0

Discovered in Version: 4.23.0.36

1446060

Description: Although the max_vfs module parameter range is "0-128", due to firmware limitations, the following VF counts are supported on single-port devices:

  • ConnectX-4 / ConnectX-5: up to 127

Workaround: N/A

Keywords: SR-IOV, VFs per port

Adapter Cards / DPU: ConnectX-4 Onwards HCAs

Available in OS: ESXi 7.0u3, ESXi 7.0u2, ESXi 7.0u1, ESXi 7.0

Discovered in Version: 4.23.0.36

2139469

Description: The "Allow Guest MTU Change" option in vSphere Client is currently not functional. Although guest MTU changes in SR-IOV are allowed, they do not affect the port's MTU, and the guest's MTU remains the same as the PF MTU.

Workaround: N/A

Keywords: MTU, SR-IOV

Adapter Cards / DPU: NVIDIA BlueField-2 for VMware vSphere Distributed Services Engine

Available in OS: ESXi 8.0

Discovered in Version: 4.23.0.36

2813820

Description: The maximum number of VFs supported in a Model 3 (A and B) setup is 126.

Workaround: Set the maximum value of NUM_OF_VFS to 126 in mlxconfig.
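
A minimal mlxconfig sketch of this workaround (the device name mt41686_pciconf0 is an assumption; query the actual device name with mst status first, and note that the new value takes effect only after the next reboot/power cycle):

[ESXi shell] /opt/mellanox/bin/mst status
[ESXi shell] /opt/mellanox/bin/mlxconfig -d mt41686_pciconf0 set NUM_OF_VFS=126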

Keywords: VFs, Model 3

Adapter Cards / DPU: NVIDIA BlueField-2 for VMware vSphere Distributed Services Engine

Available in OS: ESXi 8.0

Discovered in Version: 4.23.0.36

2615605

Description: LAG creation fails when VFs are enabled on the host.

Workaround: Before adding a second uplink to the switch, make sure all VMs with VFs are powered off.
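
Before adding the second uplink, the VFs currently instantiated on an uplink can be inspected from the ESXi shell (a sketch; vmnic4 is an assumed uplink name):

[ESXi shell] esxcli network sriovnic vf list -n vmnic4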

Keywords: Link Aggregation

Adapter Cards / DPU: NVIDIA BlueField-2 for VMware vSphere Distributed Services Engine

Available in OS: ESXi 8.0

Discovered in Version: 4.23.0.36

2611342

Description: On older versions of iDRAC, the NC-SI channel is not opened automatically.

Workaround: Manually run the following command from the iDRAC root shell to open the channel after every reboot of the Arm OS:

libncsitest 47 eth2 0 2 <MAC addr of sd0> <MAC addr of sd0>

Keywords: iDRAC Communication

Adapter Cards / DPU: NVIDIA BlueField-2 for VMware vSphere Distributed Services Engine

Available in OS: ESXi 8.0

Discovered in Version: 4.23.0.36

-

Description: Adapter cards that come with the link type pre-configured as InfiniBand cannot be detected by the driver and cannot be seen by MFT tools; thus, their link type cannot be changed.

Workaround:

  1. Unload the driver.

    vmkload_mod -u nmlx5_core

  2. Make the device visible to MFT by loading the driver in recovery mode.

    vmkload_mod nmlx5_core mst_recovery=1
    kill the devmgr

  3. Check the devices available on your machine.

    /opt/mellanox/bin/mst status

  4. Change the link type to Ethernet using MFT.

    /opt/mellanox/bin/mlxconfig -d mt4115_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2

  5. Power cycle the host.
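
After the power cycle, the new link type can be verified with mlxconfig (a sketch using the same example device name as in step 4):

[ESXi shell] /opt/mellanox/bin/mlxconfig -d mt4115_pciconf0 query | grep LINK_TYPE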

Keywords: Link type, InfiniBand, MFT

Adapter Cards / DPU: ConnectX-4 Onwards HCAs

Available in OS: ESXi 7.0u3, ESXi 7.0u2, ESXi 7.0u1, ESXi 7.0

Discovered in Version: 4.17.13.10-vmw, 4.17.13.1-vmw, 4.17.16.8-vmw, 4.17.16.7-vmw, 4.17.9.12-vmw

© Copyright 2024, NVIDIA. Last updated on Feb 6, 2024.