NVIDIA Native Drivers for VMware ESXi Inbox Drivers Release Notes

Supported Features/Capabilities

Note

New VMware OS releases include all features from previous releases unless otherwise specified in this document.

Each entry below lists the Feature/Change, its Description, whether it is DPU Only, the OS release in which support was added (Support Added in OS), and the driver version in which it is supported (Supported in Version).

Maximum Number of Supported Uplinks

Starting from ESXi 8.0u3, the maximum number of uplinks has been increased to 16.

No

ESXi 8.0 U3

4.23.6.2-7vmw

Hardware LRO

[ConnectX-5 and above] Added support for hardware large receive offload in ENS mode.

No

ESXi 8.0 U3

4.23.6.2-7vmw

RSS support for MPLS

Added support for RSS on MPLS encapsulated packets.

No

ESXi 8.0 U3

4.23.6.2-7vmw

UPT Packet filter

Added support for packet filtering using DPDK applications on UPT interfaces.

Yes*

ESXi 8.0 U3

4.23.6.2-7vmw

Dual DPU

Added high availability support by enabling dual DPUs.

Yes*

ESXi 8.0 U3

4.23.6.2-7vmw

Increase the default number of queues

Increased the total number of queues to 28, with 8 for each default queue RSS.

No

ESXi 8.0 U3

4.23.6.2-7vmw

SR-IOV

Updated the default VF number in VMware vSphere Distributed Services Engine devices to 64.

Yes*

ESXi 8.0 U2

4.23.0.66-1vmw

General

Added support for up to 8 x 100G over DVS (UENS) interfaces.

No

ESXi 8.0 U2

4.23.0.66-1vmw

ENS

Enabled Netqueue RSS for ENS by default.

No

ESXi 8.0 U2

4.23.0.66-1vmw

Communication Channel

The communication channel between x86 architecture and the DPU is now exposed via a separate PCI device.

Yes*

ESXi 8.0 U2

4.23.0.66-1vmw

Adapter Cards

Added support for NVIDIA BlueField-3 DPU at Technical Preview level.

Yes*

ESXi 8.0 U2

4.23.0.66-1vmw

Adapter Cards

Added support for ConnectX-7 adapter cards.

No

ESXi 8.0 U2

4.23.0.66-1vmw

SR-IOV

Added support for ConnectX-7 SR-IOV IB.

No

ESXi 8.0 U2

4.23.0.66-1vmw

DPU Offloads Enable/Disable

DPU offloads can now be enabled/disabled using the "disable_dpu_accel" module parameter. The parameter accepts the following values:

  • 0 - Full acceleration (Default)

  • 1 - No acceleration, classic NIC mode

Note: The same value must be used on both the host and the Arm side.

Yes*

ESXi 8.0 / ESXi 8.0 U1

4.23.0.36-8vmw / 4.23.0.36-12vmw
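
For illustration only, setting this parameter through the standard ESXi module-parameter mechanism might look as follows; the nmlx5_core module name is an assumption not taken from this document, and a host reboot is typically required for the change to take effect:

# esxcli system module parameters set -m nmlx5_core -p "disable_dpu_accel=1"

# esxcli system module parameters list -m nmlx5_core | grep disable_dpu_accel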

NVIDIA Proprietary Flow Statistics

Statistical information on flows and actions can be queried by the user using the nmlxcli tool.

For further information, please refer to the User Manual and nmlxcli.

Yes*

ENS VXLAN Encap/Decap Offload

VXLAN hardware encapsulation and decapsulation offload enables the traditional offloads to be performed on the encapsulated traffic. With NVIDIA BlueField-2 SmartNICs, data center operators can decouple the overlay network layer from the physical NIC performance, thus achieving native performance in the new network architecture.

In VMware, the VXLAN encapsulation and decapsulation offload is configured through the NSX Manager and the FPO interface, which issues FPO rules instructing the NVIDIA hardware to perform the offload.

Yes*

NSX Edge Acceleration

Added support for running the NSX Edge VM form factor with UPT interfaces on ESXi using an accelerated DVS.

Yes*

Hardware Accelerated LAG

Added support for hardware accelerated LAG. When two uplinks are added to a virtual switch, a hardware LAG is created between the two uplinks to provide connectivity between the Virtual Functions on different Physical Functions. Additionally, it allows teaming policies to be set on the LAG to facilitate teaming actions such as failover.

Yes*

Hardware Accelerated Uniform Pass-Through (UPT)

The UPT interface is provided by the DPU to the host, accelerating the data path directly to the hardware.

Yes*

ENS Flow Processing Offload (FPO) Model 3 Support

The driver can now support ENS FPO Model 3 (i.e., SR-IOV with logical L2 and L3 offloaded to the E-Switch, and UPT support).

Yes*

Reading Temperature Sensors

Enables the driver to read the temperature from private statistics.

To see the temperature, run:

# nsxdp-cli ens uplink stats get -n vmnic1 | grep -i asicSensorTemperature

No

Classic NIC Mode

In embedded CPU mode, when no ENS is present, the default miss rule will forward all traffic to the x86 PF.

Yes*

Hardware Accelerated RoCE Encapsulation

Added support for Hardware Accelerated GENEVE and VXLAN encapsulation and decapsulation for RoCE traffic.

No

Hardware Accelerated Packet Capture

Hardware accelerated flows can now be mirrored using the standard packet capture tools.

No

Hardware Accelerated NSX Distributed Firewall

Added the ability to offload NSX Distributed Firewall rules by using in-hardware tracking of packet flows.

No

Receive Side Scaling (RSS) for ENS Model 0 and Model 1

RSS support for ENS Model 1 improves performance using fewer CPU cores.

This capability can be enabled using the "netq_rss_ens" module parameter.

No
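
As a hedged sketch only, enabling this capability through the standard ESXi module-parameter mechanism (assuming "netq_rss_ens" belongs to the nmlx5_core module) might look like:

# esxcli system module parameters set -m nmlx5_core -p "netq_rss_ens=1"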

vSAN over RDMA

vSAN over RoCE brings performance improvements to vSAN technology, offloads data communication tasks from the CPU, significantly boosts overall storage access performance, and enables massive amounts of data transfer.

No

ESXi 7.0u2

4.19.16.10-vmw

Adapter Cards

Added support for NVIDIA BlueField-2 adapter cards.

Yes

Software DCBx

Data Center Bridging (DCB) uses DCBX to exchange configuration information with directly connected peers. DCBX operations can be configured to set PFC or ETS values.

No

ESXi 7.0u1

4.19.16.8-vmw

RDMA Native Endpoint Support

Added support for RDMA communication with RDMA Native endpoints.

RDMA Native endpoints are RDMA capable devices, such as storage arrays, that do not use the PVRDMA adapter type (non-PVRDMA endpoints).

No

Adapter Cards

Added support for NVIDIA ConnectX-6 Dx adapter cards.

No

Differentiated Services Code Point (DSCP)

DSCP is a mechanism used for classifying network traffic on IP networks. It uses the 6-bit Differentiated Services field (DS or DSCP field) in the IP header for packet classification purposes. Using Layer 3 classification enables you to maintain the same classification semantics beyond the local network, across routers.

Every transmitted packet holds the information allowing network devices to map the packet to the appropriate 802.1Qbb CoS. For DSCP-based PFC or ETS, the packet is marked with a DSCP value in the Differentiated Services (DS) field of the IP header.

No

ESXi 7.0

4.19.16.7-vmw

Insufficient Power Detection

Enables the driver to report when insufficient PCI power is detected.

No

PVRDMA Namespace

Creates RDMA resources with a specified ID.

No

Adapter Cards/SmartNIC

Added support for NVIDIA ConnectX-6 and NVIDIA BlueField cards.

No

Dynamic RSS

Improves network performance by allowing the OS load balancer to make better use of RSS RX queues during heavy traffic of the same type.

For further information, see section "Dynamic RSS" in the User Manual.

No

ESXi 6.7u2

4.17.13.1-vmw
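
As an assumed configuration sketch only (the "DYN_RSS" parameter name and the nmlx5_core module name are assumptions; verify them in the "Dynamic RSS" section of the User Manual before use):

# esxcli system module parameters set -m nmlx5_core -p "DYN_RSS=1"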

Packet Capture Utility (Sniffer)

Packet Capture utility duplicates all traffic, including RoCE, in its raw Ethernet form (before stripping) to a dedicated "sniffing" QP, and then passes it to an ESX drop capture point.

It allows gathering of Ethernet and RoCE bidirectional traffic via pktcap-uw and viewing it using regular Ethernet tools, e.g. Wireshark.

For further information, see section "Packet Capture Utility" in the User Manual.

No
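
Once the sniffer is enabled as described in the User Manual, the duplicated traffic can be collected with the standard pktcap-uw tool; a minimal capture sketch in which vmnic1 and the output path are placeholders:

# pktcap-uw --uplink vmnic1 -o /tmp/vmnic1-sniffer.pcap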

SR-IOV max_vfs module parameter Type Modification

Changed the type of the SR-IOV max_vfs module parameter from a single integer value to an array of unsigned integers.

For further information, refer to the User Manual.

No

ESXi 6.7

4.17.9.12-vmw
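
A hedged example of the array form (the nmlx5_core module name and the comma-separated, per-port syntax are assumptions; the exact format is defined in the User Manual):

# esxcli system module parameters set -m nmlx5_core -p "max_vfs=8,8"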

DCBX Negotiation Support for PFC

PFC port configuration can now be auto-negotiated with switches that support the DCBX protocol.

No

ESXi CLI

ESXi CLI support for ESXi 6.7

No

Geneve Stateless Offload

The Geneve network protocol is encapsulated into an IP frame (L2 tunneling). Encapsulation is suggested as a means to alter the normal IP routing for datagrams, by delivering them to an intermediate destination that would otherwise not be selected based on the (network part of the) IP Destination Address field in the original IP header.

No

Remote Direct Memory Access (RDMA)

Remote Direct Memory Access (RDMA) is a remote memory management capability that allows server-to-server data movement directly between application memory spaces without any CPU involvement.

Note: It is recommended to use RoCE with PFC enabled in driver and network switches.

For instructions on how to enable PFC in the driver, see section "Priority Flow Control (PFC)" in the User Manual.

No

Set Link Speed

Enables you to set the link to a specific speed supported by ESXi.

For further information, see section “Set Link Speed” in the User Manual.

No
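
For illustration, a fixed speed can typically be requested with the generic ESXi NIC commands; vmnic1 and the 10000 Mb/s value are placeholders, and the driver-specific procedure is described in the "Set Link Speed" section of the User Manual:

# esxcli network nic set -n vmnic1 -S 10000 -D full

# esxcli network nic get -n vmnic1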

Priority Flow Control (PFC)

Applies pause functionality to specific classes of traffic on the Ethernet link.

For further information, see section “Priority Flow Control (PFC)” in the User Manual.

No
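
A hedged sketch of enabling PFC on a single priority (the "pfctx"/"pfcrx" module parameters and the nmlx5_core module name are assumptions based on the User Manual section referenced above; both values are typically required to be identical):

# esxcli system module parameters set -m nmlx5_core -p "pfctx=0x08 pfcrx=0x08"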

NetQ RSS

Allows the user to configure multiple hardware queues backing up the single RX queue. NetQ RSS improves vMotion performance and the bandwidth of multiple IPv4/IPv6 TCP/UDP/IPSEC streams over a single interface between Virtual Machines.

For further information, see section “NetQ RSS” in the User Manual.

No
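
As an assumed sketch only (the "RSS" parameter name and the nmlx5_core module name should be verified against the "NetQ RSS" section of the User Manual):

# esxcli system module parameters set -m nmlx5_core -p "RSS=4"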

Default Queue RSS (DRSS)

Allows the user to configure multiple hardware queues backing up the default RX queue. DRSS improves performance for large scale multicast traffic between hypervisors and Virtual Machines interfaces.

For further information, see section “Default Queue Receive Side Scaling (DRSS)” in the User Manual.

No
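
Similarly, a hedged sketch for default queue RSS (the "DRSS" parameter name and the nmlx5_core module name should be verified against the "Default Queue Receive Side Scaling (DRSS)" section of the User Manual):

# esxcli system module parameters set -m nmlx5_core -p "DRSS=4"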

SR-IOV

Single Root IO Virtualization (SR-IOV) is a technology that allows a physical PCIe device to present itself multiple times through the PCIe bus.

No

Support for up to 8 ConnectX-4 ports and up to 16 VFs. For further information, refer to the User Manual.

No

RX/TX Ring Resize

Allows the network administrator to set new RX/TX ring buffer sizes.

No
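
A minimal illustration using the generic ESXi ring-size commands, where vmnic0 and the 4096 values are placeholders:

# esxcli network nic ring current get -n vmnic0

# esxcli network nic ring current set -n vmnic0 -r 4096 -t 4096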

VXLAN Hardware Stateless Offloads

Added support for VXLAN hardware offload.

VXLAN hardware offload enables the traditional offloads to be performed on the encapsulated traffic. With ConnectX®-3 Pro, data center operators can decouple the overlay network layer from the physical NIC performance, thus achieving native performance in the new network architecture.

No

NetDump

Enables a host to transmit diagnostic information via the network to a remote netdump service, which stores it on disk. Network-based coredump collection can be configured in addition to, or instead of, disk-based coredump collection.

No
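
For illustration, network coredump collection is managed through the standard esxcli coredump namespace; in this minimal sketch the netdump server address and VMkernel interface are assumed to have been configured with additional options of the same "set" command (verify the exact option names on the host):

# esxcli system coredump network set --enable true

# esxcli system coredump network check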

NetQueue

NetQueue is a performance technology in VMware ESXi that significantly improves performance in Ethernet virtualized environments.

No

Wake-on-LAN (WoL)

Allows a network administrator to remotely power on a system or to wake it up from sleep mode.

No

Hardware Offload

  • Large Send Offload (TCP Segmentation Offload)

  • RSS (Device RSS)

No

Hardware Capabilities

  • Multiple Tx/Rx rings

  • Fixed Pass-Through

  • Single/Dual port

  • MSI-X

No

Ethernet Network

  • TX/RX checksum

  • Auto moderation and Coalescing

  • VLAN stripping offload

No

* Note: Supported only on VMware-certified VMware vSphere Distributed Services Engine systems.

RSS support per driver version:

Driver 4.23.0.66-1vmw
  • Default Queue RSS: Non ENS, ENS Model0, ENS Model1, ENS Model3 (On host)
  • Netqueue RSS: Non ENS, ENS Model0, ENS Model1, ENS Model3 (On host)
  • Device RSS: Non ENS, ENS Model0, ENS Model1, ENS Model3 (On host)

Driver 4.23.0.36-12vmw
  • Default Queue RSS: Non ENS, ENS Model0, ENS Model1, ENS Model3 (On host)
  • Netqueue RSS: Non ENS, ENS Model0 (disabled by default), ENS Model1 (disabled by default), ENS Model3 (On host)
  • Device RSS: Non ENS, ENS Model0 (disabled by default), ENS Model1 (disabled by default), ENS Model3 (On host)

Driver 4.23.0.36-8vmw
  • Default Queue RSS: Non ENS, ENS Model0, ENS Model1, ENS Model3 (On host)
  • Netqueue RSS: Non ENS, ENS Model0 (disabled by default), ENS Model1 (disabled by default), ENS Model3 (On host)
  • Device RSS: Non ENS, ENS Model0 (disabled by default), ENS Model1 (disabled by default), ENS Model3 (On host)

Driver 4.22.73.1004
  • Default Queue RSS: Non ENS, ENS Model0, ENS Model1
  • Netqueue RSS: Non ENS, ENS Model0 (disabled by default), ENS Model1 (disabled by default)
  • Device RSS: Non ENS, ENS Model0 (disabled by default), ENS Model1 (disabled by default)

Driver 4.21.71.101
  • Default Queue RSS: Non ENS, ENS Model0, ENS Model1
  • Netqueue RSS: Non ENS, ENS Model0 (disabled by default), ENS Model1 (disabled by default)
  • Device RSS: Non ENS

Driver 4.21.71.1
  • Default Queue RSS: Non ENS, ENS Model0, ENS Model1
  • Netqueue RSS: Non ENS, ENS Model0 (disabled by default), ENS Model1 (disabled by default)
  • Device RSS: Non ENS

Driver 4.19.71.1
  • Default Queue RSS: Non ENS
  • Netqueue RSS: Non ENS
  • Device RSS: Non ENS

Driver 4.19.70.1
  • Default Queue RSS: Non ENS
  • Netqueue RSS: Non ENS
  • Device RSS: Non ENS

Driver 4.17.15.16
  • Default Queue RSS: Non ENS
  • Netqueue RSS: Non ENS
  • Device RSS: Non ENS

© Copyright 2024, NVIDIA. Last updated on Sep 23, 2024.