Supported Features/Capabilities

Warning

New VMware OS releases include all features from previous releases unless otherwise specified in this document.

Feature/Change Description

DPU Only

Support Added in OS

Supported in Version

DPU Offloads Enable/Disable

DPU offloads can now be enabled/disabled using the "disable_dpu_accel" module parameter (see the example command after this entry). The parameter accepts the following values:

  • 0 - Full acceleration (Default)

  • 1 - No acceleration, classic NIC mode

Note: The same value must be used on both the host and the Arm side.

Yes*

ESXi 8.0

4.23.1.0-vmw
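
As an illustration, the parameter can be set on the ESXi host with esxcli; the nmlx5_core module name is an assumption, a reboot is required for module parameters to take effect, and the matching value must be applied on the Arm side separately:

# esxcli system module parameters set -m nmlx5_core -p "disable_dpu_accel=1"
# esxcli system module parameters list -m nmlx5_core | grep disable_dpu_accel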

NVIDIA Proprietary Flow Statistics

Statistical information on flows and actions that can be queried by the user using the nmlxcli tool.

For further information, please refer to the User Manual and nmlxcli.

Yes*

ENS VXLAN Encap/Decap Offload

VXLAN hardware encapsulation and decapsulation offload enables the traditional offloads to be performed on the encapsulated traffic. With NVIDIA BlueField-2 SmartNICs, data center operators can decouple the overlay network layer from the physical NIC performance, thus achieving native performance in the new network architecture.

In VMware, the VXLAN encapsulation and decapsulation offload is configured through the NSX Manager and the FPO interface, which issues FPO rules instructing the NVIDIA hardware to perform the offload.

Yes*

NSX Edge Acceleration

Added support for running the NSX Edge in VM form factor on ESXi using an accelerated DVS.

Yes*

Hardware Accelerated LAG

Added support for hardware accelerated LAG. When two uplinks are added to a virtual switch, a hardware LAG is created between the two uplinks to provide connectivity between the Virtual Functions on different Physical Functions. Additionally, it allows teaming policies to be set on the LAG to facilitate teaming actions such as failover.

Yes*

Hardware Accelerated Uniform Pass-Through (UPT)

The UPT interface is provided by the DPU to the host, accelerating the data path directly in hardware.

Yes*

ENS Flow Processing Offload (FPO) Model 3 Support

The driver now supports ENS FPO Model 3 (i.e., SR-IOV with logical L2 and L3 offloaded to the E-Switch, and UPT support).

Yes*

Reading Temperature Sensors

Enables the driver to read the temperature from private statistics.

To see the temperature, run:

# nsxdp-cli ens uplink stats get -n vmnic1 | grep -i asicSensorTemperature

No

Classic NIC Mode

In embedded CPU mode, when no ENS is present, the default miss rule will forward all traffic to the x86 PF.

Yes*

Hardware Accelerated RoCE Encapsulation

Added support for Hardware Accelerated GENEVE and VXLAN encapsulation and decapsulation for RoCE traffic.

No

Hardware Accelerated Packet Capture

Hardware accelerated flows can now be mirrored using the standard packet capture tools.

No

Hardware Accelerated NSX Distributed Firewall

Added the ability to offload NSX Distributed Firewall rules by using in-hardware tracking of packet flows.

No

Receive Side Scaling (RSS) for ENS Model 0 and Model 1

RSS support for ENS Model 1 improves performance while using fewer CPU cores.

This capability can be enabled using the "netq_rss_ens" module parameter.

No
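
For example, assuming the nmlx5_core module name and that a value of 1 enables the capability, the parameter can be set with esxcli (a reboot is required for module parameters to take effect):

# esxcli system module parameters set -m nmlx5_core -p "netq_rss_ens=1"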

vSAN over RDMA

vSAN over RoCE brings performance improvements to vSAN, offloads data communication tasks from the CPU, significantly boosts overall storage access performance, and enables massive amounts of data transfer.

No

ESXi 7.0u2

4.19.16.10-vmw

Adapter Cards

Added support for NVIDIA BlueField-2 adapter cards

Yes

Software DCBx

Data Center Bridging (DCB) uses DCBX to exchange configuration information with directly connected peers. DCBX operations can be configured to set PFC or ETS values.

No

ESXi 7.0u1

4.19.16.8-vmw

RDMA Native Endpoint Support

Added support for RDMA communication with RDMA Native endpoints.
RDMA Native endpoints are RDMA-capable devices, such as storage arrays, that do not use the PVRDMA adapter type (non-PVRDMA endpoints).

No

Adapter Cards

Added support for NVIDIA ConnectX-6 Dx adapter cards

No

Differentiated Services Code Point (DSCP)

DSCP is a mechanism used for classifying network traffic on IP networks. It uses the 6-bit Differentiated Services field (DS or DSCP field) in the IP header for packet classification purposes. Using Layer 3 classification enables you to maintain the same classification semantics beyond the local network, across routers.

Every transmitted packet carries information that allows network devices to map the packet to the appropriate 802.1Qbb CoS. For DSCP-based PFC or ETS, the packet is marked with a DSCP value in the Differentiated Services (DS) field of the IP header.

No

ESXi 7.0

4.19.16.7-vmw

Insufficient Power Detection

Enables the driver to report when insufficient PCI power is detected.

No

PVRDMA Namespace

Creates RDMA resources with a specified ID.

No

Adapter Cards/SmartNIC

Added support for NVIDIA ConnectX-6 and NVIDIA BlueField cards.

No

Dynamic RSS

Improves network performance by allowing the OS load balancer to better utilize RSS RX queues during heavy traffic of the same type.

For further information, see section "Dynamic RSS" in the User Manual.

No

ESXi 6.7u2

4.17.13.1-vmw

Packet Capture Utility (Sniffer)

Packet Capture utility duplicates all traffic, including RoCE, in its raw Ethernet form (before stripping) to a dedicated "sniffing" QP, and then passes it to an ESX drop capture point.
It allows bidirectional Ethernet and RoCE traffic to be gathered via pktcap-uw and viewed using regular Ethernet tools, e.g., Wireshark.

For further information, see section "Packet Capture Utility" in the User Manual.

No
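
As a sketch, once the sniffer has been enabled as described in the User Manual, the mirrored traffic can be captured with pktcap-uw and opened in Wireshark; the Drop capture point and output path below are assumptions:

# pktcap-uw --capture Drop -o /tmp/roce_sniffer.pcap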

SR-IOV max_vfs module parameter Type Modification

Changed the type of the SR-IOV max_vfs module parameter from a single integer value to an array of unsigned integers.

For further information, refer to the User Manual.

No

ESXi 6.7

4.17.9.12-vmw
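
For illustration, the array form allows a per-port VF count to be passed as comma-separated values; the example below assumes the nmlx5_core module name and a dual-port adapter (a reboot is required for the change to take effect):

# esxcli system module parameters set -m nmlx5_core -p "max_vfs=8,8"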

DCBX Negotiation Support for PFC

PFC port configuration can now be auto-negotiated with switches that support the DCBX protocol.

No

ESXi CLI

ESXi CLI support for ESXi 6.7

No

Geneve Stateless Offload

The Geneve network protocol is encapsulated into an IP frame (L2 tunneling). Encapsulation is suggested as a means to alter the normal IP routing for datagrams, by delivering them to an intermediate destination that would otherwise not be selected based on the (network part of the) IP Destination Address field in the original IP header.

No

Remote Direct Memory Access (RDMA)

Remote Direct Memory Access (RDMA) is the remote memory management capability that allows server-to-server data movement directly between application memory spaces without any CPU involvement.

Note: It is recommended to use RoCE with PFC enabled in the driver and in the network switches.

For instructions on enabling PFC in the driver, see section "Priority Flow Control (PFC)" in the User Manual.

No
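
As a quick check, the RDMA devices that the driver exposes to ESXi can be listed on the host:

# esxcli rdma device list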

Set Link Speed

Enables you to set the link speed to a specific speed supported by ESXi (see the example command after this entry).

For further information, see section “Set Link Speed” in the User Manual.

No
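
For example, a fixed speed and duplex can be set on an uplink with esxcli; vmnic1 and 10000 Mb/s are placeholders and must match a speed the adapter and ESXi support:

# esxcli network nic set -n vmnic1 -S 10000 -D full
# esxcli network nic get -n vmnic1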

Priority Flow Control (PFC)

Applies pause functionality to specific classes of traffic on the Ethernet link.

For further information, see section “Priority Flow Control (PFC)” in the User Manual.

No

NetQ RSS

Allows the user to configure multiple hardware queues backing up the single RX queue. NetQ RSS improves vMotion performance and the bandwidth of multiple IPv4/IPv6 TCP/UDP/IPSEC streams over a single interface between Virtual Machines.

For further information, see section “NetQ RSS” in the User Manual.

No

Default Queue RSS (DRSS)

Allows the user to configure multiple hardware queues backing up the default RX queue. DRSS improves performance for large scale multicast traffic between hypervisors and Virtual Machines interfaces.

For further information, see section “Default Queue Receive Side Scaling (DRSS)” in the User Manual.

No
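
A minimal sketch of enabling this capability, assuming the DRSS module parameter of nmlx5_core described in the User Manual and a queue count of 4 (a reboot is required):

# esxcli system module parameters set -m nmlx5_core -p "DRSS=4"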

SR-IOV

Single Root IO Virtualization (SR-IOV) is a technology that allows a physical PCIe device to present itself multiple times through the PCIe bus.

Supports up to 8 ConnectX-4 ports and up to 16 VFs. For further information, refer to the User Manual.

No

RX/TX Ring Resize

Allows the network administrator to set a new RX/TX ring buffer size (see the example after this entry).

No
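
For example, on recent ESXi releases the ring sizes can be changed per uplink with esxcli; vmnic1 and 4096 are placeholders, and the preset command shows the maximum sizes the hardware reports:

# esxcli network nic ring preset get -n vmnic1
# esxcli network nic ring current set -n vmnic1 -r 4096 -t 4096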

VXLAN Hardware Stateless Offloads

Added support for VXLAN hardware offload.

VXLAN hardware offload enables the traditional offloads to be performed on the encapsulated traffic. With ConnectX®-3 Pro, data center operators can decouple the overlay network layer from the physical NIC performance, thus achieving native performance in the new network architecture.

No

NetDump

Enables a host to transmit diagnostic information via the network to a remote netdump service, which stores it on disk. Network-based coredump collection can be configured in addition to, or instead of, disk-based coredump collection.

No
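
One way to configure network coredump collection on the host (the vmkernel interface and collector address are placeholders, and option names may vary slightly between ESXi releases):

# esxcli system coredump network set --interface-name vmk0 --server-ipv4 192.0.2.10 --server-port 6500
# esxcli system coredump network set --enable true
# esxcli system coredump network check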

NetQueue

NetQueue is a performance technology in VMware ESXi that significantly improves performance in Ethernet virtualized environments.

No

Wake-on-LAN (WoL)

Allows a network administrator to remotely power on a system or to wake it up from sleep mode.

No

Hardware Offload
  • Large Send Offload (TCP Segmentation Offload)

  • RSS (Device RSS)

No

Hardware Capabilities
  • Multiple Tx/Rx rings

  • Fixed Pass-Through

  • Single/Dual port

  • MSI-X

No

Ethernet Network
  • TX/RX checksum

  • Auto moderation and Coalescing

  • VLAN stripping offload

No

* Note: Supported only on VMware-certified Monterey systems.
