Changes and New Features
The following are the new features and changes added in this version. The supported adapter cards are indicated as follows:
| Supported Cards | Description |
|---|---|
| All HCAs | Supported on the following adapter cards unless specifically stated otherwise: ConnectX-4/ConnectX-4 Lx/ConnectX-5/ConnectX-6/ConnectX-6 Dx/ConnectX-6 Lx/ConnectX-7/BlueField-2 |
| ConnectX-6 Dx and above | Supported on the following adapter cards unless specifically stated otherwise: ConnectX-6 Dx/ConnectX-6 Lx/ConnectX-7/BlueField-2 |
| ConnectX-6 and above | Supported on the following adapter cards unless specifically stated otherwise: ConnectX-6/ConnectX-6 Dx/ConnectX-6 Lx/ConnectX-7/BlueField-2 |
| ConnectX-5 and above | Supported on the following adapter cards unless specifically stated otherwise: ConnectX-5/ConnectX-6/ConnectX-6 Dx/ConnectX-6 Lx/ConnectX-7/BlueField-2 |
| ConnectX-4 and above | Supported on the following adapter cards unless specifically stated otherwise: ConnectX-4/ConnectX-4 Lx/ConnectX-5/ConnectX-6/ConnectX-6 Dx/ConnectX-6 Lx/ConnectX-7/BlueField-2 |
For a list of features from previous versions, see the Release Notes Change Log History section.
| Feature/Change | Description |
|---|---|
| 24.04-0.6.6.0 | |
| Live Migration | Added support for Live Migration. Live migration refers to the process of moving a guest virtual machine (VM) running on one physical host to another host without disrupting normal operations or causing any downtime or other adverse effects for the end user. For further information, see SR-IOV Live Migration. |
| Linux Bridge Multicast Offload | Added support for snooping multicast control-path packets (IGMP/MLD) and offloading MDB notifications in order to replicate multicast traffic to multiple destinations in hardware. |
| Netdev Interface Maximum Channels | Increased the maximum number of netdev channels from 128 to 256 for systems with a high number of cores. |
| Page Management via Kernel's Page Pool | Enhanced performance by removing the internal page cache from the Rx path, thus allowing the driver to always use the kernel's page_pool. |
| XDP Enhancements | Enhanced performance by adding the following modifications in XDP: |
| DC MRA Caps | Increased the maximum number of outstanding DC atomic reads to 32, enabling the driver to handle reads larger than 8K. log_max_ra_{res/req}_dc is now exposed in the DV API, enabling the user to see both RC and DC values. |
| SyncE Userspace Support through the Linux Kernel DPLL Subsystem | Added support for the Linux kernel DPLL subsystem as a mechanism for working with clock signals in NVIDIA's proprietary Synchronous Ethernet protocol daemon. This new mechanism enables the use of both VFs and SFs, and is supported starting from korg6.8. |
| Queued MADs Received | Limited the number of MADs received from the wire and queued in the kernel while waiting for the user-space application to 200K per umad file. This prevents malicious or faulty behavior from flooding a node with MADs and exhausting its memory. |
| Bridge Debuggability Extensions | Added the option to expose offloaded FDB state (flags, counters, etc.) via debugfs for debugging purposes. For example: |

```
$ cat mlx5/0000\:08\:00.0/esw/bridge/bridge1/fdb
DEV         MAC                VLAN  PACKETS  BYTES  LASTUSE     FLAGS
enp8s0f0_1  e4:0a:05:08:00:06  2     2        204    4295567112  0x0
enp8s0f0_0  e4:0a:05:08:00:03  2     3        278    4295567112  0x0
```
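The bridge multicast offload above builds on standard Linux bridge snooping; the bridge and port names below (`br0`) are example placeholders, and the exact offload behavior depends on the adapter and driver version. A minimal sketch of enabling snooping and inspecting the multicast database with iproute2:

```shell
# Enable IGMP/MLD snooping on an existing bridge (br0 is an example name);
# the driver can then offload the resulting MDB entries to hardware.
ip link set br0 type bridge mcast_snooping 1

# List the multicast database entries the bridge has learned.
bridge mdb show dev br0

# Show detailed MDB output, including per-entry state.
bridge -d mdb show dev br0
```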
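The increased channel maximum can be queried and applied with ethtool; `eth0` is a placeholder interface name, and the maximum actually reported depends on the adapter and the number of cores in the system. A sketch:

```shell
# Query the current and maximum channel counts for the interface.
ethtool -l eth0

# Raise the number of combined channels (up to 256 on supported
# high-core-count systems, per this release).
ethtool -L eth0 combined 256
```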
General |
| Customer Affecting Change | Description |
|---|---|
| Dynamic EQ | When an EQ is not created, the default affinity value is empty; therefore, the "show" command output will show an empty affinity. |
| MLNX_OFED Verbs API Migration | As of the MLNX_OFED v5.0 release (Q1 2020), the MLNX_OFED Verbs API has migrated from the legacy user-space verbs libraries (libibverbs, libmlx5, etc.) to the upstream rdma-core version. For the list of MLNX_OFED verbs APIs that have been migrated, refer to the Migration to RDMA-Core document. |