Changes and New Feature History
This section includes the history of changes and new features for the last 3 major releases. For the history of older releases, please refer to the relevant firmware versions.
Feature/Change | Description |
20.32.1010 | |
GMP Classes | Added support for blocking unwanted GMP classes by dedicated MADs. |
QP Resources | Added a new NvConfig parameter LOG_MAX_QUEUE to set the maximum number of work queue resources (QP, RQ, SQ...) that can be created per function. The default value is 2^17. |
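As a rough illustration, a parameter like this is typically set with the mlxconfig tool. The sketch below is a hedged example only; the MST device path is a placeholder you would replace with the device listed by "mst status".

```python
import subprocess

# Placeholder MST device path -- replace with the device on your system.
DEVICE = "/dev/mst/mt4123_pciconf0"

def set_nv_param(name: str, value: int) -> None:
    """Set an NV configuration parameter via mlxconfig (takes effect after a firmware reset/reboot)."""
    subprocess.run(
        ["mlxconfig", "-d", DEVICE, "-y", "set", f"{name}={value}"],
        check=True,
    )

# LOG_MAX_QUEUE is a log2 value; 17 corresponds to the default of 2^17 resources.
set_nv_param("LOG_MAX_QUEUE", 17)
```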
Congestion Control Key | Added a Congestion Control Key to all Congestion Control MADs to authenticate that they are originated from a trusted source. |
SMP Firewall | Added an SMP firewall to block the option of sending SMPs (MADS sent on QP0 from the Subnet Manager) from unauthorized hosts to prevent fake SMPs from being recognized as the SM. |
Vendor Specific MADs: Class 0x9 | Vendor Specific MADs Class 0x9 is no longer supported by the firmware. In case the firmware detects such a MAD, it returns a "NOT SUPPORTED" error to the user. |
TLS/XTS/Signature Padding | Blocked the VF's ability to use both padding and signature in order to prevent the NIC from hanging. |
Asserts' Severity Level | Added 3 new assert filters (Health buffer, NVlog, FW trace). An assert is now exposed if its severity level is equal to or above the configured filter. The filters are configurable via the ini file; the "Health buffer" filter is also configurable via a new access register. |
VUID VPD Virtio | An emulated PCI device can be hot plugged/unplugged by the DPU software stack. However, the life cycle and the state of the bare metal host system where an emulated PCI device is plugged in are not under the control of the DPU software stack. Since the PCI BDF may not be available in corner cases, a predictable and stable emulated PCI device handle (VUID) is required (stable across emulation controller reset/restart and across DPU warm reboot). The VUID is shown in the PCI PF device VPD as the [VU] section. |
Rate Limit per VM instead of VM-TC | Enabled Rate Limit per VM instead of per VM-TC. This capability is implemented by adding support for a new scheduling element type: rate limit elements, which connect to the rate_limit and share its rate limit. |
Dynamically Connected Transport (DCT) with Adaptive Routing (AR) | Performance improvements in the DCT with AR flow by exposing a hint to the software in DCI software context that indicates that RDMA WRITE on this DCI is not supported. |
Dynamic Timeout Mechanism | Added support for dynamic timeout mechanism when in InfiniBand mode. |
QSHR Access Register | Added support for QSHR access register to enable Set and Query rate limit per-host per-port. |
New Software Steering ICM Resource for VXLAN Encapsulation | The firmware now exposes a new Software Steering ICM resource for VXLAN encapsulation expansion so that the software steering can manage this resource directly. |
Asymmetrical VFs per PF | Added support for asymmetrical VFs per PF. |
mlxlink Support to read/write Access Registers by LID | Added 2 new MAD access registers to enable mlxlink to read/write access registers by LID (to the whole subnet). |
VXLAN Encapsulation Expansion | Enabled the exposure of new ICM resource to the software steering for VXLAN encapsulation expansion. |
Bug Fixes | See Bug Fixes. |
20.31.1014 | |
Using NC-SI Commands for Debugging PCI Link Failures | Implemented a new NC-SI command get_debug_info to get mstdump via the NC-SI protocol to debug a device if the PCI link fails for any given reason. |
Enable/Disable RDMA via the UEFI HII System Settings | Added support for enabling/disabling NIC and RDMA (port/partition) via the UEFI HII system settings. Note: Values set in this option only take effect when in Ethernet mode. |
NC-SI Speed Reporting | Updated the NC-SI speed reporting output to support 200GbE speed. Now when running the NC-SI command, the output presents 200GbE speed as well. |
Increased the Maximum Number of MSIX per VF | Increased the maximum number of MSIX per VF to 127. |
Asymmetrical MSIX Configuration | This feature allows the device to be configured with a different number of MSIX vectors per physical PCI function. |
RDMA, NC-SI | Added support for RDMA partitioning and RDMA counters in IB mode. |
Adaptive Routing (AR): multi_path, data_in_order | Added a new bit ("data_in_order") to query the QP and allow a process/library to detect when the AR is enabled. |
flex_parser for GENEVE Hardware Offload and ICMP | Added a new flex parser to support GENEVE hardware offload and ICMP. |
Non-Page-Supplier-FLR | When the non-page-supplier-FLR function is initiated, the firmware triggers a page event to the page supplier to indicate that all pages should be returned for the FLR function. Pages are returned by the driver to the kernel without issuing the MANAGE_PAGES commands to the firmware. |
PCIe Eye Opening | Enabled measuring PCIe eye dynamic grading over PCIe Gen3 speed. |
User Memory (UMEM) | Enabled UID 0 to create resources with UMEM. |
Native IB Packets | Added support for receiving and sending native IB packets from/to the software (including all headers) via raw IBL2 QPs. |
InfiniBand Packet Steering | Added support for RX RDMA NIC flow table on an IB port. Now the software can steer native IB packets to raw IB receive queues according to the DLID and the DQPN. |
Steering | Added support for matching field ipv4_ihl in create_flow_group and set_flow_table_entry commands. |
Bug Fixes | See the Bug Fixes History section. |
20.30.1004 | |
PAM4 | Added support for PAM4 Auto Negotiation and Link Training in 200GbE link speed. |
RoCE, Lossy, slow_restart_idle | Removed triggering unexpected internal CNPs for RoCE Lossy slow_restart_idle feature. |
KR-Startup in Auto-Negotiation | Enabled KR-Startup in Auto-Negotiation mode for PAM4. |
Performance: Steering | Added support for a new NV config mode “icm_cache_mode_large_scale_steering” that enables less cache misses and improves performance for cases when working with many steering rules. |
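If this mode is exposed through mlxconfig (an assumption here; the parameter name ICM_CACHE_MODE below is hypothetical, since the note only gives the mode name), enabling it might look like the following sketch:

```python
import subprocess

DEVICE = "/dev/mst/mt4123_pciconf0"  # placeholder MST device path

# Hypothetical parameter name/value -- the release note only names the mode
# "icm_cache_mode_large_scale_steering"; consult mlxconfig for the real knob.
subprocess.run(
    ["mlxconfig", "-d", DEVICE, "-y", "set",
     "ICM_CACHE_MODE=large_scale_steering"],
    check=True,
)
```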
Active-State Power Management (ASPM) | Added support for power saving in L1 ASPM link state. |
VF/VF-group rate-limiting | This new capability enables VF/VF-group rate-limiting while per-host rate-limiter is also applied. |
Bug Fixes | See the Bug Fixes History section. |
20.29.2002 | |
Remote Loopback in NRZ | [Beta] Enabled remote (PMA) loopback in NRZ, Rx-to-Tx. Note: To use the PMA loopback, both sides should be in Force Mode (AN Disabled). |
PAM4 | Improved PAM4 200GbE link-up time; link-up now takes less than 5 seconds. |
Reserved QPN | [Beta] This capability allows the software to reserve a QPN that can be used to establish connections over RDMA_CM, and provides the software with a unique QP number. Since RDMA_CM does not support DC, by using CREATE_QPN_RESERVED_OBJECT the software can reserve a QPN value from the firmware's managed QP number namespace range. This allows multiple software processes to hold a unique QPN value instead of using UD-QPs. |
Bug Fixes | See the Bug Fixes History section. |
20.29.1016 | |
Cable Firmware Burning | [Beta] Added support for LinkX module burning via MFT toolset. The new capability enables direct firmware burning from the internal flash storage to reduce the bandwidth and accelerate the burning process, including burning several modules at a time. |
Eye-Opening | [Beta] Eye-opening is supported only when using NRZ signal. |
Multi-Application QoS per QP | Added the option to allow applications to build their own QoS tree over the NIC hierarchy by connecting QPs to responder/requestor Queue Groups. |
NRZ Link Performance | Improved NRZ link performance (RX algorithm). |
NRZ Link-Up Time | Improved NRZ link-up time (25G/50G/100G speeds). |
Tx Sets | Enabled options to control different Tx sets for the same attribute when connecting Mellanox-to-Mellanox vs. Mellanox to a 3rd party HCA. |
InfiniBand Support in RDE | Added "InfiniBand" properties set to the Network Device Function Redfish object. |
Direct Packet Placement (DPP) | Added support for Direct Packet Placement (DPP). DPP is a receive side transport service in which the Ethernet packets are scattered to the memory according to a packet sequence number (PSN) carried by the packet, and not by their arrival order. To enable DPP offload, the software should create a special RQ by using the CREATE_RQ command, and set DPP relevant attributes. |
HW Offloads Enablement on VF | Added trust level for VFs. Once the VF is trusted, it will get a set of trusted capabilities. |
Enabling Adaptive-Routing (AR) for the Right SL via UCX | UCX can now enable AR by exposing an Out-Of-Ordering bitmask per SL with the "ooo_per_sl" field in the HCA_VPORT context. It can also be queried by running the QUERY_HCA_VPORT_CONTEXT command. |
InfiniBand Congestion Control | Enhanced IB Congestion Control to support lower minimum rate. Now it uses destination-lid to classify flows to handle larger scale, and achieve better results in GPCNeT benchmark. |
Steering Dump | Added hardware steering dump output used for debugging and troubleshooting. See Known Issue 2213356 for its limitations. |
20.28.4000 | |
PAM4 | PAM4 link performance improvement. |
Ethernet wqe_too_small Mode | Added a new counter per vPort that counts the number of packets that reached the Ethernet RQ but cannot fit into the WQE due to their large size. Additionally, added the option to control whether such a packet causes a “CQE with Error” or a “CQE_MOCK”. |
Access Registers | ignore_flow_level is now enabled by the TRUST LEVEL access register. |
Pause Frames from VFs | [Beta] Enabled the capability to allow Virtual Functions to send Pause Frames packets. |
Counters | Added support for the cq_overrun counter. The counter represents the number of times CQs enter an error state due to overflow, which occurs when the device tries to post a CQE into a full CQ buffer. |
Bug Fixes | See Bug Fixes. |
20.28.2006 | |
Sub Function (SF) BAR Size | Increased the minimum Sub Function (SF) BAR size from 128KB to 256KB. Due to the larger SF BAR size, for the same PF BAR2 size, which can be queried/modified by LOG_PF_BAR2_SIZE NV config, the firmware will support half of the SFs. To maintain the same amount of supported SFs, software needs to increase the LOG_PF_BAR2_SIZE NV config value by 1. |
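The arithmetic behind this note can be sketched as follows. This is an illustration only, assuming (for the sake of the example) that LOG_PF_BAR2_SIZE encodes log2 of the PF BAR2 size in bytes:

```python
KB = 1024

def supported_sfs(log_pf_bar2_size: int, sf_bar_bytes: int) -> int:
    # Number of SF BARs that fit in the PF BAR2.
    return (1 << log_pf_bar2_size) // sf_bar_bytes

log_pf_bar2 = 24                                      # example: 16 MB PF BAR2
before = supported_sfs(log_pf_bar2, 128 * KB)         # old 128 KB SF BAR -> 128 SFs
after = supported_sfs(log_pf_bar2, 256 * KB)          # new 256 KB SF BAR -> 64 SFs
restored = supported_sfs(log_pf_bar2 + 1, 256 * KB)   # LOG_PF_BAR2_SIZE + 1 -> 128 SFs

assert after == before // 2
assert restored == before
```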
AES-XTS | AES_XTS is used to perform all disk encryption/decryption related flows in the NIC and reduce cost and overheads of the related FIPS certification. |
GPUDirect in Virtualized Environment | Enabled direct access to ATS from the NIC to GPU buffers using PCIe peer-to-peer transactions. To enable this capability, the “p2p_ordering_mode” parameter was added to the NV_PCI_CONF configuration. |
Non-Volatile Configurations | Added a new Non-Volatile Configuration parameter to control VL15 buffer size (VL15_BUFFER_SIZE). Note: VL15 buffer size enlargement will decrease all other VLs buffers size. |
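A hedged mlxconfig sketch for this parameter follows; the device path is a placeholder, and since the note does not specify the value units, the value below is illustrative only:

```python
import subprocess

DEVICE = "/dev/mst/mt4123_pciconf0"  # placeholder MST device path

# Illustrative value only -- check your firmware documentation for valid sizes.
# Remember: enlarging the VL15 buffer shrinks the buffers of all other VLs.
subprocess.run(
    ["mlxconfig", "-d", DEVICE, "-y", "set", "VL15_BUFFER_SIZE=4096"],
    check=True,
)
```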
NC-SI | Added a new NC-SI command (get_device_id) to report a unique device identifier. |
NC-SI | Added new NC-SI commands (get_lldp_nb, set_lldp_nb) to query the current status of LLDP and to enable/disable it. |
ROCE ACCL | Split the SlowRestart ROCE_ACCL into the following: |
ROCE ACCL | Enabled TX PSN window size configuration using the LOG_TX_PSN_WINDOW NV config parameter. Note: Due to hardware limitations, the maximum log_tx_psn_win value that can be set is 9. |
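A hedged sketch that enforces the hardware limit mentioned above before setting the parameter (device path is a placeholder):

```python
import subprocess

DEVICE = "/dev/mst/mt4123_pciconf0"  # placeholder MST device path

def set_tx_psn_window(log_win: int) -> None:
    # Hardware limitation from the note above: log_tx_psn_win may not exceed 9.
    if not 0 <= log_win <= 9:
        raise ValueError("log_tx_psn_win must be in the range [0, 9]")
    subprocess.run(
        ["mlxconfig", "-d", DEVICE, "-y", "set", f"LOG_TX_PSN_WINDOW={log_win}"],
        check=True,
    )

set_tx_psn_window(9)  # largest PSN window the hardware allows
```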
Bug Fixes | See Bug Fixes. |
20.28.1002 | |
EDR Link in ConnectX-6 100Gb/s cards | EDR link speed is now supported when using ConnectX-6 100Gb/s HCA and connecting with HDR optical cables. |
NC-SI 1.2 New Commands | Implemented the following new commands from the NC-SI 1.2 specification: |
NC-SI | Added support for Virtual node GUID, and for setting and getting the address through the NC-SI commands. |
Error Injection Port Level | Added the ability to inject iCRC/vCRC port level error using Port Transmit Error Register (PTER). |
In-Node Sync | Added support for in-node sync. |
IPoIB Virtualization Updates | Added the following IPoIB Virtualization updates: |
MPFS Forwarding Packets Behavior | This new feature defines the forwarding behavior in MPFS for packets arriving from the network (uplink) with a destination MAC address that does not appear in the MPFS FDB. The feature is configured by a new NV configuration (UNKNOWN_UPLINK_MAC_FLOOD) which, when enabled, floods all local MPFS ports with these packets; otherwise, these packets are dropped. |
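As a hedged example, toggling the flood/drop behavior could look like this with mlxconfig; the device path is a placeholder, and the 1/0 enable/disable encoding is an assumption:

```python
import subprocess

DEVICE = "/dev/mst/mt4123_pciconf0"  # placeholder MST device path

def set_unknown_uplink_mac_flood(enabled: bool) -> None:
    """Enabled: flood all local MPFS ports with unknown-DMAC uplink packets.
    Disabled: drop those packets. The 1/0 encoding is assumed here."""
    value = 1 if enabled else 0
    subprocess.run(
        ["mlxconfig", "-d", DEVICE, "-y", "set",
         f"UNKNOWN_UPLINK_MAC_FLOOD={value}"],
        check=True,
    )

set_unknown_uplink_mac_flood(True)
```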
Hardware Tag Matching | Increased the maximum XRQ number to 512. |
Non-Volatile Configurations (NVCONFIG) | Added the following new mlxconfig parameters to the Non-Volatile Configurations section: |
Bug Fixes | See Bug Fixes. |
20.27.1016 | |
Link Protocol | Due to a change in the link protocol in 100GbE and 200GbE adapter cards (from PAM4 to NRZ), the link may not come up on certain configurations. For limitations related to this change, see issue 2094355. |