Changes and New Features

NVIDIA ConnectX-5 Adapter Cards Firmware Release Notes v16.35.3502 LTS

Security Hardening Enhancements: This release contains important reliability improvements and security hardening enhancements. NVIDIA recommends upgrading your devices' firmware to this release to improve firmware security and reliability.


SR-IOV - Virtual Functions (VFs) per Port - The maximum number of Virtual Functions (VFs) per port is 127. For further information, see RoCE Limitations.


It is recommended to enable the “above 4G decoding” BIOS setting for features that require a large amount of PCIe resources.

Such features are: SR-IOV with numerous VFs, PCIe Emulated Switch, and Large BAR Requests.




Single PF per NUMA

Added support for BMC with a single PF per NUMA in Socket-Direct adapter cards.

Note: Single PF per NUMA should not be enabled in multi-host configurations.

OpenSNAPI Communication Channel

The communication channel is used to enable communication between processes on different vHCAs regardless of their network connectivity state.

RegC Register

Exposed an additional steering register in the hardware (reg_c_6).

QP Resources

Added a new NvConfig parameter, LOG_MAX_QUEUE, to set the maximum number of work queue resources (QP, RQ, SQ, and so on) that can be created per function.

The default value is 2^17 (131,072).
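NvConfig parameters of this kind are typically managed with the mlxconfig tool; a minimal sketch follows. The device path is illustrative and depends on your setup:

```shell
# Query the current value (device path is an example; adjust to your system)
mlxconfig -d /dev/mst/mt4119_pciconf0 query LOG_MAX_QUEUE

# Set the maximum number of work queue resources to 2^17 (the default)
mlxconfig -d /dev/mst/mt4119_pciconf0 set LOG_MAX_QUEUE=17

# A firmware reset or host reboot is required for the change to take effect
```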

Congestion Control Key

Added a Congestion Control Key to all Congestion Control MADs to authenticate that they originate from a trusted source.

SMP Firewall

Added an SMP firewall that blocks SMPs (MADs sent on QP0 from the Subnet Manager) sent from unauthorized hosts, preventing fake SMPs from being recognized as coming from the SM.

Vendor Specific MADs: Class 0x9

Vendor Specific MADs Class 0x9 is no longer supported by the firmware. If the firmware detects such a MAD, it returns a "NOT SUPPORTED" error to the user.

Match Definer Object

Added support for a new steering match definer format (format 33).

Graceful Teardown

The teardown of a hotplugged emulated device (a.k.a. the unplug flow) is performed in the reverse order of the plug flow. However, certain legacy host software stacks do not support surprise removal of PCIe PF devices.

To support such host software stacks, the emulation manager software performs a graceful teardown.

AES-XTS Encryption / Decryption

Enabled disk encryption services using the aes_xts protocol to allow inline data encryption and decryption towards a remote or a local disk/NVDIMM.

TLS/XTS/Signature Padding

Blocked the VF's ability to use both padding and signature simultaneously, in order to prevent the NIC from hanging.

Asserts' Severity Level

Added three new assert filters (Health buffer, NVlog, FW trace). An assert is now exposed if its severity level is equal to or above the corresponding filter.

The filters are configurable via the ini file. The "Health buffer" filter is also configurable via a new access register.


Synchronous Ethernet (SyncE)

Added support for clock frequency synchronization based on the Synchronous Ethernet protocol.

Note: This capability is not supported with link speeds of 50G and higher, and cannot run in parallel with diagnostic counters.

Socket-Direct Adapter Cards

Added support for:

  • Flow Direct

  • LACP

  • GRE Offload

Rate Limit per VM instead of VM-TC

Enabled rate limiting per VM instead of per VM-TC. This capability is implemented by adding support for a new scheduling element type: rate limit elements that connect to the rate_limit and share its rate limit.

Cross GVMI Memory Key

The cross GVMI memory key allows cross-GVMI memory access using indirect memory registration that crosses the vHCA context.

Steering LAG Mode (Hash LAG)

[Beta] The new LAG mode, PORT_SELECT_FT LAG (hash LAG), distributes packets across ports according to a hash of the packet headers, instead of according to the QP (queue affinity, i.e., legacy LAG), to avoid cases where slow-path and fast-path packets are transmitted from different ports.

Identifying the right port is done by using destination type UPLINK with destination_eswitch_owner_vhca_id_valid set and destination_eswitch_owner_vhca_id indicating the PF associated with the port.

Below are the Queue Affinity and Steering LAG (hash) limitations:

  • Queue Affinity (legacy LAG) limitation:

    • LAG cannot be created when other functions on eSwitch are active (VFs, SFs, and x86 PF for SmartNIC).
      Note: This limitation does not exist in older firmware versions (xxx.31.1014 and below) when setting mlxconfig HIDE_PORT2_PF parameter for SmartNICs. This solution is no longer applicable when using firmware version xx.31.2xxx.

  • Steering LAG (hash) limitations:

    • LAG cannot be created when other functions on the eSwitch are active (VFs, SFs, and x86 PF for SmartNIC).
      Note: This limitation can be solved by setting LAG_RESOURCE_ALLOCATION=1

    • Cannot create PORT_SELECT_FT LAG when the SQs on PF (connected to strict TIS) are opened.
      Note: This limitation can be solved by setting LAG_RESOURCE_ALLOCATION=1

      Note: It is recommended to set LAG_RESOURCE_ALLOCATION=1 before configuring the PORT_SELECT_FT lag. LAG_RESOURCE_ALLOCATION will pre-allocate needed firmware and hardware resources in order to CREATE_LAG with less limitations.

    • Can be used only with Software Steering due to a bug with strict SQs in Firmware Steering.

Note: Due to changes in this feature, transmission timestamp in CQE is temporarily unsupported with multi eSwitch.
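Per the notes above, LAG_RESOURCE_ALLOCATION is recommended before configuring the PORT_SELECT_FT LAG. A minimal mlxconfig sketch (device path illustrative; a firmware reset is needed for the setting to apply):

```shell
# Pre-allocate firmware/hardware LAG resources to relax CREATE_LAG limitations
mlxconfig -d /dev/mst/mt4119_pciconf0 set LAG_RESOURCE_ALLOCATION=1

# Confirm the configured value
mlxconfig -d /dev/mst/mt4119_pciconf0 query LAG_RESOURCE_ALLOCATION
```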

QSHR Access Register

Added support for QSHR access register to enable Set and Query rate limit per-host per-port.

New Software Steering ICM Resource for VXLAN Encapsulation

The firmware now exposes a new Software Steering ICM resource for VXLAN encapsulation expansion, allowing Software Steering to manage this resource directly.

Asymmetrical VFs per PF

Added support for asymmetrical VFs per PF.

To enable it:
PF_NUM_OF_VF_VALID must be true, and PF_NUM_OF_VF must be set to a non-zero value.

mlxlink Support to read/write Access Registers by LID

Added two new MAD access registers to enable mlxlink to read/write access registers by LID (across the whole subnet).

Virtio-net Full Emulation

Enabled the option to dynamically modify the MSIX and the number of virtio VF device queues.

Note: This modification must be done before loading the driver on the device.

This new capability includes the following limitations:

  • The total queue/MSI-X number cannot exceed 2K.

  • The queue/MSI-X count per virtio VF device cannot exceed 64.

  • The virtio device scale configured via mlxconfig is limited to below 127.
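The three limits above interact (total budget vs. per-device count vs. device scale). The following is an illustrative helper, not an NVIDIA tool, that checks a proposed layout against the documented limits:

```python
# Illustrative check of a virtio-net full-emulation queue/MSI-X layout
# against the limits listed above (2K total, 64 per VF, fewer than 127 devices).
def valid_virtio_layout(num_vfs: int, queues_per_vf: int) -> bool:
    """Return True if the layout satisfies the documented limits."""
    if num_vfs >= 127:                  # device scale is limited to below 127
        return False
    if queues_per_vf > 64:              # per-VF queue/MSI-X limit
        return False
    if num_vfs * queues_per_vf > 2048:  # total queue/MSI-X budget of 2K
        return False
    return True

print(valid_virtio_layout(32, 64))  # 32 * 64 = 2048, within budget -> True
print(valid_virtio_layout(64, 64))  # 64 * 64 = 4096, over budget   -> False
```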

Bug Fixes

See Bug Fixes in this Firmware Version section.

© Copyright 2023, NVIDIA. Last updated on May 23, 2023.