Changes and New Feature History
This section includes the history of changes and new features for the last 3 major releases. For the history of older releases, please refer to the relevant firmware versions.
Feature/Change | Description |
40.45.1020 | |
PCIe Gen6 - DRS | PCIe Gen 6 – DRS (Data Reuse and Replay) improves data reliability at high speeds. It works alongside error correction to efficiently resend only the necessary parts of data when errors occur, helping maintain high performance and low latency in PCIe 6.0’s faster, more complex signaling. |
NC-SI Pass-Through | NC-SI Pass-Through enables direct network traffic forwarding between a host system (such as a server) and a management controller (such as a BMC) via a network interface card (NIC), particularly in scenarios involving SmartNICs. |
Bug Fixes | See Bug Fixes in this Firmware Version section. |
Feature/Change | Description |
40.44.1036 | |
Static Split 8x100G ConnectX-8 to Spectrum-4 with SM Modules | A static split of 8x100G channels from a ConnectX-8 SuperNIC to a Spectrum-4 switch allows the system to use Single Mode (SM) optical modules for high-speed data transmission across a long-distance fiber link. This setup is typically used in high-performance networks where there is a need for high throughput (e.g., 800G in total bandwidth) with low latency, such as in data centers or high-performance computing environments. |
DOCA Telemetry | DOCA Telemetry enables users to monitor and collect data related to the performance, health, and behavior of systems or applications running on DOCA. To avoid a prolonged sampling period, it is recommended to configure all PCIe-related Diagnostic Data IDs sequentially, one after another. |
PCIe Switch fwreset | Added support for a new synchronized flow, including a tool and driver, to perform a fwreset on setups with a PCIe switch configuration. |
PTP | Unified PTP is now supported across different VFs on the same PF. |
Dual-Mode Temperature Compensated Crystal Oscillator (DC-TCXO) and Synchronous Ethernet (SyncE) Source | DC-TCXO is now used as the timing source for SyncE, providing an accurate and stable clock for the synchronized operation of network devices that rely on Ethernet for timing. |
DPA Application Signing | Allows DOCA applications signed with OEM/NVIDIA certificate private keys to be loaded onto the DPA engine, after the OEM/NVIDIA root certificates are installed on the NIC. |
Data-Path Accelerator (DPA) | The DPA hardware version is now exposed as a new capability, labeled "dpa_platform_version." |
Block SMP Traffic | Added a new NV config (SM_DISABLE, default 0) which, when enabled, blocks SMP traffic that does not originate from the SM. |
Dynamic Long Cables | Added the ability to set cable length as a parameter in the PFCC access register. The cable length is used in the calculation of RX lossless buffer parameters, including size, Xoff, and Xon thresholds. |
Bug Fixes | See Bug Fixes in this Firmware Version section. |
Feature/Change | Description |
40.44.0212 | |
Segment on PCIe Switch | Added support for Segment on PCIe switch. |
AER on PCIe Switch Bridge | Added support for AER on PCIe switch bridge. |
Bug Fixes | See Bug Fixes in this Firmware Version section. |
Feature/Change | Description |
40.44.0208 | |
General | This is the initial firmware release of the NVIDIA® ConnectX®-8 SuperNIC. ConnectX-8 has the same feature set as the ConnectX-7 adapter card. For the list of ConnectX-7 firmware features, please see the ConnectX-7 Firmware Release Notes. The features described here are new features in addition to the ConnectX-7 set. |
Link Speed | NVIDIA® ConnectX®-8 SuperNIC supports 800Gb/s XDR IB or 2 x 400GbE link speeds. Note: 800GbE link speed is not supported on a single port. |
Planarized Topology Network | ConnectX®-8 SuperNIC uses a planarized topology network to reach Extended Data Rate (XDR) performance. |
Direct NIC-GPU Datapath | To read/write data directly from the GPU and to overcome the Grace CPU PCIe bandwidth bottleneck, a direct NIC-GPU datapath is required. To achieve this, the HCA exposes a side DMA engine as an additional PCIe function called “Data Direct”. This additional DMA engine allows a vHCA to access data buffers through it using an MKEY, providing multiple PCIe data-path interfaces. Such behavior is needed in scenarios where different memory regions require different PCIe data paths, e.g., NUMA (Non-Uniform Memory Access) systems (a related registration sketch appears after this table). A vHCA is allowed to use a Data Direct function if it supports only the following fields: |
Congestion Control | Congestion Control provides performance isolation when multiple applications are running on the same cluster. Additionally, it prevents congestion spreading when there is a slow receiver, reduces latency in the cluster, improves fairness, and prevents parking-lot effects and packet drops in lossy networks. |
Multiple Encapsulation/Decapsulation Operation on a Packet | This capability enables the encapsulation table to be opened on both the FDB and the NIC tables together. |
Crypto Algorithms | Extended the role-based authentication to cover all crypto algorithms. Now the |
RoCE: Adaptive Timer | Enabled the ADP timer to allow the user to configure RC or DC qp_timeout values lower than 16 (a minimal verbs sketch appears after this table). |
Multiple-Window in DPA Mode | Multi-window capability is now supported in DPA mode. |
Doorbell Less QP | The new capability enables the user to use a send queue without a doorbell record. To create a doorbell-less QP/SQ, set |
Packet's Flow Label Fields | The |
ODP Event | The following prefetch fields are available in the ODP event: pre_demand_fault_pages, post_demand_fault_pages. |
Jump from NIC_TX to FDB_TX | The user can jump from the NIC_TX table to the FDB_TX table. |
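As a point of reference for the Direct NIC-GPU Datapath entry above, the following is a minimal sketch of the generic rdma-core flow for registering a GPU buffer exported as a dma-buf so that the NIC can DMA to and from it directly. Selecting the Data Direct DMA engine itself requires additional, device-specific setup that is not shown here; the helper name register_gpu_buffer and the dmabuf_fd parameter are illustrative assumptions, and error handling is omitted.

```c
/* Sketch: register GPU memory (exported as a dma-buf) so the NIC can
 * access it directly over PCIe. The dma-buf fd is assumed to have been
 * exported by the GPU driver; routing the traffic through the "Data
 * Direct" PCIe function is a separate, device-specific step not
 * covered here. */
#include <infiniband/verbs.h>

static struct ibv_mr *register_gpu_buffer(struct ibv_pd *pd, int dmabuf_fd,
                                          size_t length, uint64_t iova)
{
        int access = IBV_ACCESS_LOCAL_WRITE |
                     IBV_ACCESS_REMOTE_READ |
                     IBV_ACCESS_REMOTE_WRITE;

        /* offset 0 within the dma-buf; the MR's iova is chosen by the caller */
        return ibv_reg_dmabuf_mr(pd, 0, length, iova, dmabuf_fd, access);
}
```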
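For the RoCE Adaptive Timer entry above, the following is a minimal sketch, assuming an RC QP that has already reached the RTR state, of setting a qp_timeout index lower than 16 during the RTS transition using standard verbs. The helper name move_to_rts_with_low_timeout and the specific attribute values are illustrative; error handling is omitted.

```c
/* Sketch: move an RC QP to RTS with a local ACK timeout index below 16,
 * which the adaptive (ADP) timer now permits. */
#include <infiniband/verbs.h>

static int move_to_rts_with_low_timeout(struct ibv_qp *qp)
{
        struct ibv_qp_attr attr = {
                .qp_state      = IBV_QPS_RTS,
                .timeout       = 10,   /* local ACK timeout index, < 16 */
                .retry_cnt     = 7,
                .rnr_retry     = 7,
                .sq_psn        = 0,
                .max_rd_atomic = 1,
        };

        return ibv_modify_qp(qp, &attr,
                             IBV_QP_STATE | IBV_QP_TIMEOUT | IBV_QP_RETRY_CNT |
                             IBV_QP_RNR_RETRY | IBV_QP_SQ_PSN |
                             IBV_QP_MAX_QP_RD_ATOMIC);
}
```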