Release Notes Change Log History
PXE boot option is enabled automatically and is available for the ConnectX and OOB network interfaces
Added Vendor Class option "BF2Client" to the DHCP request during PXE boot to identify the card
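As an illustration, the hypothetical ISC dhcpd fragment below shows how a DHCP server could match on this vendor class to serve BlueField-2 PXE clients; the class name, addresses, and boot file name are placeholders, not values from this release:

# dhcpd.conf fragment: match DHCP requests carrying the "BF2Client" vendor class
class "bf2-pxe-clients" {
    match if substring (option vendor-class-identifier, 0, 9) = "BF2Client";
}
subnet 192.168.100.0 netmask 255.255.255.0 {
    pool {
        allow members of "bf2-pxe-clients";
        range 192.168.100.10 192.168.100.100;
        filename "bf2-bootloader.efi";
    }
}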
Updated the "force PXE" functionality to continue to retry PXE boot entries until successful. A configuration called "boot override retry" has been added. With this configured, UEFI does not rebuild the boot entries after all boot options are attempted but loops through the PXE boot options until booting is successful. Once successful, the boot override entry configuration is disabled and would need to be reenabled for future boots.
Added ability to change the CPU clock dynamically according to the temperature and other sensors of the DPU. If the power consumption reaches close to the maximum allowed, the software module decreases the CPU clock rate to ensure that the power consumption does not cross the system limit.
Warning: This feature is relevant only for OPNs MBF2H516C-CESOT, MBF2M516C-EECOT, MBF2H516C-EESOT, and MBF2H516C-CECOT.
Bug fixes
Added ability to perform warm reboot on BlueField-2 based devices
Added support for DPU BMC with OpenBMC
Added support for NVIDIA Converged Accelerator (900-21004-0030-000)
Added beta-level support for embedded BMC DPUs (OPNs MBF2H512C-AECOT, MBF2H512C-AESOT, MBF2M355A-VECOT, MBF2M345A-VENOT, and MBF2M355A-VESOT). Please contact NVIDIA Support for a compatible BFB image.
Added support for Queue Affinity and Hash LAG modes
Added a configurable UEFI PXE DHCP vendor-class option that allows users to set the DHCP class identifier for PXE in bf.cfg; the DHCP server can use this identifier to assign IP addresses or boot images. Usage (see the example below):
Add PXE_DHCP_CLASS_ID=<string_identifier_up_to_9_characters> to bf.cfg, then push the bf.cfg together with the BFB; or
Add the line PXE_DHCP_CLASS_ID=<string_identifier_up_to_9_characters> to /etc/bf.cfg inside the Arm Linux OS and run the command bfcfg
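For example, a minimal sketch of the second option (the identifier value MyPXEID01 is an illustrative placeholder):

# echo "PXE_DHCP_CLASS_ID=MyPXEID01" >> /etc/bf.cfg
# bfcfg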
Added VPI support so ports on the same DPU may run traffic in different protocols (Ethernet or InfiniBand)
Added ability to hide network DPU ports from host
SFs are no longer supported using mdev (mediated device) interface. Instead, they are managed using the new "mlxdevm" tool located in /opt/Mellanox/iproute2/sbin/mlxdevm.
SF representors were previously named pf0sf0, pf1sf0, etc. To better identify SF representors based on the user-defined SF number and to associate them with their parent PCIe device, they are now named en3f0pf0sf0, en3f1pf1sf0, etc.
Added support for SF QoS and QoS group via mlxdevm's rate commands. Run man mlxdevm port for details.
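A minimal sketch of creating and activating an SF with mlxdevm, assuming the devlink-style port syntax; the PCIe address, sfnum, port index, and MAC address below are placeholders:

# /opt/Mellanox/iproute2/sbin/mlxdevm port add pci/0000:03:00.0 flavour pcisf pfnum 0 sfnum 4
# /opt/Mellanox/iproute2/sbin/mlxdevm port function set pci/0000:03:00.0/229408 hw_addr 00:00:00:00:04:00 state active
# /opt/Mellanox/iproute2/sbin/mlxdevm port show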
Added ability to disable host networking physical functions
Added GA-level support for VirtIO-net Emulated Devices
Shared RQ mode is now enabled by default
To reduce boot time, systemd-networkd-wait-online.service and networking.service have been configured with a 5-second timeout:
# cat /etc/systemd/system/systemd-networkd-wait-online.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/lib/systemd/systemd-networkd-wait-online --timeout=5

# cat /etc/systemd/system/networking.service.d/override.conf
[Service]
TimeoutStartSec=
TimeoutStartSec=5
ExecStop=
ExecStop=/sbin/ifdown -a --read-environment --exclude=lo --force --ignore-errors
Added support for DOCA SDK v1.0
Enhanced password protection forcing unique password generation on first access
Added support for secure boot enablement on supported DPUs
Increased scaling to 504 VFs with SR-IOV and Virtio-net functionality
Memory optimizations for large scale:
Sharing receive queues between all representor ports on the DPU
70% reduction of memory for each virtual/physical function opened on the host
30% reduction of memory for steering rules offloaded to the embedded switch
Added support for BlueField SNAP hotplug
Added support for Virtio-BLK Bonding
Added support for quality of service when hardware LAG is enabled
Bug fixes
Changed default SmartNIC configuration to embedded function mode
OVS is configured to allow traffic to and from the host by default
Running RDMA applications is enabled by default both from the DPU and the host
UCX is included as part of the DPU SW image
Improved image installation script
Added support for Geneve tunnel HW offload for OVS-DPDK
Added GA-level support for IPsec full offload
Integration of StrongSwan with IPsec HW offload
Integration of StrongSwan with OpenSSL and the HW Public Key Acceleration engine
Added support for VirtIO-net device emulation
HotPlug support for VirtIO-blk and VirtIO-net devices
Support for up to 127 VirtIO virtual functions
Added beta-level support for RegEx acceleration
Added alpha-level support for deep packet inspection
Added support for CentOS 8.2 and Debian 10 as BlueField DPU OS