Changes and New Features in 3.9.3
- Added support for live migration of VirtIO-net and VirtIO-blk VFs from one VM to another. Requires working with the new vDPA driver.
- OS configuration – enabled tmpfs in
Changes and New Features in 3.9.2
- Added support for Arm host
- Enrolled new NVIDIA certificates in the DPU UEFI database
Important: User action required! See known issue #3077361 for details.
Changes and New Features in 3.9.0
This is the last release to offer GA support for first-generation NVIDIA® BlueField® DPUs.
- Added support for NIC mode of operation
- Added password protection for changing boot parameters in the GRUB menu
- Added IB support for DOCA runtime and dev environment
- Implemented RShim PF interrupts
- Split virtio-net-controller into two processes for fast recovery after a service restart
- Added support for live virtio-net controller upgrade instead of performing a full restart
- Expanded BlueField-2 PCIe bus number range to 254 (0-253)
- Added a new CAP field, log_max_queue_depth (value can be set to 4K), to indicate the maximum NVMe SQ and CQ sizes supported by firmware. This can be used by NVMe controllers or by non-NVMe drivers which do not rely on the NVMe CAP field.
- Added ability for the RShim driver to work even when the host is in secure boot mode
- Added the bfb-info command, which provides a breakdown of the software components bundled in the BFB package
- Added support for rate limiting VF groups
Changes and New Features in 3.8.5
- PXE boot option is enabled automatically and is available for the ConnectX and OOB network interfaces
- Added Vendor Class option "BF2Client" in the DHCP request for PXE boot to identify the card
- Updated the "force PXE" functionality to keep retrying PXE boot entries until one succeeds. A configuration option called "boot override retry" has been added. With this configured, UEFI does not rebuild the boot entries after all boot options have been attempted but loops through the PXE boot options until booting is successful. Once booting succeeds, the boot override entry configuration is disabled and must be re-enabled for future boots.
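On the DHCP-server side, the "BF2Client" vendor class can be used to hand DPUs a dedicated boot image. A minimal dnsmasq sketch (the tag name bf2, the boot file name grubaa64.efi, and the TFTP root are illustrative assumptions, not taken from this document):

```
# /etc/dnsmasq.d/bf2-pxe.conf (illustrative)
# Tag DHCP requests whose vendor class matches the DPU's "BF2Client"
dhcp-vendorclass=set:bf2,BF2Client
# Serve a boot file only to tagged clients; the file name is an assumption
dhcp-boot=tag:bf2,grubaa64.efi
enable-tftp
tftp-root=/srv/tftp
```

Untagged PXE clients on the same network are unaffected, since the dhcp-boot line applies only to requests carrying the matching vendor class.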
- Added ability to change the CPU clock dynamically according to the temperature and other sensors of the DPU. If power consumption approaches the maximum allowed, the software module decreases the CPU clock rate to ensure that power consumption does not cross the system limit.
This feature is relevant only for OPNs MBF2H516C-CESOT, MBF2M516C-EECOT, MBF2H516C-EESOT, and MBF2H516C-CECOT.
- Bug fixes
Changes and New Features in 3.8.0
- Added ability to perform warm reboot on BlueField-2 based devices
- Added support for DPU BMC with OpenBMC
- Added support for NVIDIA Converged Accelerator (900-21004-0030-000)
Changes and New Features in 3.7.1
- Added beta-level support for embedded BMC DPUs (OPNs MBF2H512C-AECOT, MBF2H512C-AESOT, MBF2M355A-VECOT, MBF2M345A-VENOT, and MBF2M355A-VESOT). Please contact NVIDIA Support for a compatible BFB image.
- Added support for Queue Affinity and Hash LAG modes
- Configurable UEFI PXE/DHCP vendor-class option that allows users to configure the DHCP class identifier for PXE in bf.cfg, which can be used by the DHCP server to assign IP addresses or boot images. Usage: either add the line to bf.cfg and push bf.cfg together with the BFB, or add the line to /etc/bf.cfg inside Arm Linux and run the command.
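As an illustrative sketch of the first flow, assuming the bf.cfg key is named PXE_DHCP_CLASS_ID with an example value (both the key name and the value are assumptions here; confirm against the bf.cfg reference for your BFB release):

```
# bf.cfg sketch -- the key name PXE_DHCP_CLASS_ID and its value are
# assumptions; check the bf.cfg reference for your release
PXE_DHCP_CLASS_ID=NVIDIA/BF/PXE

# From the host, push the config together with the BFB
# (bfb-install option names are assumptions):
#   bfb-install --bfb <BFB image> --config bf.cfg --rshim rshim0
```

On the server side, the DHCP service can then match this class identifier to decide which address pool or boot image the DPU receives.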
Changes and New Features in 3.7.0
- Added VPI support so ports on the same DPU may run traffic in different protocols (Ethernet or InfiniBand)
- Added ability to hide network DPU ports from host
- SFs are no longer supported using the mdev (mediated device) interface. Instead, they are managed using the new "mlxdevm" tool located in
- SF representors have been named pf1sf0, etc. However, to better identify SF representors based on the user-defined SF number and to associate them with their parent PCIe device, they are now identified as
- Added support for SF QoS and QoS groups via mlxdevm's rate commands. Run man mlxdevm port for details.
- Added ability to disable host networking physical functions
- Added GA-level support for VirtIO-net Emulated Devices
- Shared RQ mode is now enabled by default
- To reduce boot time, systemd-networkd-wait-online.service and networking.service have been configured with a 5-second timeout:
# cat /etc/systemd/system/systemd-networkd-wait-online.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/lib/systemd/systemd-networkd-wait-online --timeout=5

# cat /etc/systemd/system/networking.service.d/override.conf
[Service]
TimeoutStartSec=
TimeoutStartSec=5
ExecStop=
ExecStop=/sbin/ifdown -a --read-environment --exclude=lo --force --ignore-errors
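The two drop-in files above can be recreated with a short script. This is a sketch: DEST defaults to a scratch directory so it can be dry-run safely; set DEST=/etc/systemd/system (as root, followed by systemctl daemon-reload) to apply it on the DPU.

```shell
# Write the systemd drop-in overrides that cap network wait time at 5 s.
# DEST defaults to a scratch directory for a dry run; set
# DEST=/etc/systemd/system to install for real (requires root).
DEST="${DEST:-$(mktemp -d)}"

mkdir -p "$DEST/systemd-networkd-wait-online.service.d" \
         "$DEST/networking.service.d"

cat > "$DEST/systemd-networkd-wait-online.service.d/override.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/lib/systemd/systemd-networkd-wait-online --timeout=5
EOF

cat > "$DEST/networking.service.d/override.conf" <<'EOF'
[Service]
TimeoutStartSec=
TimeoutStartSec=5
ExecStop=
ExecStop=/sbin/ifdown -a --read-environment --exclude=lo --force --ignore-errors
EOF

echo "Drop-ins written under $DEST"
```

The empty `ExecStart=`/`TimeoutStartSec=`/`ExecStop=` lines are the standard systemd idiom for clearing an inherited setting before redefining it in a drop-in.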