NVIDIA BlueField DPU BSP v3.9.6 LTS

Release Notes Change Log History

The following are the main fixes and updates included in this release. Additional minor fixes are also included.

  • OS BFB installer fixes

  • Platform configuration and initialization fixes

  • VirtIO emulation fixes

  • Server OEM production line fixes

  • Added support for live migration of VirtIO-net and VirtIO-blk VFs from one VM to another. This requires the new vDPA driver.

  • OS configuration – enabled tmpfs in /tmp

  • Added support for Arm host

  • Enroll new NVIDIA certificates to DPU UEFI database

    Important

    User action required! See known issue #3077361 for details.

Warning

This is the last release to offer GA support for first-generation NVIDIA® BlueField® DPUs.

  • Added support for NIC mode of operation

  • Added password protection to change boot parameters in GRUB menu

  • Added IB support for DOCA runtime and dev environment

  • Implemented RShim PF interrupts

  • Virtio-net-controller has been split into two processes to enable fast recovery after a service restart

  • Added support for live virtio-net-controller upgrade without performing a full restart

  • Expanded BlueField-2 PCIe bus number range to 254 (0-253)

  • Added a new CAP field, log_max_queue_depth (value can be set to 2K/4K), to indicate the maximum NVMe SQ and CQ sizes supported by firmware. This can be used by NVMe controllers or by non-NVMe drivers which do not rely on the NVMe CAP field.

  • Added ability for the RShim driver to work when the host is in secure boot mode

  • Added bfb-info command which provides the breakdown of the software components bundled in the BFB package

  • Added support for rate limiting VF groups

  • PXE boot option is enabled automatically and is available for the ConnectX and OOB network interfaces

  • Added Vendor Class option "BF2Client" in the DHCP request for PXE boot to identify the card

  • Updated the "force PXE" functionality to continue retrying PXE boot entries until successful. A configuration called "boot override retry" has been added. With this configured, UEFI does not rebuild the boot entries after all boot options have been attempted but loops through the PXE boot options until booting succeeds. Once successful, the boot override entry configuration is disabled and must be re-enabled for future boots.

  • Added ability to change the CPU clock dynamically according to the temperature and other sensors of the DPU. If power consumption approaches the maximum allowed, the software module decreases the CPU clock rate to ensure that power consumption does not exceed the system limit.

    Warning

    This feature is relevant only for OPNs MBF2H516C-CESOT, MBF2M516C-EECOT, MBF2H516C-EESOT, and MBF2H516C-CECOT.

  • Bug fixes

  • Added ability to perform warm reboot on BlueField-2 based devices

  • Added support for DPU BMC with OpenBMC​

  • Added support for NVIDIA Converged Accelerator (900-21004-0030-000)

  • Added beta-level support for embedded BMC DPUs (OPNs MBF2H512C-AECOT, MBF2H512C-AESOT, MBF2M355A-VECOT, MBF2M345A-VENOT, and MBF2M355A-VESOT). Please contact NVIDIA Support for a compatible BFB image.

  • Added support for Queue Affinity and Hash LAG modes

  • Added a configurable UEFI PXE/DHCP vendor-class option that allows users to set the DHCP class identifier for PXE in bf.cfg. The DHCP server can use this identifier to assign IP addresses or boot images. Usage:

    • Add PXE_DHCP_CLASS_ID=<string_identifier_up_to_9_characters> to bf.cfg, then push bf.cfg together with the BFB; or

    • Add the line PXE_DHCP_CLASS_ID=<string_identifier_up_to_9_characters> to /etc/bf.cfg inside Arm Linux and run the command bfcfg
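    The first method above can be sketched as follows. The identifier "BFDEV01" and the BFB image name are illustrative assumptions, not values from these release notes; only the PXE_DHCP_CLASS_ID key and the 9-character limit come from the text.

```shell
# Append the class identifier to the bf.cfg pushed alongside the BFB
# (the value must be at most 9 characters long)
echo 'PXE_DHCP_CLASS_ID=BFDEV01' >> bf.cfg

# Then push the configuration together with the BFB image, e.g.:
#   bfb-install --bfb <image>.bfb --config bf.cfg --rshim rshim0
```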

  • Added VPI support so ports on the same DPU may run traffic in different protocols (Ethernet or InfiniBand)

  • Added ability to hide network DPU ports from host

  • SFs are no longer supported using mdev (mediated device) interface. Instead, they are managed using the new "mlxdevm" tool located in /opt/Mellanox/iproute2/sbin/mlxdevm.
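    As a sketch of the new workflow, an SF might be created and activated as below. The PCIe address, SF number, MAC address, and port index are illustrative assumptions; the actual port index is printed by the add command and differs per system.

```shell
MLXDEVM=/opt/Mellanox/iproute2/sbin/mlxdevm

# Create an SF on physical function 0 with user-defined SF number 4
# (prints the new SF port, including its port index)
$MLXDEVM port add pci/0000:03:00.0 flavour pcisf pfnum 0 sfnum 4

# Assign a MAC address and activate the SF, using the port index
# reported by the command above (229408 is a placeholder)
$MLXDEVM port function set pci/0000:03:00.0/229408 \
    hw_addr 02:00:00:00:04:00 state active
```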

  • SF representors were previously named pf0sf0, pf1sf0, etc. To better identify SF representors based on the user-defined SF number and to associate them with their parent PCIe device, they are now named en3f0pf0sf0, en3f1pf1sf0, etc.

  • Added support for SF QoS and QoS group via mlxdevm's rate commands. Run man mlxdevm port for details.
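    A rough sketch of creating a QoS group and setting its limits follows; the device address, group name, and rate values are illustrative assumptions, and man mlxdevm port remains the authoritative reference for the exact syntax and supported units.

```shell
MLXDEVM=/opt/Mellanox/iproute2/sbin/mlxdevm

# Create a QoS group on the device
$MLXDEVM port function rate add pci/0000:03:00.0/group1

# Give the group a guaranteed share and a maximum transmit rate
$MLXDEVM port function rate set pci/0000:03:00.0/group1 \
    tx_share 500mbit tx_max 2gbit
```

    SFs can then be attached to the group; see man mlxdevm port for the attachment syntax.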

  • Added ability to disable host networking physical functions

  • Added GA-level support for VirtIO-net Emulated Devices

  • Shared RQ mode is now enabled by default

  • To reduce boot time, systemd-networkd-wait-online.service and networking.service have been configured with a 5-second timeout:

    # cat /etc/systemd/system/systemd-networkd-wait-online.service.d/override.conf
    [Service]
    ExecStart=
    ExecStart=/usr/lib/systemd/systemd-networkd-wait-online --timeout=5

    # cat /etc/systemd/system/networking.service.d/override.conf
    [Service]
    TimeoutStartSec=
    TimeoutStartSec=5
    ExecStop=
    ExecStop=/sbin/ifdown -a --read-environment --exclude=lo --force --ignore-errors

© Copyright 2023, NVIDIA. Last updated on Sep 5, 2023.