Building Your Own BFB Installation Image
Users wishing to build their own customized NVIDIA® BlueField® OS image can use the BFB build environment. See this GitHub webpage for more information.
For any customized BlueField OS image to boot on the UEFI secure-boot-enabled DPU (default DPU secure boot setting), the OS must be either signed with an existing key in the UEFI DB (e.g., the Microsoft key), or UEFI secure boot must be disabled. See "Secure Boot" and its subpages for more details.
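To verify the secure-boot state from a running Linux on the DPU before deploying a custom image, standard tools such as `mokutil` and `sbverify` can be used. This is a sketch; `sbverify` comes from the `sbsigntools` package, and the shim path below assumes an aarch64 RedHat-style layout:

```shell
# Check whether UEFI secure boot is currently enforced (run on the DPU's OS).
# Prints "SecureBoot enabled" or "SecureBoot disabled".
mokutil --sb-state

# List the signatures on the boot shim to confirm it is signed with a key
# present in the UEFI DB (e.g., the Microsoft key). Path is an assumption
# based on a typical aarch64 RedHat install.
sbverify --list /boot/efi/EFI/redhat/shimaa64.efi
```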
Running RedHat on BlueField
In general, running RedHat Enterprise Linux or CentOS on BlueField is similar to setting it up on any other ARM64 server.
A driver disk is required to support the eMMC device onto which the OS is typically installed. The driver disk also provides the tmfifo networking interface, which allows creating a network interface over the USB or PCIe connection to an external host. For newer RedHat releases, or if these specific storage or networking drivers are not needed, you can skip the driver disk.
Boot-flow components on BlueField are managed through the grub boot manager. The installation should create a /boot/efi VFAT partition that holds the binaries visible to UEFI at bootup. The standard grub tools then manage the contents of that partition, as well as the UEFI EEPROM persistent variables, to control the boot.
It is also possible to use the BlueField runtime distribution tools to configure UEFI directly to load the kernel and initramfs from the UEFI VFAT boot partition, but using grub is typically preferred. With the direct approach, you must explicitly copy the kernel image to the VFAT partition whenever it is upgraded so that UEFI can access it; normally the kernel is kept on an XFS partition.
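The grub-managed flow above can be sketched with standard RedHat tooling. On recent releases the effective configuration lives in /boot/grub2/grub.cfg (with only a stub on the EFI partition); older releases wrote directly to /boot/efi/EFI/redhat/grub.cfg:

```shell
# Regenerate the grub configuration after kernel or boot-parameter changes.
grub2-mkconfig -o /boot/grub2/grub.cfg

# Inspect the UEFI boot entries stored in the persistent EEPROM variables.
efibootmgr -v

# Only if UEFI is configured to load the kernel directly (no grub): the
# kernel image must be copied into the VFAT EFI partition after each
# upgrade so UEFI can see it.
cp /boot/vmlinuz-$(uname -r) /boot/efi/
```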
Provisioning ConnectX Firmware
Prior to installing RedHat, you should ensure that the ConnectX SPI ROM firmware has been provisioned. If the BlueField is connected to an external host via PCIe, and is not running in Secure Boot mode, this is typically done by using MFT on the external host to provision the BlueField. If the BlueField is connected via USB or is configured in Secure Boot mode, you must provision the SPI ROM by booting a dedicated bootstream that allows the SPI ROM to be configured by the MFT running on the BlueField Arm cores.
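When provisioning from an external host with MFT, the flow is roughly as follows. The device path and firmware file name below are examples only; the actual names depend on your BlueField model and firmware release:

```shell
# Load the MFT kernel modules and enumerate devices.
mst start
mst status                # lists detected devices, e.g. /dev/mst/<device>_pciconf0

# Query the current firmware on the device (device path is an example).
flint -d /dev/mst/mt41686_pciconf0 query

# Burn the firmware image (file name is an example).
flint -d /dev/mst/mt41686_pciconf0 burn fw-BlueField.bin
```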
There are multiple ways to access the RedHat installation media from a BlueField device for installation.
- You may use the primary ConnectX interfaces on the BlueField to reach the media over the network.
- You may configure a USB or PCIe connection to the BlueField as a network bridge to reach the media over the network. This requires installing and running the RShim drivers on the host side of the USB or PCIe connection.
- You may connect other network or storage devices to the BlueField via PCIe and use them to connect to or host the RedHat install media. This method has not been tested.
In principle, it is possible to perform the installation according to the second method above without first provisioning the ConnectX SPI ROM, but since you need to do that provisioning anyway, it is recommended to perform it first. In particular, the PCIe network interface available via the external host’s RShim driver is likely too slow prior to provisioning to be usable for a distribution installation.
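Setting up the second method on the external host can be sketched as follows. After the RShim drivers are installed, a tmfifo_net0 interface appears on the host; the 192.168.100.0/30 addressing is the conventional default, and eth0 as the uplink name is an assumption:

```shell
# Bring up the host side of the tmfifo link (addresses are the usual defaults).
ip addr add 192.168.100.1/30 dev tmfifo_net0
ip link set dev tmfifo_net0 up

# Optionally NAT the DPU's traffic out through the host uplink so the
# installer can reach network-hosted media (uplink name is an example).
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```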
Managing Driver Disk
NVIDIA provides a number of pre-built driver disks, as well as a documented flow for building one for any particular RedHat version.
Normally a driver disk can be placed on removable media (like a CDROM or USB stick) and is auto-detected by the RedHat installer. However, since BlueField has no removable media slots, you must provide it over the network. Yet if you are installing over the network connection via the PCIe/USB link to an external host, you will not have a network connection at that stage either. As a result, the documented procedure modifies the default RedHat images/pxeboot/initrd.img file to include the driver disk itself.
To create the updated initrd.img, locate the images/pxeboot directory in the RedHat installation media. It contains a kernel image file (vmlinuz) and initrd.img (the initial RAM disk). The bluefield_dd/update-initrd.sh script takes the path to the initrd.img as an argument and adds the appropriate BlueField driver disk ISO file to the initrd.
When booting the installation media, make sure to include inst.dd=/bluefield_dd.iso on the kernel command line. This instructs Anaconda to use that driver disk, enabling the use of the IP-over-USB/PCIe link (tmfifo) and the DesignWare eMMC driver.
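The initrd update described above can be sketched as follows; the media mount point is an assumption, and the update-initrd.sh invocation follows the argument convention stated in the text:

```shell
# Work on a writable copy of the installer initrd (mount point is an example).
cp /mnt/rhel-media/images/pxeboot/initrd.img .

# Embed the BlueField driver disk ISO into the initrd.
./bluefield_dd/update-initrd.sh ./initrd.img

# Then boot the media's vmlinuz with the updated initrd.img and append
#   inst.dd=/bluefield_dd.iso
# to the kernel command line so Anaconda picks up the driver disk.
```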
Installing Official CentOS Distributions
Contact NVIDIA Enterprise Support for information on the installation of CentOS distributions.
BlueField Linux Drivers
The following BlueField drivers are part of the Linux distribution:

- BlueField-specific EDAC driver
- I2C bus driver
- Driver needed to receive IPMB messages from a BMC and send a response back; it works with the I2C driver and a user-space program such as OpenIPMI
- Driver needed on the DPU to send IPMB messages to the BMC on the IPMB bus; it works with the I2C driver and only loads successfully after a successful handshake with the BMC
- Gigabit Ethernet driver
- BlueField HCA firmware burning driver, which supports burning firmware for the embedded HCA in the BlueField SoC
- BlueField PKA kernel module
- Performance monitoring counters driver, providing access to the available performance modules
- Kernel driver providing a debugfs interface for system software to monitor the BlueField device's power and thermal management parameters
- TMFIFO driver for the BlueField SoC
- Boot control driver
- Device driver for CPLD
- TRIO driver for the BlueField SoC
- Driver supporting reset and low-power mode handling for BlueField
- Driver allowing individual GPIOs to be multiplexed from the default hardware mode to software-controlled mode