DOCA Infrastructure
TBD
RShim Troubleshooting and How-Tos
Another backend already attached
Several generations of BlueField DPUs are equipped with a USB interface through which RShim can be routed, via USB cable, to an external host running Linux and the RShim driver.
In this case, typically following a system reboot, the RShim over USB prevails and the DPU host reports RShim status as "another backend already attached". This is correct behavior, since there can only be one RShim backend active at any given time. However, this means that the DPU host does not own RShim access.
To reclaim RShim ownership safely:
Stop the RShim driver on the remote Linux host. Run:
systemctl stop rshim
systemctl disable rshim
Restart RShim on the DPU host. Run:
systemctl enable rshim
systemctl start rshim
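Once RShim is running on the DPU host again, you can confirm which backend owns the device using the same check shown later on this page; a PCIe device name (rather than a USB one) indicates that the host reclaimed ownership. The RShim index <N> depends on your system, typically 0:
cat /dev/rshim<N>/misc | grep DEV_NAME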
The "another backend already attached
" scenario can also be attributed to the RShim backend being owned by the BMC in DPUs with integrated BMC. This is elaborated on further down on this page.
RShim driver not loading
Verify whether your DPU has an integrated BMC. Run:
# sudo lspci -s $(sudo lspci -d 15b3: | head -1 | awk '{print $1}') -vvv | grep "Product Name"
Example output for DPU with integrated BMC:
Product Name: BlueField-2 DPU 25GbE Dual-Port SFP56, integrated BMC, Crypto and Secure Boot Enabled, 16GB on-board DDR, 1GbE OOB management, Tall Bracket, FHHL
If your DPU has an integrated BMC, refer to RShim driver not loading on DPU with integrated BMC.
If your DPU does not have an integrated BMC, refer to RShim driver not loading on host on DPU without integrated BMC.
RShim driver not loading on DPU with integrated BMC
RShim driver not loading on host
Access the BMC via the RJ45 management port of the DPU.
Stop and disable the RShim service on the BMC. Run:
systemctl stop rshim
systemctl disable rshim
Enable and start RShim on the host. Run:
systemctl enable rshim
systemctl start rshim
Restart RShim service. Run:
sudo systemctl restart rshim
If the RShim service does not launch automatically, check its status. Run:
sudo systemctl status rshim
This command is expected to display "active (running)".
Display the current setting. Run:
# cat /dev/rshim<N>/misc | grep DEV_NAME
DEV_NAME pcie-0000:04:00.2
This output indicates that the RShim service is ready to use.
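With the PCIe backend active, the RShim device also exposes a console node that can be used to reach the Arm cores; a minimal sketch, assuming the device index is 0 and that screen is installed (minicom works as well):
sudo screen /dev/rshim0/console 115200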
RShim driver not loading on BMC
Verify that the RShim service is not running on the host. Run:
systemctl status rshim
If the output is "active", then it may be presumed that the host has ownership of the RShim.
Stop and disable the RShim service on the host. Run:
systemctl stop rshim
systemctl disable rshim
Enable and start RShim on the BMC. Run:
systemctl enable rshim
systemctl start rshim
Display the current setting. Run:
# cat /dev/rshim<N>/misc | grep DEV_NAME
DEV_NAME usb-1.0
This output indicates that the RShim service is ready to use.
RShim driver not loading on host on DPU without integrated BMC
Download the suitable DEB/RPM package for the RShim driver (the management interface to the DPU from the host).
Reinstall RShim package on the host.
For Ubuntu/Debian, run:
sudo dpkg --force-all -i rshim-<version>.deb
For RHEL/CentOS, run:
sudo rpm -Uhv rshim-<version>.rpm
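Before or after reinstalling, it can be useful to confirm which RShim package version is actually installed; a quick check, depending on your package manager:
dpkg -l | grep rshim    # Ubuntu/Debian
rpm -qa | grep rshim    # RHEL/CentOS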
Restart RShim service. Run:
sudo systemctl restart rshim
If the RShim service does not launch automatically, check its status. Run:
sudo systemctl status rshim
This command is expected to display "active (running)".
Display the current setting. Run:
# cat /dev/rshim<N>/misc | grep DEV_NAME
DEV_NAME pcie-0000:04:00.2
This output indicates that the RShim service is ready to use.
Change ownership of RShim from NIC BMC to host
Verify that your card has a BMC. Run the following on the host:
# sudo lspci -s $(sudo lspci -d 15b3: | head -1 | awk '{print $1}') -vvv | grep "Product Name"
Product Name: BlueField-2 DPU 25GbE Dual-Port SFP56, integrated BMC, Crypto and Secure Boot Enabled, 16GB on-board DDR, 1GbE OOB management, Tall Bracket, FHHL
The product name is supposed to show "integrated BMC".
Access the BMC via the RJ45 management port of the DPU.
Stop and disable the RShim service on the BMC. Run:
systemctl stop rshim
systemctl disable rshim
Enable and start RShim on the host. Run:
systemctl enable rshim
systemctl start rshim
Restart RShim service. Run:
sudo systemctl restart rshim
If the RShim service does not launch automatically, check its status. Run:
sudo systemctl status rshim
This command is expected to display "active (running)".
Display the current setting. Run:
# cat /dev/rshim<N>/misc | grep DEV_NAME
DEV_NAME pcie-0000:04:00.2
This output indicates that the RShim service is ready to use.
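If you want to double-check that the host-side driver created its device nodes after the restart, listing the RShim directory is a quick sanity check; a sketch assuming index 0 (entries such as boot, console, and misc are expected):
ls /dev/rshim0/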
Connectivity Troubleshooting
Connection (ssh, screen console) to the DPU is lost
The UART cable in the Accessories Kit (OPN: MBF20-DKIT) can be used to connect to the DPU console and identify the stage at which BlueField is hanging.
Follow this procedure:
Connect the UART cable to a USB socket, and find it in your USB devices.
sudo lsusb
Bus 002 Device 003: ID 0403:6001 Future Technology Devices International, Ltd FT232 Serial (UART) IC
Note: For more information on the UART connectivity, please refer to the DPU's hardware user guide under Supported Interfaces > Interfaces Detailed Description > NC-SI Management Interface.
Info: It is good practice to connect the other end of the NC-SI cable to a different host than the one on which the BlueField DPU is installed.
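To find which serial device node the UART adapter was assigned (needed for the minicom setup below), either of the following usually works; /dev/ttyUSB0 is only the common default, not guaranteed:
ls -l /dev/ttyUSB*
dmesg | grep -i ttyUSB | tail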
Install the minicom application.
For CentOS/RHEL, run:
sudo yum install minicom -y
For Ubuntu/Debian, run:
sudo apt-get install minicom
Open the minicom application.
sudo minicom -s -c on
Go to "Serial port setup".
Enter "F" to change "Hardware Flow control" to NO.
Enter "A" and change to
/dev/ttyUSB0
and press Enter.Press ESC.
Type on "Save setup as dfl".
Exit minicom by pressing Ctrl + a + z.
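As an alternative to the interactive setup above, minicom can also be launched directly against the serial device; a minimal sketch, assuming /dev/ttyUSB0 and the usual 115200 baud console rate:
sudo minicom -D /dev/ttyUSB0 -b 115200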
Driver not loading in host server
What this looks like in dmesg:
[275604.216789] mlx5_core 0000:af:00.1: 63.008 Gb/s available PCIe bandwidth, limited by 8 GT/s x8 link at 0000:ae:00.0 (capable of 126.024 Gb/s with 16 GT/s x8 link)
[275624.187596] mlx5_core 0000:af:00.1: wait_fw_init:316:(pid 943): Waiting for FW initialization, timeout abort in 100s
[275644.152994] mlx5_core 0000:af:00.1: wait_fw_init:316:(pid 943): Waiting for FW initialization, timeout abort in 79s
[275664.118404] mlx5_core 0000:af:00.1: wait_fw_init:316:(pid 943): Waiting for FW initialization, timeout abort in 59s
[275684.083806] mlx5_core 0000:af:00.1: wait_fw_init:316:(pid 943): Waiting for FW initialization, timeout abort in 39s
[275704.049211] mlx5_core 0000:af:00.1: wait_fw_init:316:(pid 943): Waiting for FW initialization, timeout abort in 19s
[275723.954752] mlx5_core 0000:af:00.1: mlx5_function_setup:1237:(pid 943): Firmware over 120000 MS in pre-initializing state, aborting
[275723.968261] mlx5_core 0000:af:00.1: init_one:1813:(pid 943): mlx5_load_one failed with error code -16
[275723.978578] mlx5_core: probe of 0000:af:00.1 failed with error -16
The driver on the host server depends on the Arm side: if the driver on the Arm is up, then the driver on the host server will also come up.
Please verify that:
The driver is loaded in the BlueField DPU
The Arm is booted into OS
The Arm is not in UEFI Boot Menu
The Arm is not hung
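A quick way to check the first two items from the BlueField's own console (module names may differ slightly between DOCA/OS images):
lsmod | grep mlx5_core    # driver loaded on the Arm side
uptime                    # Arm booted into its OS and responsive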
Then:
Perform a graceful shutdown.
Power cycle the host server.
If the problem persists, reset nvconfig (sudo mlxconfig -d /dev/mst/<device> -y reset) and power cycle the host.
Note: If your DPU is VPI capable, please be aware that this configuration will reset the link type on the network ports to IB. To change the network port's link type to Ethernet, run:
sudo mlxconfig -d <device> s LINK_TYPE_P1=2 LINK_TYPE_P2=2
If this problem persists, please make sure to install the latest bfb image and then restart the driver on the host server. Please refer to this page for more information.
No connectivity between network interfaces of source host to destination device
Verify that the bridge is configured properly on the Arm side.
The following is an example for default configuration:
$ sudo ovs-vsctl show
f6740bfb-0312-4cd8-88c0-a9680430924f
    Bridge ovsbr1
        Port pf0sf0
            Interface pf0sf0
        Port p0
            Interface p0
        Port pf0hpf
            Interface pf0hpf
        Port ovsbr1
            Interface ovsbr1
                type: internal
    Bridge ovsbr2
        Port p1
            Interface p1
        Port pf1sf0
            Interface pf1sf0
        Port pf1hpf
            Interface pf1hpf
        Port ovsbr2
            Interface ovsbr2
                type: internal
    ovs_version: "2.14.1"
If no bridge configuration exists, refer to "Virtual Switch on DPU".
Uplink in Arm down while uplink in host server up
Please check that the cables are connected properly into the network ports of the DPU and the peer device.
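If the cabling looks fine, the physical link state can also be checked from the Arm side; a minimal sketch, assuming the uplink is p0 (use p1 for the second port):
ip link show p0
ethtool p0 | grep -i "link detected"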
Performance Degradation
Degradation in performance may indicate that Open vSwitch (OVS) hardware offload is not enabled.
Verify offload state. Run:
# ovs-vsctl get Open_vSwitch . other_config:hw-offload
If hw-offload = true – Fast Pass is configured (desired result)
If hw-offload = false – Slow Pass is configured
If hw-offload = false:
For RHEL/CentOS, run:
# ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
# systemctl restart openvswitch
# systemctl enable openvswitch
For Ubuntu/Debian, run:
# ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
# /etc/init.d/openvswitch-switch restart
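After enabling hw-offload and restarting OVS, you can optionally confirm that datapath flows are actually being offloaded; a sketch using the OVS datapath dump (available in recent OVS releases such as the 2.14 series shown above):
# ovs-appctl dpctl/dump-flows type=offloaded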
SR-IOV Troubleshooting
Unable to create VFs
Please make sure that SR-IOV is enabled in BIOS.
Verify that SRIOV_EN is true and NUM_OF_VFS is bigger than 1. Run:
# mlxconfig -d /dev/mst/mt41686_pciconf0 -e q | grep -i "SRIOV_EN\|num_of_vf"
Configurations:              Default     Current     Next Boot
*       NUM_OF_VFS           16          16          16
*       SRIOV_EN             True(1)     True(1)     True(1)
Verify that GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on pci=assign-busses".
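To confirm that these kernel parameters are in effect (and not only present in /etc/default/grub), check the running command line; note that the GRUB regeneration command differs between distributions, so the examples below are indicative:
cat /proc/cmdline
sudo update-grub                                # Ubuntu/Debian, after editing /etc/default/grub
sudo grub2-mkconfig -o /boot/grub2/grub.cfg     # RHEL/CentOS, after editing /etc/default/grub
A reboot is required for the new command line to take effect.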
No traffic between VF and external host
Please verify creation of representors for VFs inside the BlueField DPU. Run:
# /opt/mellanox/iproute2/sbin/rdma link | grep -i up
...
link mlx5_0/2 state ACTIVE physical_state LINK_UP netdev pf0vf0
...
Make sure the representors of the VFs are added to the bridge. Run:
# ovs-vsctl add-port <bridge_name> pf0vf0
Verify VF configuration. Run:
$ ovs-vsctl show
bb993992-7930-4dd2-bc14-73514854b024
    Bridge ovsbr1
        Port pf0vf0
            Interface pf0vf0
                type: internal
        Port pf0hpf
            Interface pf0hpf
        Port pf0sf0
            Interface pf0sf0
        Port p0
            Interface p0
    Bridge ovsbr2
        Port ovsbr2
            Interface ovsbr2
                type: internal
        Port pf1sf0
            Interface pf1sf0
        Port p1
            Interface p1
        Port pf1hpf
            Interface pf1hpf
    ovs_version: "2.14.1"
eSwitch Troubleshooting
Unable to configure legacy mode
To set devlink to "Legacy" mode in BlueField, run:
# devlink dev eswitch set pci/0000:03:00.0 mode legacy
# devlink dev eswitch set pci/0000:03:00.1 mode legacy
Please verify that:
No virtual functions are open. To verify if VFs are configured, run:
# /opt/mellanox/iproute2/sbin/rdma link | grep -i up
link mlx5_0/2 state ACTIVE physical_state LINK_UP netdev pf0vf0
link mlx5_1/2 state ACTIVE physical_state LINK_UP netdev pf1vf0
If any VFs are configured, destroy them by running:
# echo 0 > /sys/class/infiniband/mlx5_0/device/mlx5_num_vfs
# echo 0 > /sys/class/infiniband/mlx5_1/device/mlx5_num_vfs
If any SFs are configured, delete them by running:
/sbin/mlnx-sf -a delete --sfindex <SF-Index>
Note: You may retrieve the <SF-Index> of the currently installed SFs by running:
# mlnx-sf -a show
SF Index: pci/0000:03:00.0/229408
  Parent PCI dev: 0000:03:00.0
  Representor netdev: en3f0pf0sf0
  Function HWADDR: 02:61:f6:21:32:8c
  Auxiliary device: mlx5_core.sf.2
    netdev: enp3s0f0s0
    RDMA dev: mlx5_2

SF Index: pci/0000:03:00.1/294944
  Parent PCI dev: 0000:03:00.1
  Representor netdev: en3f1pf1sf0
  Function HWADDR: 02:30:13:6a:2d:2c
  Auxiliary device: mlx5_core.sf.3
    netdev: enp3s0f1s0
    RDMA dev: mlx5_3
Pay attention to the SF Index values. For example:
/sbin/mlnx-sf -a delete --sfindex pci/0000:03:00.0/229408
/sbin/mlnx-sf -a delete --sfindex pci/0000:03:00.1/294944
If the error "Error: mlx5_core: Can't change mode when flows are configured
" is encountered while trying to configure legacy mode, please make sure that
Any configured SFs are deleted (see above for commands).
Shut down the links of all interfaces, delete any ip xfrm rules, delete any configured OVS flows, and stop openvswitch service. Run:
ip link set dev p0 down
ip link set dev p1 down
ip link set dev pf0hpf down
ip link set dev pf1hpf down
ip link set dev vxlan_sys_4789 down
ip x s f ; ip x p f ;
tc filter del dev p0 ingress
tc filter del dev p1 ingress
tc qdisc show dev p0
tc qdisc show dev p1
tc qdisc del dev p0 ingress
tc qdisc del dev p1 ingress
tc qdisc show dev p0
tc qdisc show dev p1
systemctl stop openvswitch-switch
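Once the cleanup above completes, retrying the mode change and confirming the result with devlink is a reasonable final check (same commands used elsewhere on this page):
# devlink dev eswitch set pci/0000:03:00.0 mode legacy
# devlink dev eswitch show pci/0000:03:00.0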
DPU appears as two interfaces
What this looks like:
# sudo /opt/mellanox/iproute2/sbin/rdma link
link mlx5_0/1 state ACTIVE physical_state LINK_UP netdev p0
link mlx5_1/1 state ACTIVE physical_state LINK_UP netdev p1
Check if you are working in legacy mode.
# devlink dev eswitch show pci/0000:03:00.<0|1>
If the following line is printed, this means that you are working in legacy mode:
pci/0000:03:00.<0|1>: mode legacy inline-mode none encap enable
Please configure the DPU to work in switchdev mode. Run:
devlink dev eswitch set pci/0000:03:00.<0|1> mode switchdev
Check if you are working in separated mode:
# mlxconfig -d /dev/mst/mt41686_pciconf0 q | grep -i cpu
*       INTERNAL_CPU_MODEL          SEPERATED_HOST(0)
Please configure the DPU to work in embedded mode. Run:
# mlxconfig -d /dev/mst/mt41686_pciconf0 s INTERNAL_CPU_MODEL=1
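mlxconfig changes typically take effect only after the DPU host is power cycled; after that, rerunning the query from the previous step should report INTERNAL_CPU_MODEL as embedded (value 1) rather than SEPERATED_HOST(0). A sketch of the verification:
# mlxconfig -d /dev/mst/mt41686_pciconf0 q | grep -i cpu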