NVIDIA BlueField-2 SNAP for NVMe and Virtio-blk v3.8.0

SNAP Installation

DPU Image Installation

The BlueField OS image (BFB) includes all packages required for mlnx_snap to operate: MLNX_OFED, RDMA-CORE libraries, the supported SPDK version, and the libsnap and mlnx-snap headers, libraries, and binaries.

To see which operating systems are supported, refer to the BlueField Software Documentation under Release Notes → Supported Platforms and Interoperability → Supported Linux Distributions.

RShim must be installed on the host to connect to the NVIDIA® BlueField® DPU. To install RShim, please follow the instructions described in the BlueField Software Documentation → BlueField DPU SW Manual → DPU Operation → DPU Bring-up and Driver Installation → Installing Linux on DPU → Step 1: Set up the RShim Interface.
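For example, on a RHEL/CentOS-based host, the RShim driver can be installed from its package and verified as follows. This is a minimal sketch, assuming the rshim package is available in the configured repositories and that the first RShim device enumerates as rshim0; follow the BlueField Software Documentation for the exact procedure:

# Install and start the RShim service (package source is an assumption)
yum install -y rshim
systemctl enable --now rshim

# Verify that the RShim device is exposed on the host
ls /dev/rshim0/
# Expected entries: boot  console  misc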

Use the RShim interface from the x86 host machine to install the desired image:


BFB=/<path>/latest-bluefield-image.bfb
cat $BFB > /dev/rshim0/boot

Optionally, it is possible to connect to the remote console of the DPU and watch the progress of the installation. For example, using the screen tool:


screen /dev/rshim0/console
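Installation progress can also be monitored without a console session by reading the RShim misc interface. The following is a hedged sketch, assuming the default display level must first be raised for progress messages to appear:

# Raise the RShim display level, then poll the misc interface for installation progress messages
echo "DISPLAY_LEVEL 1" > /dev/rshim0/misc
cat /dev/rshim0/misc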


Post-installation Configuration

Firmware Configuration

Refer to Firmware Configuration to confirm that your FW configuration matches your SNAP application requirements (SR-IOV support, MSI-X resources, etc.).
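For example, NVMe and virtio-blk emulation are typically enabled through mlxconfig. The sketch below is illustrative only and assumes a BlueField-2 MST device named /dev/mst/mt41686_pciconf0; confirm the exact parameter names and values against the Firmware Configuration page, and note that a firmware reset or power cycle is required for changes to take effect:

# Start the MST service and enable storage emulation in firmware (device name is an assumption)
mst start
mlxconfig -d /dev/mst/mt41686_pciconf0 s NVME_EMULATION_ENABLE=1 VIRTIO_BLK_EMULATION_ENABLE=1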

Network Configuration

Before enabling or configuring mlnx_snap, users must first verify that the uplink ports are configured correctly and that network connectivity toward the remote target works properly.

By default, two SF interfaces are opened, one over each PF, as configured in /etc/mellanox/mlnx-sf.conf. These interfaces correspond to RDMA devices mlx5_2 and mlx5_3, respectively, and only these interfaces support RoCE/RDMA transport for the remote storage.
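For example, the mapping between the RDMA devices and their network interfaces can be verified, an address assigned, and connectivity toward the remote target checked. This is a minimal sketch; the SF netdev name and IP addresses are placeholders:

# Show which netdevs back the mlx5_2/mlx5_3 RDMA devices
ibdev2netdev
# Assign an address to the relevant SF netdev and verify the remote target is reachable
ip addr add 192.168.0.2/24 dev <sf_netdev>
ping 192.168.0.1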

If working with an InfiniBand link, an active InfiniBand port must be made available to allow for InfiniBand support. Once an active IB port is available, users must configure that port's RDMA device in the JSON configuration file (see rdma_device under "Configuration File Examples") for mlnx_snap to work on that port.
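For example, ibstat can be used to confirm that the port is up and running the expected link layer before updating the configuration file (illustrative only; the RDMA device name is a placeholder):

# Check port state and link layer of the candidate RDMA device
ibstat mlx5_2
# Expect "State: Active" and "Link layer: InfiniBand" in the output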

If working with bonding, it is transparent to the MLNX SNAP configuration, and no specific configuration is necessary at the NVMe/virtio-blk SNAP level.

Out-of-box Configuration

NVMe/virtio-blk SNAP is disabled by default. Once enabled (see section "Firmware Configuration"), the out-of-box configuration of NVMe/virtio-blk SNAP includes a single NVMe controller backed by a 64MB RAM-based SPDK block device (i.e., a RAM drive) in non-offload mode. The out-of-box configuration does not include virtio-blk devices.

A sample configuration file for the out-of-box NVMe controller is located in /etc/mlnx_snap/mlnx_snap.json. For additional information about its values, please see section "Non-offload Mode".

The default initialization command set is described in /etc/mlnx_snap/spdk_rpc_init.conf and /etc/mlnx_snap/snap_rpc_init.conf, as follows:

  • spdk_rpc_init.conf


    bdev_malloc_create 64 512

  • snap_rpc_init.conf


    subsystem_nvme_create Mellanox_NVMe_SNAP "Mellanox NVMe SNAP Controller"
    controller_nvme_create mlx5_0 --subsys_id 0 --pf_id 0
    controller_nvme_namespace_attach -c NvmeEmu0pf0 spdk Malloc0 1
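The same command set can also be issued at runtime against a running mlnx_snap service using the spdk_rpc.py and snap_rpc.py utilities shipped in the BFB image. The following sketch simply replays the out-of-box commands shown above:

# SPDK RPC: create the 64MB RAM-based block device
spdk_rpc.py bdev_malloc_create 64 512
# SNAP RPCs: create the subsystem, the NVMe controller, and attach the namespace
snap_rpc.py subsystem_nvme_create Mellanox_NVMe_SNAP "Mellanox NVMe SNAP Controller"
snap_rpc.py controller_nvme_create mlx5_0 --subsys_id 0 --pf_id 0
snap_rpc.py controller_nvme_namespace_attach -c NvmeEmu0pf0 spdk Malloc0 1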

Note

The BlueField out-of-box configuration is slightly different from that of BlueField-2. For a clean out-of-box experience, /etc/mlnx_snap/snap_rpc_init.conf is a symbolic link pointing to the relevant HW-oriented configuration.

Note

To make any other command set persistent, users may update and modify /etc/mlnx_snap/spdk_rpc_init.conf and /etc/mlnx_snap/snap_rpc_init.conf according to their needs. Refer to SNAP Commands for more information.
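For example, a second RAM-based namespace can be made persistent by appending to both files and restarting the service. This is a sketch that reuses the commands shown above, assuming SPDK's default naming of the second malloc bdev (Malloc1) and the out-of-box controller name NvmeEmu0pf0:

# Add a second 64MB malloc bdev and attach it as namespace 2
echo "bdev_malloc_create 64 512" >> /etc/mlnx_snap/spdk_rpc_init.conf
echo "controller_nvme_namespace_attach -c NvmeEmu0pf0 spdk Malloc1 2" >> /etc/mlnx_snap/snap_rpc_init.conf
systemctl restart mlnx_snap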


To enable, start, stop, or check the status of the mlnx_snap service, run:


systemctl {enable | start | stop | status} mlnx_snap

mlnx_snap application output is captured by SystemD and stored in its journal. Users can retrieve the service console output using the following SystemD commands:

  • systemctl status mlnx_snap

  • journalctl -u mlnx_snap
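For example, the service log can be followed live or narrowed to a recent time window using standard journalctl options:

# Follow the mlnx_snap log in real time
journalctl -u mlnx_snap -f
# Show only entries from the last hour
journalctl -u mlnx_snap --since "1 hour ago"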

SystemD keeps logs in a binary format under the /var/run/log/journal/ directory, which is stored on tmpfs (i.e., it is not persistent).

SystemD also forwards log messages to the rsyslog service. The rsyslog configuration is the CentOS/RHEL default, so users can find these messages in the /var/log/messages file.

Note that the rsyslog daemon can be configured to send messages to a remote (centralized) syslog server if desired.
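For example, forwarding all messages to a central syslog server over TCP uses standard rsyslog forwarding syntax. The drop-in file name and server address below are placeholders:

# /etc/rsyslog.d/remote.conf
*.* @@syslog-server.example.com:514

After adding the rule, restart rsyslog for it to take effect:

systemctl restart rsyslog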

© Copyright 2024, NVIDIA. Last updated on May 21, 2024.