DOCA Documentation v3.1.0

SNAP-4 Service Appendixes

Before configuring SNAP, the user must ensure that all firmware configuration requirements are met. By default, SNAP is disabled and must be enabled by applying both the common SNAP configuration and any additional protocol-specific configuration, depending on the expected usage of the application (e.g., hot-plug, SR-IOV, UEFI boot, etc.).

After configuration is finished, the host must be power cycled for the changes to take effect.
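
For example, a minimal sketch of a common configuration might look like the following (the values here are illustrative; choose them according to the parameter tables below):

[dpu] mlxconfig -d /dev/mst/mt41692_pciconf0 s INTERNAL_CPU_MODEL=1 NVME_EMULATION_ENABLE=1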

Note

To verify that all configuration requirements are satisfied, users may query the current/next configuration by running the following:

mlxconfig -d /dev/mst/mt41692_pciconf0 -e query

System Configuration Parameters

INTERNAL_CPU_MODEL
  Description: Enable BlueField to work in internal CPU model
  Possible values: 0/1
  Note: Must be set to 1 for storage emulations.

SRIOV_EN
  Description: Enable SR-IOV
  Possible values: 0/1

PCI_SWITCH_EMULATION_ENABLE
  Description: Enable PCIe switch for emulated PFs
  Possible values: 0/1

PCI_SWITCH_EMULATION_NUM_PORT
  Description: Number of PCIe switch ports. The maximum number of hotplug emulated PFs equals PCI_SWITCH_EMULATION_NUM_PORT–1. For example, if PCI_SWITCH_EMULATION_NUM_PORT=32, then the maximum number of hotplug emulated PFs is 31.
  Possible values: [0,2-32]
  Note: One switch port is reserved for all static PFs.

Note

SRIOV_EN is valid only for static PFs.
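
For example, to allow up to 31 hotplug emulated PFs, a sketch based on the parameters above:

[dpu] mlxconfig -d /dev/mst/mt41692_pciconf0 s PCI_SWITCH_EMULATION_ENABLE=1 PCI_SWITCH_EMULATION_NUM_PORT=32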


RDMA/RoCE Configuration

By default, BlueField's RDMA/RoCE communication is blocked for its primary OS interfaces (known as ECPFs, typically mlx5_0 and mlx5_1).

If RoCE traffic is required, you must create additional network functions (scalable functions) that support RDMA/RoCE.

Note

This configuration is not required when working over TCP or RDMA/IB.

To enable RoCE interfaces, run the following from within the DPU:

[dpu] mlxconfig -d /dev/mst/mt41692_pciconf0 s PER_PF_NUM_SF=1
[dpu] mlxconfig -d /dev/mst/mt41692_pciconf0 s PF_SF_BAR_SIZE=8 PF_TOTAL_SF=2
[dpu] mlxconfig -d /dev/mst/mt41692_pciconf0.1 s PF_SF_BAR_SIZE=8 PF_TOTAL_SF=2
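
After the configuration takes effect, the resulting RDMA-capable functions can be checked from the DPU, for example by listing the RDMA devices (a sketch; ibv_devices is part of libibverbs-utils, and device names vary by setup):

[dpu] ibv_devices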


NVMe Configuration

NVME_EMULATION_ENABLE
  Description: Enable NVMe device emulation
  Possible values: 0/1

NVME_EMULATION_NUM_PF
  Description: Number of static emulated NVMe PFs
  Possible values: [0-2]

NVME_EMULATION_NUM_MSIX
  Description: Number of MSI-X vectors assigned to emulated NVMe PFs
  Possible values: [0-63]
  Note: The firmware treats this value as best effort. The effective number of MSI-X vectors given to the function should be queried as part of the nvme_controller_list RPC command.

NVME_EMULATION_NUM_VF_MSIX
  Description: Number of MSI-X vectors per emulated NVMe VF
  Possible values: [0-4095]
  Note: The firmware treats this value as best effort. The effective number of MSI-X vectors given to the function should be queried as part of the nvme_controller_list RPC command.
  Note: This value should match the maximum number of queues assigned to a VF's NVMe SNAP controller through the nvme_controller_create num_queues parameter, as each queue requires one MSI-X interrupt.

NVME_EMULATION_NUM_VF
  Description: Number of VFs per emulated NVMe PF
  Possible values: [0-256]
  Note: If not 0, overrides NUM_OF_VFS; valid only when SRIOV_EN=1.

EXP_ROM_NVME_UEFI_x86_ENABLE
  Description: Enable the NVMe UEFI expansion ROM (exprom) driver, used for the UEFI boot process
  Possible values: 0/1

NVME_EMULATION_MAX_QUEUE_DEPTH
  Description: Defines the default maximum queue depth for NVMe I/O queues. The value should be set to the binary logarithm of the desired maximum queue size:
    • A value of 0 (default) limits the queue size to 64.
    • The recommended value is 12, which allows a queue size of 4096 (2^12).
  Possible values: [0-12]
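
For example, a sketch enabling NVMe emulation with one static PF (illustrative values; pick them from the table above):

[dpu] mlxconfig -d /dev/mst/mt41692_pciconf0 s NVME_EMULATION_ENABLE=1 NVME_EMULATION_NUM_PF=1 NVME_EMULATION_NUM_MSIX=63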


Virtio-blk Configuration

Warning

Due to virtio-blk protocol limitations, an incorrect configuration while working with static virtio-blk PFs may cause the host server OS to fail to boot.

Before continuing, make sure you have configured:

  • A working channel to access the Arm cores even when the host is shut down. Setting up such a channel is outside the scope of this document. Refer to the NVIDIA BlueField DPU BSP documentation for more details.

  • Add the following line to /etc/nvda_snap/snap_rpc_init.conf:

    virtio_blk_controller_create --pf_id 0

    For more information, please refer to section "Virtio-blk Emulation Management".

VIRTIO_BLK_EMULATION_ENABLE
  Description: Enable virtio-blk device emulation
  Possible values: 0/1

VIRTIO_BLK_EMULATION_NUM_PF
  Description: Number of static emulated virtio-blk PFs
  Possible values: [0-4]
  Note: See the warning above.

VIRTIO_BLK_EMULATION_NUM_MSIX
  Description: Number of MSI-X vectors assigned to emulated virtio-blk PFs
  Possible values: [0-63]
  Note: The firmware treats this value as best effort. The effective number of MSI-X vectors given to the function should be queried as part of the virtio_blk_controller_list RPC command.

VIRTIO_BLK_EMULATION_NUM_VF_MSIX
  Description: Number of MSI-X vectors per emulated virtio-blk VF
  Possible values: [0-4095]
  Note: The firmware treats this value as best effort. The effective number of MSI-X vectors given to the function should be queried as part of the virtio_blk_controller_list RPC command.
  Note: This value should match the maximum number of queues assigned to a VF's virtio-blk SNAP controller through the virtio_blk_controller_create num_queues parameter, as each queue requires one MSI-X interrupt.

VIRTIO_BLK_EMULATION_NUM_VF
  Description: Number of VFs per emulated virtio-blk PF
  Possible values: [0-2000]
  Note: If not 0, overrides NUM_OF_VFS; valid only when SRIOV_EN=1.

EXP_ROM_VIRTIO_BLK_UEFI_x86_ENABLE
  Description: Enable the virtio-blk UEFI expansion ROM (exprom) driver, used for the UEFI boot process
  Possible values: 0/1
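
As with NVMe, a sketch enabling virtio-blk emulation (illustrative values; note the warning above regarding static virtio-blk PFs):

[dpu] mlxconfig -d /dev/mst/mt41692_pciconf0 s VIRTIO_BLK_EMULATION_ENABLE=1 VIRTIO_BLK_EMULATION_NUM_PF=1 VIRTIO_BLK_EMULATION_NUM_MSIX=63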


To configure persistent network interfaces so they are not lost after reboot, modify the following four files under /etc/sysconfig/network-scripts (or create them if they do not exist), then reboot:

# cd /etc/sysconfig/network-scripts/
# cat ifcfg-p0
NAME="p0"
DEVICE="p0"
NM_CONTROLLED="no"
DEVTIMEOUT=30
PEERDNS="no"
ONBOOT="yes"
BOOTPROTO="none"
TYPE=Ethernet
MTU=9000

# cat ifcfg-p1
NAME="p1"
DEVICE="p1"
NM_CONTROLLED="no"
DEVTIMEOUT=30
PEERDNS="no"
ONBOOT="yes"
BOOTPROTO="none"
TYPE=Ethernet
MTU=9000

# cat ifcfg-enp3s0f0s0
NAME="enp3s0f0s0"
DEVICE="enp3s0f0s0"
NM_CONTROLLED="no"
DEVTIMEOUT=30
PEERDNS="no"
ONBOOT="yes"
BOOTPROTO="static"
TYPE=Ethernet
IPADDR=1.1.1.1
PREFIX=24
MTU=9000

# cat ifcfg-enp3s0f1s0
NAME="enp3s0f1s0"
DEVICE="enp3s0f1s0"
NM_CONTROLLED="no"
DEVTIMEOUT=30
PEERDNS="no"
ONBOOT="yes"
BOOTPROTO="static"
TYPE=Ethernet
IPADDR=1.1.1.2
PREFIX=24
MTU=9000
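
After the reboot, the interfaces and their addresses can be verified, for example:

# ip -br addr show dev enp3s0f0s0
# ip -br addr show dev enp3s0f1s0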

The SNAP source package contains the files necessary for building a container with a custom SPDK.

To build the container:

  1. Download and install the SNAP sources package:

    [dpu] # dpkg -i /path/snap-sources_<version>_arm64.deb

  2. Navigate to the src folder and use it as the development environment:

    [dpu] # cd /opt/nvidia/nvda_snap/src

  3. Copy the following to the container folder:

    • SNAP source package – required for installing SNAP inside the container

    • Custom SPDK – to container/spdk. For example:

      [dpu] # cp /path/snap-sources_<version>_arm64.deb container/
      [dpu] # git clone -b v23.01.1 --single-branch --depth 1 --recursive --shallow-submodules https://github.com/spdk/spdk.git container/spdk

  4. Modify the spdk.sh file if necessary, as it is used to compile SPDK.

  5. To build the container:

    • For Ubuntu, run:

      [dpu] # ./container/build_public.sh --snap-pkg-file=snap-sources_<version>_arm64.deb

    • For CentOS, run:

      [dpu] # rpm -i snap-sources-<version>.el8.aarch64.rpm
      [dpu] # cd /opt/nvidia/nvda_snap/src/
      [dpu] # cp /path/snap-sources_<version>_arm64.deb container/
      [dpu] # git clone -b v23.01.1 --single-branch --depth 1 --recursive --shallow-submodules https://github.com/spdk/spdk.git container/spdk
      [dpu] # yum install docker-ce docker-ce-cli
      [dpu] # ./container/build_public.sh --snap-pkg-file=snap-sources_<version>_arm64.deb

  6. Transfer the created image from Docker to the containerd image store used by crictl. Run:

    [dpu] # docker save doca_snap:<version> -o doca_snap.tar
    [dpu] # ctr -n=k8s.io images import doca_snap.tar

    Note

    To transfer the container image to other setups, refer to appendix "Deploying Container on Setups Without Internet Connectivity".

  7. To verify the image, run:

    [dpu] # crictl images
    IMAGE                          TAG          IMAGE ID        SIZE
    docker.io/library/doca_snap    <version>    79c503f0a2bd7   284MB

  8. Edit the image field in the container/doca_snap.yaml file:

    image: doca_snap:<version>

  9. Use the YAML file to deploy the container. Run:

    [dpu] # cp doca_snap.yaml /etc/kubelet.d/

    Note

    The container deployment preparation steps are required.
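
Once the YAML file is in place, kubelet deploys the container automatically. One way to verify the deployment (a sketch; pod and container names may differ between versions):

[dpu] # crictl pods --name snap
[dpu] # crictl ps --name snap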

Deploying Container on Setups Without Internet Connectivity

When Internet connectivity is not available on a DPU, kubelet scans for the container image locally upon detecting the SNAP YAML. Users can load the container image manually before deployment.

To accomplish this, users must download the necessary resources using a DPU with Internet connectivity and subsequently transfer and load them onto DPUs that lack Internet connectivity.

  1. To download the .yaml file:

    [bf] # wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/doca/doca_container_configs/versions/<path-to-yaml>/doca_snap.yaml

    Note

    Access the latest download command on NGC. The doca_snap:4.1.0-doca2.0.2 tag is used in this section as an example, and the latest tag is also available on NGC.

  2. To download SNAP container image:

    [bf] # crictl pull nvcr.io/nvidia/doca/doca_snap:4.1.0-doca2.0.2

  3. To verify that the SNAP container image exists:

    [bf] # crictl images
    IMAGE                            TAG                IMAGE ID        SIZE
    nvcr.io/nvidia/doca/doca_snap    4.1.0-doca2.0.2    9d941b5994057   267MB
    k8s.gcr.io/pause                 3.2                2a060e2e7101d   251kB

    Note

    k8s.gcr.io/pause image is required for the SNAP container.

  4. To save the images as a .tar file:

    [bf] # mkdir images
    [bf] # ctr -n=k8s.io image export images/snap_container_image.tar nvcr.io/nvidia/doca/doca_snap:4.1.0-doca2.0.2
    [bf] # ctr -n=k8s.io image export images/pause_image.tar k8s.gcr.io/pause:3.2

  5. Transfer the .tar files and run the following to load them into Kubelet:

    [bf] # sudo ctr --namespace k8s.io image import images/snap_container_image.tar
    [bf] # sudo ctr --namespace k8s.io image import images/pause_image.tar

  6. The images now exist in the containerd image store and are ready for deployment:

    [bf] # crictl images
    IMAGE                            TAG                IMAGE ID        SIZE
    nvcr.io/nvidia/doca/doca_snap    4.1.0-doca2.0.2    9d941b5994057   267MB
    k8s.gcr.io/pause                 3.2                2a060e2e7101d   251kB
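
With both images loaded, deployment proceeds as in the regular flow by copying the YAML file into the kubelet manifests directory:

[bf] # cp doca_snap.yaml /etc/kubelet.d/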

To build SPDK-19.04 for SNAP integration:

  1. Cherry-pick a critical fix for the SPDK shared libraries installation (applied upstream only from v19.07 onward):

    [spdk.git] git cherry-pick cb0c0509

  2. Configure SPDK:

    [spdk.git] git submodule update --init
    [spdk.git] ./configure --prefix=/opt/mellanox/spdk --disable-tests --without-crypto --without-fio --with-vhost --without-pmdk --without-rbd --with-rdma --with-shared --with-iscsi-initiator --without-vtune
    [spdk.git] sed -i -e 's/CONFIG_RTE_BUILD_SHARED_LIB=n/CONFIG_RTE_BUILD_SHARED_LIB=y/g' dpdk/build/.config

    Note

    The flags --prefix, --with-rdma, and --with-shared are mandatory.

  3. Make SPDK (and DPDK libraries):

    [spdk.git] make && make install
    [spdk.git] cp dpdk/build/lib/* /opt/mellanox/spdk/lib/
    [spdk.git] cp dpdk/build/include/* /opt/mellanox/spdk/include/
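
As a quick sanity check, verify that the shared libraries were installed under the configured prefix, for example:

[spdk.git] ls /opt/mellanox/spdk/lib/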

PCIe BDF (Bus, Device, Function) is a unique identifier assigned to every PCIe device connected to a computer. By identifying each device with a unique BDF number, the computer's OS can manage the system's resources efficiently and effectively.

PCIe BDF values are determined by the host OS and are therefore subject to change between runs, or even during a single run. This makes the BDF identifier a poor fit for permanent configuration.

To overcome this problem, NVIDIA devices add an extension to PCIe attributes, called VUIDs. As opposed to BDF, VUID is persistent across runs which makes it useful as a PCIe function identifier.

PCIe BDF and VUID can each be extracted from the other using the lspci command:

  1. To extract VUID out of BDF:

    [host] lspci -s <BDF> -vvv | grep -i VU | awk '{print $4}'

  2. To extract BDF out of VUID:

    [host] ./get_bdf.py <VUID>
    [host] cat ./get_bdf.py
    #!/usr/bin/python3

    import subprocess
    import sys

    vuid = sys.argv[1]

    # List all PCI functions, one entry per line
    lspci_output = subprocess.check_output(['lspci']).decode().strip().split('\n')

    # Check each PCI function's verbose lspci info for the requested VUID
    for line in lspci_output:
        bdf = line.split()[0]
        if vuid in subprocess.check_output(['lspci', '-s', bdf, '-vvv']).decode():
            print(bdf)
            exit(0)

    print("Not Found")
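
As a consistency check, the two directions can be combined into a round trip (a sketch; substitute a real BDF from lspci output):

[host] VUID=$(lspci -s <BDF> -vvv | grep -i VU | awk '{print $4}')
[host] ./get_bdf.py "$VUID"     # expected to print back the same <BDF>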

This appendix explains how SNAP consumes memory and how to manage memory allocation.

The user must allocate the DPA hugepages memory according to the section "Step 1: Allocate Hugepages". It is possible to use a portion of the DPU memory allocation in the SNAP container as described in section "Adjusting YAML Configuration". This configuration includes the following minimum and maximum values:

  • The minimum allocation which the SNAP container consumes:

    resources:
      requests:
        memory: "4Gi"

  • The maximum allocation that the SNAP container is allowed to consume:

    resources:
      limits:
        hugepages-2Mi: "4Gi"
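
For reference, a 4 GiB pool of 2 MiB hugepages corresponds to 4 GiB / 2 MiB = 2048 pages, which could be allocated as follows (a sketch; refer to section "Step 1: Allocate Hugepages" for the authoritative procedure):

[dpu] echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages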

Hugepage memory is used by the following:

  • The SPDK mem-size global variable, which controls SPDK hugepages consumption (configurable in SPDK; 1 GB by default)

  • SNAP_MEMPOOL_SIZE_MB – used in non-zero-copy (non-ZC) mode as staging buffers for I/O on the Arm cores. By default, the SNAP mempool consumes 1 GB out of the SPDK mem-size hugepages allocation. The SNAP mempool may be configured using the SNAP_MEMPOOL_SIZE_MB global variable (minimum is 64 MB).

    Note

    If the assigned value is too low, performance degradation may be observed in non-ZC mode.

  • SNAP and SPDK internal usage – 1 GB should be used by default. This may be reduced depending on the overall scale (i.e., number of VFs, queues, and queue depth).

  • XLIO buffers – allocated only when NVMeTCP XLIO is enabled.

The following sets the limit on the total memory that the SNAP container is allowed to consume:

resources:
  limits:
    memory: "6Gi"

Info

This limit includes the hugepages limit (in this example, an additional 2 GB of non-hugepages memory is allowed).

The SNAP container also consumes DPU SHMEM memory when NVMe recovery is used (described in section "NVMe Recovery"). In addition, the following resources are used:

limits:
  memory:

With a Linux environment on the host OS, additional kernel boot parameters may be required to support SNAP-related features (see the GRUB sketch after this list):

  • To use SR-IOV:

    • For Intel, intel_iommu=on iommu=pt must be added

    • For AMD, amd_iommu=on iommu=pt must be added

  • To use PCIe hotplug, pci=realloc must be added

  • To use a non-built-in virtio-blk or virtio-pci driver, modprobe.blacklist=virtio_blk,virtio_pci must be added
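
For example, on RHEL-like hosts, these parameters can be added through GRUB (a sketch; file locations and the regeneration command vary by distribution):

[host] # vi /etc/default/grub          # append the required parameters to GRUB_CMDLINE_LINUX
[host] # grub2-mkconfig -o /boot/grub2/grub.cfg
[host] # reboot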

To view boot parameter values, run:

cat /proc/cmdline

It is recommended to use the following with virtio-blk:

[dpu] cat /proc/cmdline
BOOT_IMAGE … pci=realloc modprobe.blacklist=virtio_blk,virtio_pci

To enable VFs (virtio_blk/NVMe):

echo 125 > /sys/bus/pci/devices/0000\:27\:00.4/sriov_numvfs

Intel Server Performance Optimizations

cat /proc/cmdline
BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.15.0_mlnx root=UUID=91528e6a-b7d3-4e78-9d2e-9d5ad60e8273 ro crashkernel=auto resume=UUID=06ff0f35-0282-4812-894e-111ae8d76768 rhgb quiet iommu=pt intel_iommu=on pci=realloc modprobe.blacklist=virtio_blk,virtio_pci


AMD Server Performance Optimizations

cat /proc/cmdline
BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.15.0_mlnx root=UUID=91528e6a-b7d3-4e78-9d2e-9d5ad60e8273 ro crashkernel=auto resume=UUID=06ff0f35-0282-4812-894e-111ae8d76768 rhgb quiet iommu=pt amd_iommu=on pci=realloc modprobe.blacklist=virtio_blk,virtio_pci


© Copyright 2025, NVIDIA. Last updated on Aug 25, 2025.