DOCA Documentation v3.2.0

SNAP-4 Service Appendixes

Before configuring SNAP, the user must ensure that all firmware configuration requirements are met. By default, SNAP is disabled and must be enabled by applying both the common SNAP configuration and any additional protocol-specific configuration, depending on the expected usage of the application (e.g., hot-plug, SR-IOV, UEFI boot).

After configuration is finished, the host must be power cycled for the changes to take effect.

Note

To verify that all configuration requirements are satisfied, users may query the current/next configuration by running the following:

mlxconfig -d /dev/mst/mt41692_pciconf0 -e query

System Configuration Parameters

INTERNAL_CPU_MODEL
  Description: Enable BlueField to work in internal CPU model
  Note: Must be set to 1 for storage emulations.
  Possible values: 0/1

SRIOV_EN
  Description: Enable SR-IOV
  Possible values: 0/1

PCI_SWITCH_EMULATION_ENABLE
  Description: Enable PCIe switch for emulated PFs
  Possible values: 0/1

PCI_SWITCH_EMULATION_NUM_PORT
  Description: The maximum number of hotplug emulated PFs equals PCI_SWITCH_EMULATION_NUM_PORT-1. For example, if PCI_SWITCH_EMULATION_NUM_PORT=32, then the maximum number of hotplug emulated PFs is 31.
  Note: One switch port is reserved for all static PFs.
  Possible values: [0,2-32]

Note

SRIOV_EN is valid only for static PFs.
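
For example, a minimal sketch of applying these parameters with mlxconfig (the values are illustrative only and assume the same MST device used in the query above; the host must then be power cycled as noted earlier):

[dpu] mlxconfig -d /dev/mst/mt41692_pciconf0 s INTERNAL_CPU_MODEL=1 PCI_SWITCH_EMULATION_ENABLE=1 PCI_SWITCH_EMULATION_NUM_PORT=32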


RDMA/RoCE Configuration

By default, BlueField's RDMA/RoCE communication is blocked for its primary OS interfaces (known as ECPFs, typically mlx5_0 and mlx5_1).

If RoCE traffic is required, you must create additional network functions (scalable functions) that support RDMA/RoCE.

Note

This configuration is not required when working over TCP or RDMA/IB.

To enable RoCE interfaces, run the following from within the DPU:

[dpu] mlxconfig -d /dev/mst/mt41692_pciconf0 s PER_PF_NUM_SF=1
[dpu] mlxconfig -d /dev/mst/mt41692_pciconf0 s PF_SF_BAR_SIZE=8 PF_TOTAL_SF=2
[dpu] mlxconfig -d /dev/mst/mt41692_pciconf0.1 s PF_SF_BAR_SIZE=8 PF_TOTAL_SF=2
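
To confirm the resulting values, the query command shown earlier may be filtered for these parameters (the grep pattern below is only an example):

[dpu] mlxconfig -d /dev/mst/mt41692_pciconf0 -e query | grep -E 'PER_PF_NUM_SF|PF_SF_BAR_SIZE|PF_TOTAL_SF'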


NVMe Configuration

NVME_EMULATION_ENABLE
  Description: Enable NVMe device emulation
  Possible values: 0/1

NVME_EMULATION_NUM_PF
  Description: Number of static emulated NVMe PFs
  Possible values: [0-2]

NVME_EMULATION_NUM_MSIX
  Description: Number of MSIX assigned to emulated NVMe PFs
  Note: Firmware treats this value as a best effort value. The effective number of MSI-X given to the function should be queried as part of the nvme_controller_list RPC command.
  Possible values: [0-63]

NVME_EMULATION_NUM_VF_MSIX
  Description: Number of MSIX per emulated NVMe VF
  Note: Firmware treats this value as a best effort value. The effective number of MSI-X given to the function should be queried as part of the nvme_controller_list RPC command.
  Note: This value should match the maximum number of queues assigned to a VF's NVMe SNAP controller through the nvme_controller_create num_queues parameter, as each queue requires one MSIX interrupt.
  Possible values: [0-4095]

NVME_EMULATION_NUM_VF
  Description: Number of VFs per emulated NVMe PF
  Note: If not 0, overrides NUM_OF_VFS; valid only when SRIOV_EN=1.
  Possible values: [0-256]

EXP_ROM_NVME_UEFI_x86_ENABLE
  Description: Enable NVMe UEFI exprom driver
  Note: Used for the UEFI boot process.
  Possible values: 0/1

NVME_EMULATION_MAX_QUEUE_DEPTH
  Description: Defines the default maximum queue depth for NVMe I/O queues. The value should be set to the binary logarithm of the desired maximum queue size.
    • A value of 0 (default) limits the queue size to 64.
    • The recommended value is 12, which allows for a queue size of 4096.
  Possible values: [0-12]
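
As an illustrative sketch only (the values are examples, not a recommendation for every deployment), a typical NVMe emulation setup could combine these parameters in a single mlxconfig invocation:

[dpu] mlxconfig -d /dev/mst/mt41692_pciconf0 s NVME_EMULATION_ENABLE=1 NVME_EMULATION_NUM_PF=1 NVME_EMULATION_NUM_MSIX=63 NVME_EMULATION_MAX_QUEUE_DEPTH=12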


Virtio-blk Configuration

Warning

Due to virtio-blk protocol limitations, an incorrect configuration of static virtio-blk PFs may cause the host server OS to fail to boot.

Before continuing, make sure you have configured:

  • A working channel to access the Arm cores even when the host is shut down. Setting up such a channel is outside the scope of this document. Please refer to the NVIDIA BlueField DPU BSP documentation for more details.

  • Add the following line to /etc/nvda_snap/snap_rpc_init.conf:

    virtio_blk_controller_create --pf_id 0

    For more information, please refer to section "Virtio-blk Emulation Management".

VIRTIO_BLK_EMULATION_ENABLE
  Description: Enable virtio-blk device emulation
  Possible values: 0/1

VIRTIO_BLK_EMULATION_NUM_PF
  Description: Number of static emulated virtio-blk PFs
  Note: See warning above.
  Possible values: [0-4]

VIRTIO_BLK_EMULATION_NUM_MSIX
  Description: Number of MSIX assigned to emulated virtio-blk PFs
  Note: The firmware treats this value as a best effort value. The effective number of MSI-X given to the function should be queried as part of the virtio_blk_controller_list RPC command.
  Possible values: [0-63]

VIRTIO_BLK_EMULATION_NUM_VF_MSIX
  Description: Number of MSIX per emulated virtio-blk VF
  Note: The firmware treats this value as a best effort value. The effective number of MSI-X given to the function should be queried as part of the virtio_blk_controller_list RPC command.
  Note: This value should match the maximum number of queues assigned to a VF's virtio-blk SNAP controller through the virtio_blk_controller_create num_queues parameter, as each queue requires one MSIX interrupt.
  Possible values: [0-4095]

VIRTIO_BLK_EMULATION_NUM_VF
  Description: Number of VFs per emulated virtio-blk PF
  Note: If not 0, overrides NUM_OF_VFS; valid only when SRIOV_EN=1.
  Possible values: [0-2000]

EXP_ROM_VIRTIO_BLK_UEFI_x86_ENABLE
  Description: Enable virtio-blk UEFI exprom driver
  Note: Used for the UEFI boot process.
  Possible values: 0/1
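
As with NVMe, the virtio-blk parameters may be set in a single mlxconfig invocation. The following is a sketch with example values only; it keeps VIRTIO_BLK_EMULATION_NUM_PF at 0 (hotplug-only usage) in light of the warning above about static virtio-blk PFs:

[dpu] mlxconfig -d /dev/mst/mt41692_pciconf0 s VIRTIO_BLK_EMULATION_ENABLE=1 VIRTIO_BLK_EMULATION_NUM_PF=0 VIRTIO_BLK_EMULATION_NUM_MSIX=63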


To configure persistent network interfaces so they are not lost after reboot, modify the following four files under /etc/sysconfig/network-scripts (or create them if they do not exist), then reboot:

# cd /etc/sysconfig/network-scripts/
# cat ifcfg-p0
NAME="p0"
DEVICE="p0"
NM_CONTROLLED="no"
DEVTIMEOUT=30
PEERDNS="no"
ONBOOT="yes"
BOOTPROTO="none"
TYPE=Ethernet
MTU=9000

# cat ifcfg-p1
NAME="p1"
DEVICE="p1"
NM_CONTROLLED="no"
DEVTIMEOUT=30
PEERDNS="no"
ONBOOT="yes"
BOOTPROTO="none"
TYPE=Ethernet
MTU=9000

# cat ifcfg-enp3s0f0s0
NAME="enp3s0f0s0"
DEVICE="enp3s0f0s0"
NM_CONTROLLED="no"
DEVTIMEOUT=30
PEERDNS="no"
ONBOOT="yes"
BOOTPROTO="static"
TYPE=Ethernet
IPADDR=1.1.1.1
PREFIX=24
MTU=9000

# cat ifcfg-enp3s0f1s0
NAME="enp3s0f1s0"
DEVICE="enp3s0f1s0"
NM_CONTROLLED="no"
DEVTIMEOUT=30
PEERDNS="no"
ONBOOT="yes"
BOOTPROTO="static"
TYPE=Ethernet
IPADDR=1.1.1.2
PREFIX=24
MTU=9000

The SNAP source package contains the files necessary for building a container with a custom SPDK.

To build the container:

  1. Download and install the SNAP sources package:

    [dpu] # dpkg -i /path/snap-sources_<version>_arm64.deb

  2. Navigate to the src folder and use it as the development environment:

    [dpu] # cd /opt/nvidia/nvda_snap/src

  3. Copy the following to the container folder:

    • SNAP source package – required for installing SNAP inside the container

    • Custom SPDK – to container/spdk. For example:

      [dpu] # cp /path/snap-sources_<version>_arm64.deb container/
      [dpu] # git clone -b v23.01.1 --single-branch --depth 1 --recursive --shallow-submodules https://github.com/spdk/spdk.git container/spdk

  4. Modify the spdk.sh file if necessary, as it is used to compile SPDK.

  5. To build the container:

    • For Ubuntu, run:

      [dpu] # ./container/build_public.sh --snap-pkg-file=snap-sources_<version>_arm64.deb

    • For CentOS, run:

      [dpu] # rpm -i snap-sources-<version>.el8.aarch64.rpm
      [dpu] # cd /opt/nvidia/nvda_snap/src/
      [dpu] # cp /path/snap-sources_<version>_arm64.deb container/
      [dpu] # git clone -b v23.01.1 --single-branch --depth 1 --recursive --shallow-submodules https://github.com/spdk/spdk.git container/spdk
      [dpu] # yum install docker-ce docker-ce-cli
      [dpu] # ./container/build_public.sh --snap-pkg-file=snap-sources_<version>_arm64.deb

  6. Transfer the created image from the Docker tool to the crictl tool. Run:

    [dpu] # docker save doca_snap:<version> > doca_snap.tar
    [dpu] # ctr -n=k8s.io images import doca_snap.tar

    Note

    To transfer the container image to other setups, refer to appendix "Deploying Container on Setups Without Internet Connectivity".

  7. To verify the image, run:

    [DPU] # crictl images
    IMAGE                          TAG          IMAGE ID        SIZE
    docker.io/library/doca_snap    <version>    79c503f0a2bd7   284MB

  8. Edit the image field in the container/doca_snap.yaml file:

    image: doca_snap:<version>

  9. Use the YAML file to deploy the container. Run:

    [dpu] # cp doca_snap.yaml /etc/kubelet.d/

    Note

    The container deployment preparation steps are required.

When Internet connectivity is not available on a DPU, Kubelet scans for the container image locally upon detecting the SNAP YAML. Users can load the container image manually before the deployment.

To accomplish this, users must download the necessary resources using a DPU with Internet connectivity and subsequently transfer and load them onto DPUs that lack Internet connectivity.

  1. To download the .yaml file:

    [bf] # wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/doca/doca_container_configs/versions/<path-to-yaml>/doca_snap.yaml

    Note

    Access the latest download command on NGC. The doca_snap:4.1.0-doca2.0.2 tag is used in this section as an example, and the latest tag is also available on NGC.

  2. To download SNAP container image:

    [bf] # crictl pull nvcr.io/nvidia/doca/doca_snap:4.1.0-doca2.0.2

  3. To verify that the SNAP container image exists:

    [bf] # crictl images
    IMAGE                            TAG               IMAGE ID        SIZE
    nvcr.io/nvidia/doca/doca_snap    4.1.0-doca2.0.2   9d941b5994057   267MB
    k8s.gcr.io/pause                 3.2               2a060e2e7101d   251kB

    Note

    k8s.gcr.io/pause image is required for the SNAP container.

  4. To save the images as .tar files:

    [bf] # mkdir images
    [bf] # ctr -n=k8s.io image export images/snap_container_image.tar nvcr.io/nvidia/doca/doca_snap:4.1.0-doca2.0.2
    [bf] # ctr -n=k8s.io image export images/pause_image.tar k8s.gcr.io/pause:3.2

  5. Transfer the .tar files and run the following to load them into Kubelet:

    [bf] # sudo ctr --namespace k8s.io image import images/snap_container_image.tar
    [bf] # sudo ctr --namespace k8s.io image import images/pause_image.tar

  6. Now the images exist in the container runtime and are ready for deployment:

    [bf] # crictl images
    IMAGE                            TAG               IMAGE ID        SIZE
    nvcr.io/nvidia/doca/doca_snap    4.1.0-doca2.0.2   9d941b5994057   267MB
    k8s.gcr.io/pause                 3.2               2a060e2e7101d   251kB

To build SPDK-19.04 for SNAP integration:

  1. Cherry-pick a critical fix for SPDK shared libraries installation (originally applied on upstream only since v19.07).

    [spdk.git] git cherry-pick cb0c0509

  2. Configure SPDK:

    [spdk.git] git submodule update --init
    [spdk.git] ./configure --prefix=/opt/mellanox/spdk --disable-tests --without-crypto --without-fio --with-vhost --without-pmdk --without-rbd --with-rdma --with-shared --with-iscsi-initiator --without-vtune
    [spdk.git] sed -i -e 's/CONFIG_RTE_BUILD_SHARED_LIB=n/CONFIG_RTE_BUILD_SHARED_LIB=y/g' dpdk/build/.config

    Note

    The flags --prefix, --with-rdma, and --with-shared are mandatory.

  3. Make SPDK (and DPDK libraries):

    [spdk.git] make && make install
    [spdk.git] cp dpdk/build/lib/* /opt/mellanox/spdk/lib/
    [spdk.git] cp dpdk/build/include/* /opt/mellanox/spdk/include/

PCIe BDF (Bus, Device, Function) is a unique identifier assigned to every PCIe device connected to a computer. By identifying each device with a unique BDF number, the computer's OS can manage the system's resources efficiently and effectively.

PCIe BDF values are determined by the host OS and are therefore subject to change between runs, or even within a single run. As a result, the BDF identifier is not a good fit for permanent configuration.

To overcome this problem, NVIDIA devices add an extension to the PCIe attributes, called VUID. As opposed to BDF, VUID is persistent across runs, which makes it useful as a PCIe function identifier.

PCIe BDF and VUID can each be extracted from the other using the lspci command:

  1. To extract VUID out of BDF:

    [host] lspci -s <BDF> -vvv | grep -i VU | awk '{print $4}'

  2. To extract BDF out of VUID:

    [host] ./get_bdf.py <VUID>
    [host] cat ./get_bdf.py
    #!/usr/bin/python3

    import subprocess
    import sys

    vuid = sys.argv[1]

    # Split the output into individual PCI function entries
    lspci_output = subprocess.check_output(['lspci']).decode().strip().split('\n')

    # Create an empty dictionary to store the results
    pci_functions = {}

    # Loop through each PCI function and extract the BDF and full info
    for line in lspci_output:
        bdf = line.split()[0]
        if vuid in subprocess.check_output(['lspci', '-s', bdf, '-vvv']).decode():
            print(bdf)
            exit(0)

    print("Not Found")

This appendix explains how SNAP consumes memory and how to manage memory allocation.

The user must allocate the DPA hugepages memory according to the section "Step 1: Allocate Hugepages". It is possible to use a portion of the DPU memory allocation in the SNAP container, as described in section "Adjusting YAML Configuration". This configuration includes the following minimum and maximum values (a brief hugepage reservation sketch follows these values):

  • The minimum allocation which the SNAP container consumes:

    resources:
      requests:
        memory: "4Gi"

  • The maximum allocation that the SNAP container is allowed to consume:

    resources:
      limits:
        hugepages-2Mi: "4Gi"
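
As a brief sketch of what the corresponding hugepage reservation on the DPU may look like (the authoritative procedure is in section "Step 1: Allocate Hugepages"; the page count below is only an example matching the 4Gi limit above):

[dpu] echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages    # 2048 x 2MiB pages = 4GiB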

Hugepage memory is used by the following:

  • SPDK mem-size global variable which controls the SPDK hugepages consumption (configurable in SPDK, 1GB by default)

  • SNAP SNAP_MEMPOOL_SIZE_MB – used in non-ZC mode for staging I/O buffers on the Arm. By default, the SNAP mempool consumes 1GB out of the SPDK mem-size hugepages allocation. The SNAP mempool may be configured using the SNAP_MEMPOOL_SIZE_MB global variable (minimum is 64 MB).

    Note

    If the assigned value is too low, performance degradation may be observed in non-ZC mode.

  • SNAP and SPDK internal usage – 1G should be used by default. This may be reduced depending on the overall scale (i.e., VFs/num queues/QD).

  • XLIO buffers – allocated only when NVMeTCP XLIO is enabled.

The following sets the limit of memory that the SNAP container is allowed to use:

resources:
  limits:
    memory: "6Gi"

Info

This limit includes the hugepages limit (in this example, an additional 2GB of non-hugepages memory is allowed).

The SNAP container also consumes DPU SHMEM memory when NVMe recovery is used (described in section "NVMe Recovery"). In addition, the following resources are used:

limits:
  memory:

With a Linux environment on the host OS, additional kernel boot parameters may be required to support SNAP-related features (an example of applying them follows this list):

  • To use SR-IOV:

    • For Intel, intel_iommu=on iommu=pt must be added

    • For AMD, amd_iommu=on iommu=pt must be added

  • To use PCIe hotplug, pci=realloc must be added

  • To use a non-built-in virtio-blk or virtio-pci driver, modprobe.blacklist=virtio_blk,virtio_pci must be added
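
As a sketch of how these parameters may be applied on a GRUB-based host (assuming the grubby tool is available, as on RHEL-like distributions; adjust the argument list to the platform and reboot afterward):

[host] grubby --update-kernel=ALL --args="iommu=pt intel_iommu=on pci=realloc modprobe.blacklist=virtio_blk,virtio_pci"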

To view boot parameter values, run:

cat /proc/cmdline

It is recommended to use the following with virtio-blk:

[dpu] cat /proc/cmdline
BOOT_IMAGE … pci=realloc modprobe.blacklist=virtio_blk,virtio_pci

To enable VFs (virtio_blk/NVMe):

echo 125 > /sys/bus/pci/devices/0000\:27\:00.4/sriov_numvfs
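
The number of VFs that can be enabled is bounded by what the emulated PF exposes. As a sketch (reusing the example BDF from the command above), this limit can be checked before writing sriov_numvfs:

cat /sys/bus/pci/devices/0000\:27\:00.4/sriov_totalvfs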

Intel Server Performance Optimizations

cat /proc/cmdline
BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.15.0_mlnx root=UUID=91528e6a-b7d3-4e78-9d2e-9d5ad60e8273 ro crashkernel=auto resume=UUID=06ff0f35-0282-4812-894e-111ae8d76768 rhgb quiet iommu=pt intel_iommu=on pci=realloc modprobe.blacklist=virtio_blk,virtio_pci


AMD Server Performance Optimizations

cat /proc/cmdline
BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.15.0_mlnx root=UUID=91528e6a-b7d3-4e78-9d2e-9d5ad60e8273 ro crashkernel=auto resume=UUID=06ff0f35-0282-4812-894e-111ae8d76768 rhgb quiet iommu=pt amd_iommu=on pci=realloc modprobe.blacklist=virtio_blk,virtio_pci


The SPDK bdev_nvme module utilizing RDMA transport does not support Completion Queue (CQ) resizing.

This constraint limits the number of SPDK NVMe RDMA IO queues that can be created per core. Consequently, this restricts the maximum number of supported PCI functions (including VFs, static PFs, and hot-plug PFs).

The actual limit depends on several factors, including the maximum queue depth configured on the NVMf RDMA target and the number of paths to the target. For example, with a max queue depth of 128 (the default for SPDK NVMe-oF targets) and a single path per NVMe-oF subsystem, the default CQ size supports only up to 16 IO queues per core.

To support larger-scale deployments, the bdev_nvme module must be configured to use a Shared Receive Queue (SRQ). This allows multiple connections to share receive buffers, significantly increasing scalability.

Enable SRQ by setting the rdma-srq-size option via the bdev_nvme_set_options RPC:

spdk_rpc.py bdev_nvme_set_options --rdma-srq-size <RDMA_SRQ_SIZE>

The SRQ size must be sufficiently large to minimize Receiver Not Ready (RNR) events. If the size is too small, RNR events will occur, leading to performance degradation. The optimal value depends on the specific traffic profile of the workload.

The following values apply to the rdma-srq-size option:

  • Minimum recommended value: 4096. Setting the size smaller than 4096 is not recommended.
  • Maximum supported value: 32767. This is the hard limit for the SRQ size.
  • Disable SRQ: 0. Setting the size to 0 disables SRQ usage.
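
For example, a sketch using the minimum recommended size (the value is illustrative and should be tuned to the workload's traffic profile):

spdk_rpc.py bdev_nvme_set_options --rdma-srq-size 4096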

This section explains how to extend the SNAP service with custom SPDK Bdev modules. The guide provides two distinct workflows:

  1. Production Build: Creating a new container image that permanently includes the custom module.

  2. Development Build: Manually building and running inside a container for rapid iteration and testing.

Note

The examples provided below utilize the external passthru bdev module included with SPDK (spdk/test/external_code/passthru). You should adapt these paths and commands to suit your specific custom module and deployment environment.

Method 1: Building a Production Container (Recommended)

This method generates a standalone container image with the SNAP service and your custom functionality built-in, making it suitable for deployment via Kubernetes.

Environment Preparation

Create a working directory and clone the SPDK source code corresponding to your SNAP version.

# Create working directory (example: /opt/build)
mkdir -p /opt/build
cd /opt/build

# Clone SPDK sources
git clone https://github.com/Mellanox/spdk --branch v25.05.2.nvda

# Verify directory structure
ls -F
# Output should show: Dockerfile spdk/


Create the Dockerfile

Create a Dockerfile in your working directory with the following content. This multi-stage build compiles the module and then copies it into the final SNAP image.

# Stage 1: Builder
FROM nvcr.io/nvstaging/doca/doca_snap:4.9.0-6-doca3.2.0 as builder

# Install build dependencies
RUN apt-get update && apt-get install -y \
    autoconf libtool python3-pyelftools libaio-dev \
    libncurses-dev libfuse3-dev patchelf libcmocka-dev make

# Copy external code source
COPY spdk/test/external_code /external_code

# Build the custom module
WORKDIR /external_code
ENV SPDK_HEADER_DIR=/opt/mellanox/spdk/include
ENV SPDK_LIB_DIR=/opt/mellanox/spdk/lib
ENV DPDK_LIB_DIR=/opt/mellanox/spdk/include

RUN make passthru_shared

# Stage 2: Final Image
FROM nvcr.io/nvstaging/doca/doca_snap:4.9.0-6-doca3.2.0

# Copy compiled shared object
COPY --from=builder /external_code/passthru/libpassthru_external.so /opt/nvidia/spdk/lib/
ENV SNAP_LD_PRELOAD=/opt/nvidia/spdk/lib/libpassthru_external.so

# Copy RPC python script
COPY --from=builder /external_code/passthru/bdev_passthru.py /usr/lib/python3/dist-packages/spdk/rpc/
ENV SPDK_RPC_PLUGIN="spdk.rpc.bdev_passthru"


Build and Deploy

  1. Build the container image:

    docker build -t doca_snap_custom_bdev:latest -f Dockerfile .

  2. Import to Kubernetes (Kubelet):

    # Export image to tarball
    docker save doca_snap_custom_bdev:latest > doca_snap_custom_bdev.tar

    # Import into containerd
    ctr -n=k8s.io images import doca_snap_custom_bdev.tar

  3. Edit /etc/kubelet.d/doca_snap.yaml to use the new image:

    image: doca_snap_custom_bdev:latest

Configuration Example

Once the container is running, you can configure the custom bdev module via RPC.

# Identify the running SNAP container ID
SNAP_ID=$(crictl ps -s running -q --name snap)

# Create a base Malloc bdev
crictl exec $SNAP_ID spdk_rpc.py bdev_malloc_create 32 4096 -b Malloc0

# Construct the custom passthru bdev on top of Malloc0
crictl exec $SNAP_ID spdk_rpc.py construct_ext_passthru_bdev -b Malloc0 -p TestPT

# Create NVMe Subsystem and Controller
crictl exec $SNAP_ID snap_rpc.py nvme_subsystem_create --nqn nqn.2023-05.io.nvda.nvme:0
crictl exec $SNAP_ID snap_rpc.py nvme_controller_create --nqn nqn.2023-05.io.nvda.nvme:0 --pf_id 0 --ctrl NVMeCtrl0 --suspended

# Create Namespace using the custom bdev (TestPT)
crictl exec $SNAP_ID snap_rpc.py nvme_namespace_create --nqn nqn.2023-05.io.nvda.nvme:0 --bdev_name TestPT --nsid 1

# Attach Namespace and Resume Controller
crictl exec $SNAP_ID snap_rpc.py nvme_controller_attach_ns --ctrl NVMeCtrl0 --nsid 1
crictl exec $SNAP_ID snap_rpc.py nvme_controller_resume --ctrl NVMeCtrl0
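
Assuming a static emulated NVMe PF is exposed to the host (as configured above), a quick host-side sanity check, shown here only as a sketch, is to confirm that the emulated controller is visible on the PCIe bus:

[host] lspci | grep -i "non-volatile memory"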

Manual Development Workflow

This approach sets up an interactive development container, ideal for workflows requiring frequent code changes, recompilation, and testing.

Environment Preparation

Prepare the working directory and clone sources (same as the Production method).

mkdir -p /opt/build
cd /opt/build
git clone https://github.com/Mellanox/spdk --branch v25.05.2.nvda


Create Development Dockerfile

Create a simplified Dockerfile that installs dependencies and provides a shell.

FROM nvcr.io/nvstaging/doca/doca_snap:4.9.0-6-doca3.2.0 as builder

RUN apt-get update && apt-get install -y \
    autoconf libtool python3-pyelftools libaio-dev \
    libncurses-dev libfuse3-dev patchelf libcmocka-dev make

ENTRYPOINT /bin/bash


Build and Run Interactive Container

  1. Build the dev image:

    docker build -t doca_snap_custom_bdev_dev:latest -f Dockerfile .

  2. Run interactively. Mount the necessary system volumes and your source code directory:

    docker run -ti --privileged --net=host \
        --volume /dev/hugepages:/dev/hugepages \
        --volume /dev/shm:/dev/shm \
        --volume /dev/infiniband:/dev/infiniband \
        --volume /etc/nvda_snap:/etc/nvda_snap \
        --volume ${PWD}/spdk/test/external_code:/external_code \
        doca_snap_custom_bdev_dev:latest

Build and Test Inside Container

Once inside the container shell, perform the following steps:

  1. Compile the module:

    cd /external_code/
    export SPDK_HEADER_DIR=/opt/mellanox/spdk/include
    export SPDK_LIB_DIR=/opt/mellanox/spdk/lib
    export DPDK_LIB_DIR=/opt/mellanox/spdk/include

    make passthru_shared

  2. Launch the service manually, preloading your compiled shared object:

    # Copy RPC definition
    cp /external_code/passthru/bdev_passthru.py /usr/bin/

    # Load environment variables
    source /opt/nvidia/nvda_snap/bin/set_environment_variables.sh

    # Start Service with LD_PRELOAD
    LD_PRELOAD=/external_code/passthru/libpassthru_external.so \
        /opt/nvidia/nvda_snap/bin/snap_service -m 0xff &

  3. Configure and test:

    # Create base Malloc bdev
    spdk_rpc.py bdev_malloc_create 32 4096 -b Malloc0

    # Create custom Passthru bdev (explicitly specifying plugin)
    spdk_rpc.py --plugin bdev_passthru construct_ext_passthru_bdev -b Malloc0 -p TestPT

    # Setup NVMe Controller logic
    snap_rpc.py nvme_subsystem_create --nqn nqn.2023-05.io.nvda.nvme:0
    snap_rpc.py nvme_controller_create --nqn nqn.2023-05.io.nvda.nvme:0 --pf_id 0 --ctrl NVMeCtrl0 --suspended
    snap_rpc.py nvme_namespace_create --nqn nqn.2023-05.io.nvda.nvme:0 --bdev_name TestPT --nsid 1
    snap_rpc.py nvme_controller_attach_ns --ctrl NVMeCtrl0 --nsid 1
    snap_rpc.py nvme_controller_resume --ctrl NVMeCtrl0

© Copyright 2025, NVIDIA. Last updated on Nov 20, 2025