
DPL Runtime Service

This page explains how to deploy, configure, and operate the DPL Runtime Service on NVIDIA BlueField DPUs, enabling runtime management of data plane pipelines compiled using the DOCA Pipeline Language (DPL).

The DPL Runtime Service is responsible for programming and managing the DPU datapath at runtime. It enables dynamic rule insertion, packet monitoring, and forwarding logic through standardized and proprietary APIs. The service is divided into four core components:

  • Runtime core – Main manager that handles requests from all interfaces and programs hardware resources

  • P4Runtime server – gRPC-based server implementing the P4Runtime 1.3.0 API for remote pipeline configuration and table control

  • Nspect server – Provides runtime visibility and debugging through the dpl_nspect client

  • Admin server – Offers administrative and configuration control over the daemon via gRPC

High-level system illustration: [diagram]

Supported Platforms

DPL is supported on the following NVIDIA BlueField DPUs and SuperNICs:

  • 900-9D3B6-00CV-AA0 (Legacy OPN: N/A; PSID: MT_0000000884) – BlueField-3 B3220 P-Series FHHL DPU; 200GbE (default mode) / NDR200 IB; Dual-port QSFP112; PCIe Gen5.0 x16 with x16 PCIe extension option; 16 Arm cores; 32GB on-board DDR; integrated BMC; Crypto Enabled

  • 900-9D3B6-00SV-AA0 (Legacy OPN: N/A; PSID: MT_0000000965) – BlueField-3 B3220 P-Series FHHL DPU; 200GbE (default mode) / NDR200 IB; Dual-port QSFP112; PCIe Gen5.0 x16 with x16 PCIe extension option; 16 Arm cores; 32GB on-board DDR; integrated BMC; Crypto Disabled

  • 900-9D3B6-00CC-AA0 (Legacy OPN: N/A; PSID: MT_0000001024) – BlueField-3 B3210 P-Series FHHL DPU; 100GbE (default mode) / HDR100 IB; Dual-port QSFP112; PCIe Gen5.0 x16 with x16 PCIe extension option; 16 Arm cores; 32GB on-board DDR; integrated BMC; Crypto Enabled

  • 900-9D3B6-00SC-AA0 (Legacy OPN: N/A; PSID: MT_0000001025) – BlueField-3 B3210 P-Series FHHL DPU; 100GbE (default mode) / HDR100 IB; Dual-port QSFP112; PCIe Gen5.0 x16 with x16 PCIe extension option; 16 Arm cores; 32GB on-board DDR; integrated BMC; Crypto Disabled

For full system requirements, see the DPU's hardware user guide. The following minimum software versions are required to run the DPL Runtime Service:

  • Ubuntu OS – 22.04

  • DOCA BFB Bundle – 3.0.0-xx

  • Firmware – 32.45.1002 or later

  • DOCA Networking – 3.0.0 or later

Firmware is included in the BFB bundle available via the NVIDIA DevZone. Use the bfb-install command to install the bundle:


bfb-install --bfb bf-bundle-3.0.0-102_25.04_ubuntu-22.04_prod.bfb --rshim rshim0
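
Before running bfb-install, it can help to confirm that the rshim driver is loaded and the device is visible. A minimal check; the device name rshim0 is the common default and may differ on your system:

# Verify the rshim service is running and the device node exists
sudo systemctl status rshim
ls /dev/rshim0*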


The following firmware configurations are required for running the DPL Runtime Service; they are enabled by the dpl_dpu_setup.sh script documented below.

FLEX_PARSER_PROFILE_ENABLE=4
PROG_PARSE_GRAPH=true
SRIOV_EN=1   # Only if VFs are used
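
These settings can also be applied manually with mlxconfig, as the setup script does. A sketch, assuming the device is at PCI address 0000:03:00.0; a firmware reset or power cycle is required for the changes to take effect:

# Apply the required firmware configuration manually
sudo mlxconfig -d 0000:03:00.0 set FLEX_PARSER_PROFILE_ENABLE=4 PROG_PARSE_GRAPH=true SRIOV_EN=1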

Prerequisites:

  • BlueField must be in DPU Mode.

  • SR-IOV or SFs must be enabled and created (on the host side); see the example after this list.

  • DOCA must be installed on the host and BlueField.
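
For example, SR-IOV VFs can be created from the host through sysfs. A minimal sketch, assuming the PF network interface is named ens1f0np0; substitute your actual interface name and VF count:

# On the host: create 4 virtual functions on the physical function
echo 4 | sudo tee /sys/class/net/ens1f0np0/device/sriov_numvfs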

Installation:

  1. Download and extract the container bundle from NGC.

    Info

    Refer to DPL Container Deployment for more information.

  2. Run the setup script:

    cd dpl_rt_service_<version>/scripts
    ./dpl_dpu_setup.sh

    Info

    This script:

    • Configures necessary mlxconfig settings

    • Creates a directory structure for config files

    • Enables SR-IOV virtual function support

Configuration Files

Before launching the container, you must create configuration files on the DPU filesystem. These define:

  • Interface names and PCI IDs.

  • Allowed ports and forwarding configuration.

  • Resource constraints and debug options.

Mount the config path into the container using Docker or Kubernetes volume mounts. Refer to DPL Service Configuration for details.
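
For example, with Docker the configuration directory can be passed through as a bind mount. A minimal sketch, assuming the files live under /etc/dpl_rt_service on the DPU and using a placeholder image name; substitute the actual NGC image and any flags required by your deployment:

# Run the DPL Runtime Service container with the config directory mounted
# (--privileged and host networking are assumptions for this sketch)
sudo docker run --privileged --network host \
    -v /etc/dpl_rt_service:/etc/dpl_rt_service \
    <dpl_rt_service_image>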

The following table lists the possible ingress and egress ports for a given packet that is processed by a BlueField pipeline (DPU mode):

Ingress \ Egress | Wire port P0 | pf0hpf   | pf0vf_n  | pf0vf_m
Wire port P0     | Allowed      | Allowed  | Allowed  | Allowed
pf0hpf           | Allowed      | Disabled | Allowed  | Allowed
pf0vf_n          | Allowed      | Allowed  | Disabled | Allowed
pf0vf_m          | Allowed      | Allowed  | Allowed  | Disabled

Info

Anything that is allowed with SR-IOV VFs in the table above is also allowed with scalable functions (SFs).

Multiport eSwitch Mode

In multiport eSwitch mode, all uplinks and VFs/SFs representors of all physical ports are managed by the same hardware e-switch. This allows for traffic forwarding between the physical ports, such as from P0 to P1, when using a device with multiple physical ports.

To use two uplink/wire ports with a single DPL device, you must enable multiport eSwitch (esw_multiport) mode before starting the DPL Runtime Service:

  1. Make sure that LAG_RESOURCE_ALLOCATION is enabled in the firmware configurations.

    For example:


    sudo mlxconfig -d 0000:03:00.0 s LAG_RESOURCE_ALLOCATION=1

    Note

    For more details on how to query and adjust firmware configurations, refer to Using mlxconfig.

    A power cycle or firmware reset is required for this change to take effect. Refer to mlxfwreset for more information.

  2. Once the system is up with LAG_RESOURCE_ALLOCATION enabled, the esw_multiport mode can be enabled using the devlink command.

    For example:


    sudo devlink dev param set pci/0000:03:00.0 name esw_multiport value 1 cmode runtime

    Note

    devlink settings are not persistent across system reboots.
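
    To confirm the setting took effect, and to re-check it after a reboot, you can query the parameter:

    # Query the current esw_multiport setting
    sudo devlink dev param show pci/0000:03:00.0 name esw_multiport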

Refer to the DPL Installation Guide for more information.

Advanced use cases may require custom controllers. These should implement the P4Runtime 1.3.0 gRPC API and connect to the DPL Runtime Server on TCP port 9559.

Resources:

  • Protobufs: p4lang/p4runtime GitHub

  • Installed inside the container: /opt/mellanox/third_party/dpl_rt_service/p4runtime/

  • Compile using a standard gRPC + protobuf toolchain (e.g., C++)
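
As a sketch, C++ stubs can be generated from the bundled protobufs using protoc with the gRPC C++ plugin. In the upstream p4lang/p4runtime repository the entry point is p4/v1/p4runtime.proto; the exact layout under the container directory, and any extra include paths needed for its imports (e.g., google/rpc/status.proto from googleapis), may differ:

# Generate C++ message and gRPC stub code from the P4Runtime protobufs
P4RT=/opt/mellanox/third_party/dpl_rt_service/p4runtime
mkdir -p gen
# Additional -I paths may be required for imported protos (e.g., googleapis)
protoc -I"$P4RT" \
    --cpp_out=gen --grpc_out=gen \
    --plugin=protoc-gen-grpc="$(which grpc_cpp_plugin)" \
    "$P4RT/p4/v1/p4runtime.proto"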

Checkpoint              | Command
View kubelet logs       | sudo journalctl -u kubelet --since -5m
List pulled images      | sudo crictl images
List created pods       | sudo crictl pods
List running containers | sudo crictl ps
View DPL logs           | /var/log/doca/dpl_rt_service/dpl_rtd.log

Also verify that:

  • VFs were created before deploying the container

  • Configuration files exist, are correctly named, and contain valid device IDs

  • Interface names and MTUs match hardware

  • Firmware is up-to-date

  • DPU is in the correct mode (check mlxconfig; see the query example below)
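
For example, DPU mode can be verified by querying INTERNAL_CPU_MODEL, which typically reads EMBEDDED_CPU(1) when the BlueField is in DPU mode. A sketch, assuming the device is at 0000:03:00.0:

# Check the BlueField operation mode
sudo mlxconfig -d 0000:03:00.0 q | grep INTERNAL_CPU_MODEL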

© Copyright 2025, NVIDIA. Last updated on May 5, 2025.