DPL Runtime Service
This page explains how to deploy, configure, and operate the DPL Runtime Service on NVIDIA BlueField DPUs, enabling runtime management of data plane pipelines compiled using the DOCA Pipeline Language (DPL).
The DPL Runtime Service is responsible for programming and managing the DPU datapath at runtime. It enables dynamic rule insertion, packet monitoring, and forwarding logic through standardized and proprietary APIs. The service is divided into four core components:
Component | Description |
Runtime core | Main manager that handles requests from all interfaces and programs hardware resources |
P4Runtime server | gRPC-based server implementing the P4Runtime 1.3.0 API for remote pipeline configuration and table control |
Nspect server | Provides runtime visibility and debugging through the DPL Nspect client tool |
Admin server | Offers administrative and configuration control over the daemon via gRPC |
High-level system illustration:

Supported Platforms
DPL is supported on the following NVIDIA BlueField DPUs and SuperNICs:
NVIDIA SKU | Legacy OPN | PSID | Description |
900-9D3B6-00CV-AA0 | N/A | MT_0000000884 | BlueField-3 B3220 P-Series FHHL DPU; 200GbE (default mode) / NDR200 IB; Dual-port QSFP112; PCIe Gen5.0 x16 with x16 PCIe extension option; 16 Arm cores; 32GB on-board DDR; integrated BMC; Crypto Enabled |
900-9D3B6-00SV-AA0 | N/A | MT_0000000965 | BlueField-3 B3220 P-Series FHHL DPU; 200GbE (default mode) / NDR200 IB; Dual-port QSFP112; PCIe Gen5.0 x16 with x16 PCIe extension option; 16 Arm cores; 32GB on-board DDR; integrated BMC; Crypto Disabled |
900-9D3B6-00CC-AA0 | N/A | MT_0000001024 | BlueField-3 B3210 P-Series FHHL DPU; 100GbE (default mode) / HDR100 IB; Dual-port QSFP112; PCIe Gen5.0 x16 with x16 PCIe extension option; 16 Arm cores; 32GB on-board DDR; integrated BMC; Crypto Enabled |
900-9D3B6-00SC-AA0 | N/A | MT_0000001025 | BlueField-3 B3210 P-Series FHHL DPU; 100GbE (default mode) / HDR100 IB; Dual-port QSFP112; PCIe Gen5.0 x16 with x16 PCIe extension option; 16 Arm cores; 32GB on-board DDR; integrated BMC; Crypto Disabled |
For system requirements, see the DPU's hardware user guide. The minimum software versions required for the DPL Runtime Service are listed below:
Component | Minimum Version |
Ubuntu OS | 22.04 |
DOCA BFB Bundle | 3.0.0-xx |
Firmware | 32.45.1002+ |
DOCA Networking | 3.0.0+ |
Firmware is included in the BFB bundle available via the NVIDIA DevZone. Use the bfb-install command to install the bundle:
bfb-install --bfb bf-bundle-3.0.0-102_25.04_ubuntu-22.04_prod.bfb --rshim rshim0
The following firmware configurations are required for running the DPL Runtime Service. They are enabled by the dpl_dpu_setup.sh script documented below:
FLEX_PARSER_PROFILE_ENABLE=4
PROG_PARSE_GRAPH=true
SRIOV_EN=1 # Only if VFs are used
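To verify the current values before starting the service, the settings can be read back with a plain mlxconfig query (the PCI address below is an example; adjust it to your BlueField device):
# Query the firmware settings relevant to the DPL Runtime Service.
sudo mlxconfig -d 0000:03:00.0 query | grep -E 'FLEX_PARSER_PROFILE_ENABLE|PROG_PARSE_GRAPH|SRIOV_EN'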
Prerequisites:
BlueField must be in DPU Mode.
SR-IOV VFs or SFs must be enabled and created on the host side (see the example after this list).
DOCA must be installed on the host and BlueField.
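For reference, a common way to create VFs on the host is through the standard Linux sysfs interface; the PF netdev name and VF count below are placeholders:
# Create 4 VFs on the host-side PF (replace <pf-netdev> with the actual
# netdev name of the BlueField uplink, e.g. as shown by "ip link").
echo 4 | sudo tee /sys/class/net/<pf-netdev>/device/sriov_numvfs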
Installation:
Download and extract the container bundle from NGC.
Info: Refer to DPL Container Deployment for more information.
Run the setup script:
cd dpl_rt_service_<version>/scripts
./dpl_dpu_setup.sh
Info: This script:
Configures the necessary mlxconfig settings
Creates the directory structure for configuration files
Enables SR-IOV virtual function support
Configuration Files
Before launching the container, you must create configuration files on the DPU filesystem. These define:
Interface names and PCI IDs.
Allowed ports and forwarding configuration.
Resource constraints and debug options.
Mount the config path into the container using Docker or Kubernetes volume mounts. Refer to DPL Service Configuration for details.
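As a rough sketch only (the actual paths, image name, and runtime flags must be taken from DPL Container Deployment and DPL Service Configuration), a Docker bind mount of the configuration directory could look like this:
# Hypothetical invocation: expose the host-side configuration directory to the
# container as a read-only bind mount; <config-dir> and <dpl-rt-service-image>
# are placeholders, not the documented values.
sudo docker run --privileged --network host \
    -v <config-dir>:<config-dir>:ro \
    <dpl-rt-service-image>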
The following table lists the possible ingress and egress ports for a given packet that is processed by a BlueField pipeline (DPU mode):
Ingress Port \ Egress Port | Wire port P0 |  |  |  |
Wire port P0 | Allowed | Allowed | Allowed | Allowed |
| Allowed | Disabled | Allowed | Allowed |
| Allowed | Allowed | Disabled | Allowed |
| Allowed | Allowed | Allowed | Disabled |
Anything that is allowed with SR-IOV VFs in the table above is also allowed with scalable functions (SFs).
Multiport eSwitch Mode
In multiport eSwitch mode, all uplinks and VFs/SFs representors of all physical ports are managed by the same hardware e-switch. This allows for traffic forwarding between the physical ports, such as from P0 to P1, when using a device with multiple physical ports.
To use two uplink/wire ports on a single DPL device, you must enable multiport eSwitch (esw_multiport) mode before starting the DPL Runtime Service:
Make sure that LAG_RESOURCE_ALLOCATION is enabled in the firmware configuration. For example:
sudo mlxconfig -d 0000:03:00.0 s LAG_RESOURCE_ALLOCATION=1
Note: For more details on how to query and adjust firmware configurations, refer to Using mlxconfig.
A power cycle or firmware reset is required for this change to take effect. Refer to mlxfwreset for more information.
Once the system is up with LAG_RESOURCE_ALLOCATION enabled, the esw_multiport mode can be enabled using the devlink command. For example:
sudo devlink dev param set pci/0000:03:00.0 name esw_multiport value 1 cmode runtime
Note: devlink settings are not persistent across system reboots.
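To confirm the setting took effect, the parameter can be read back with devlink (example PCI address):
# Verify that multiport eSwitch mode is enabled.
sudo devlink dev param show pci/0000:03:00.0 name esw_multiport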
Refer to the DPL Installation Guide for more information.
Advanced use cases may require custom controllers. These should implement the P4Runtime 1.3.0 gRPC API and connect to the DPL Runtime Server on TCP port 9559.
Resources:
Protobufs: p4lang/p4runtime GitHub
Installed inside the container: /opt/mellanox/third_party/dpl_rt_service/p4runtime/
Compile using standard gRPC + protobuf toolchain (e.g., C++)
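As an illustrative sketch, generating C++ stubs from the installed protobufs with protoc could look like the following; the output directory is arbitrary, and the proto layout under the installed path is assumed to mirror the p4lang/p4runtime repository (dependency protos such as p4/config/v1/p4info.proto and the google.rpc/google.protobuf imports must also be resolvable on the include path):
# Hypothetical example: generate C++ client stubs for the P4Runtime 1.3.0 API.
P4RT=/opt/mellanox/third_party/dpl_rt_service/p4runtime
mkdir -p gen
protoc -I"$P4RT" \
    --cpp_out=gen \
    --grpc_out=gen --plugin=protoc-gen-grpc="$(which grpc_cpp_plugin)" \
    "$P4RT/p4/v1/p4runtime.proto"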
Checkpoint | Command |
View kubelet logs | journalctl -u kubelet |
List pulled images | crictl images |
List created pods | crictl pods |
List running containers | crictl ps |
View DPL logs | crictl logs <container-id> |
Also verify that:
VFs were created before deploying the container
Configuration files exist, are correctly named, and contain valid device IDs
Interface names and MTUs match hardware
Firmware is up-to-date
The DPU is in the correct mode (check with mlxconfig; see the example below)
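As a quick check (parameter names can vary between BlueField generations and firmware versions), the embedded-CPU settings that govern the operational mode can be inspected with mlxconfig, using an example PCI address:
# In DPU mode, INTERNAL_CPU_MODEL is expected to read EMBEDDED_CPU(1).
sudo mlxconfig -d 0000:03:00.0 query | grep -i INTERNAL_CPU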