DOCA Documentation v3.2.0

DPL Service Configuration

The DPL Runtime Service utilizes three distinct types of configuration files. These files adhere to a simplified INI-style format, which supports repeated sections (e.g., [INTERFACE]) and uses # for comments rather than ;.
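Because this format allows repeated sections, the standard library configparser (which merges duplicate section names) cannot parse it directly. The following minimal sketch shows one way to read such a file while preserving duplicate [INTERFACE] sections in order; the parser itself is an illustration, not part of the DPL Runtime Service.

```python
# Minimal parser sketch for the DPL config format: INI-style sections that
# may repeat (e.g., [INTERFACE]) and use '#' for comments. Hand-rolled
# because configparser merges duplicate sections. Illustrative only.

def parse_dpl_conf(text):
    """Return a list of (section_name, {key: value}) pairs,
    preserving duplicate sections in file order."""
    sections = []
    current = None
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        if line.startswith("[") and line.endswith("]"):
            current = (line[1:-1], {})
            sections.append(current)
        elif "=" in line and current is not None:
            key, _, value = line.partition("=")
            current[1][key.strip()] = value.strip()
    return sections

example = """
[DEVICE]
dpl_device_id=1000

[INTERFACE]
interface=p0
dpl_logical_port_id=0

[INTERFACE]
interface=pf0hpf
dpl_logical_port_id=65535
"""

parsed = parse_dpl_conf(example)
```

Note that both [INTERFACE] sections survive as separate entries, which is exactly the property a generic INI parser would lose.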

The available configuration files are as follows:

  • DPL Device Configuration: Located at /etc/dpl_rt_service/devices.d/<device-id>.conf.

  • Service Daemon Configuration: Located at /etc/dpl_rt_service/dpl_rt.conf.

  • System Configuration: Located at /etc/dpl_rt_service/system.conf.

The dpl_dpu_setup.sh script installs default (recommended) daemon and system configuration files. However, users must create their own DPL Device configuration file based on the provided template, which is also installed by the setup script.

Note

Any changes made to files within the /etc/dpl_rt_service/ directory require a restart of the DPL Runtime Service Container to take effect. Please refer to the section "Restarting the DPL Runtime Service After Configuration Changes" for instructions.

DPL Device Configuration

Path: /etc/dpl_rt_service/devices.d/<device-id>.conf (e.g., /etc/dpl_rt_service/devices.d/1000.conf).

This file defines the device, its interfaces, and the mapping to DPL Port IDs used in DPL programs. A configuration template is available at /etc/dpl_rt_service/devices.d/NAME.conf.template.

Note

When using SFs or SR-IOV VFs, ensure the configuration references their representor interfaces, not the SF/VF interfaces themselves.

DPL Port ID Mapping

Each physical or virtual interface is assigned a logical DPL Port ID (dpl_logical_port_id). Consistency between the DPL program and this configuration is mandatory.

Supported interface types include:

  • Uplink netdev interface (e.g., p0).

  • PF representor (e.g., pf0hpf).

  • VF representor (e.g., pf0vf0).

  • SF representor (e.g., en3f0pf0sf1).

Unless multiport eswitch is enabled, all listed interfaces must belong to the same uplink port.

DPL Port ID Rules

The following rules apply to dpl_device_id and dpl_logical_port_id values:

Condition                   Requirement
------------------------    ------------------------------------------------
Reserved value              UINT32_MAX
DPL Device ID               Must be a positive integer
DPL Interface ID            Integer between 0 and UINT32_MAX
Uplink ports per config     Exactly one (unless multiport eswitch is enabled)
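These rules can be checked mechanically before deploying a configuration. The sketch below encodes them in Python; the function name and error strings are illustrative assumptions, not part of any DPL tool, and it treats the reserved UINT32_MAX as invalid for a regular port ID.

```python
# Sketch of the DPL Port ID rules as a validation routine. The constraints
# (UINT32_MAX reserved, positive device ID, port IDs in range, a single
# uplink unless multiport eswitch is enabled) come from the table above;
# the function itself is illustrative only.

UINT32_MAX = 2**32 - 1  # reserved value; not usable as a regular port ID

def validate_device_config(dpl_device_id, port_ids, uplink_count,
                           multiport_eswitch=False):
    errors = []
    if dpl_device_id <= 0:
        errors.append("dpl_device_id must be a positive integer")
    for pid in port_ids:
        if not (0 <= pid < UINT32_MAX):
            errors.append(f"port id {pid} is reserved or out of range")
    if not multiport_eswitch and uplink_count != 1:
        errors.append("exactly one uplink port required unless "
                      "multiport eswitch is enabled")
    return errors
```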


Example DPL Device Configuration File

The following is an example of a valid device configuration file.

# [DEVICE] section: Must appear only once.
[DEVICE]
# The DPL Device ID used by the controller to manage tables.
dpl_device_id=1000
# Cache counter timeout to decrease HW accesses.
dpl_counter_cache_timeout=0
# Counter polling interval for idle-timeout (in seconds).
idle_timeout_polling_interval=2
# Polling interval for TCP state objects from HW (in milliseconds).
tcp_state_polling_interval_msec=200

# [P4_RT_CONTROLLER] section: Must appear only once.
[P4_RT_CONTROLLER]
# DPL Port ID for traffic originating from the controller.
p4_controller_port_id=9876

# [INTERFACE] section: Repeated for each DPL Port (network interface).
[INTERFACE]
interface=p0
dpl_logical_port_id=0
mtu=1514
# mac=00:00:00:00:00:00

[INTERFACE]
interface=pf0hpf
dpl_logical_port_id=65535
mtu=1514

[INTERFACE]
interface=pf0vf0
dpl_logical_port_id=1
mtu=1514


Section DEVICE

This section defines the attributes of the DPL Device, which consists of multiple ports and supports DPL program loading. It must appear exactly once per device .conf file.

  • dpl_device_id The Device ID used by controller applications to identify the target device for connection and program management.

  • dpl_counter_cache_timeout This parameter reduces hardware accesses for counter reads. When the cache expires, a hardware access occurs upon the next request. Note that enabling this may result in the retrieval of outdated counter values.

  • idle_timeout_polling_interval Sets the polling interval (in seconds) for reading entry counters from hardware and identifying stale entries (those with no traffic for a period exceeding the defined threshold). This interval affects both Entry Timeout and Delayed Counter Statistics. Larger values impact the accuracy of timeout notifications and the refresh rate of counter statistics.

  • tcp_state_polling_interval_msec Defines the interval (in milliseconds) for reading TCP state objects from the hardware.
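The caching behavior described for dpl_counter_cache_timeout can be pictured as a timestamped cache in front of the hardware read. The class below is a purely illustrative mock of that trade-off (fewer hardware accesses versus possibly stale values); its names are assumptions, not service internals.

```python
# Illustrative sketch of counter caching: a read returns the cached value
# until the timeout expires, after which the next request triggers a
# "hardware" access. With timeout 0 (as in the example file), every read
# goes to hardware.

import time

class CachedCounter:
    def __init__(self, read_hw, timeout_sec):
        self._read_hw = read_hw        # callable performing the HW access
        self._timeout = timeout_sec
        self._value = None
        self._stamp = float("-inf")    # force a HW read on first access

    def read(self):
        now = time.monotonic()
        if now - self._stamp >= self._timeout:
            self._value = self._read_hw()   # hardware access
            self._stamp = now
        return self._value                  # may be stale within timeout
```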

Section P4_RT_CONTROLLER

This section configures communication with P4 Controllers.

p4_controller_port_id P4 controller applications can send packets ("Packet Out") to the DPL Runtime Service via RPC messages. These packets are assigned a port ID equal to p4_controller_port_id. A DPL programmer can identify traffic originating from a P4 Controller by comparing std_meta.ingress_port to this value.
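In a DPL program this check is a comparison on std_meta.ingress_port. The Python sketch below mocks the same classification logic using the port IDs from the example device configuration above; the mapping and function are illustrative assumptions only.

```python
# Mock of the ingress-port classification a DPL program performs: traffic
# whose ingress port equals p4_controller_port_id arrived via controller
# "Packet Out", everything else via a configured interface. Values are
# taken from the example configuration in this document.

PORT_MAP = {0: "p0 (uplink)", 65535: "pf0hpf", 1: "pf0vf0"}
P4_CONTROLLER_PORT_ID = 9876   # from the [P4_RT_CONTROLLER] section

def classify_ingress(port_id):
    if port_id == P4_CONTROLLER_PORT_ID:
        return "controller packet-out"
    return PORT_MAP.get(port_id, "unknown port")
```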

Section INTERFACE

This section defines the attributes of a port belonging to the device. This section may be repeated for multiple interfaces.

  • interface: The net-device interface name on the system.

  • dpl_logical_port_id: The ID used to reference the port in DPL programs and table updates. It is used for matching ingress traffic and directing egress traffic. Programmers can verify the source port by comparing std_meta.ingress_port to this value.

  • mtu: Defines the Ethernet frame size (MTU).

    Warning

    Currently, this field is ignored.

  • mac: Defines the MAC address.

    Warning

    Currently, this field is ignored.

Service Daemon Configuration

Path: /etc/dpl_rt_service/dpl_rt.conf.

This file configures core behaviors for the DPL Runtime Service, including logging and gRPC server binding addresses.

Section LOGGING

  • log_file_path: Defines the path to the dpl_rtd log file.

  • log_level: Sets the default log level upon startup. Supported values (case-insensitive) include: DISABLE, CRITICAL, ERROR, WARNING, INFO, DEBUG, and TRACE.

    Note

    Log levels can be changed dynamically via the DPL Admin client but will not persist across restarts unless updated in this file.

RPC Server Sections

The daemon supports three distinct gRPC servers:

  1. [P4RT_RPC_SERVER]: Listens for P4Runtime client connections. A sample open-source client is included in the DPL Dev container.

  2. [DPL_ADMIN_RPC_SERVER]: Listens for dpl_admin tool connections.

  3. [DPL_NSPECT_RPC_SERVER]: Listens for DPL Debugger tool connections.

Common gRPC Server Parameters

The following parameters apply to all three server sections:

  • server_address: The TCP binding address. Use [::] (IPv6 ANY) to allow connections from all interfaces (including IPv4), or specify a specific IP address.

  • server_tcp_port: The TCP binding port. If a non-default port is configured, client applications must specify this port when connecting.
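When a client connects, it combines the address and port into a single gRPC target string, and a bare IPv6 literal must be bracketed. The helper below is an illustrative assumption about how such a target could be built, not part of any DPL client tool.

```python
# Sketch: build a client-side gRPC target from server_address and
# server_tcp_port. Bare IPv6 literals (e.g., ::1) must be wrapped in
# brackets so the port separator is unambiguous. Illustrative only.

def grpc_target(server_address, server_tcp_port):
    host = server_address
    if ":" in host and not host.startswith("["):
        host = f"[{host}]"   # bracket bare IPv6 literals
    return f"{host}:{server_tcp_port}"
```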

TLS Authentication

TLS authentication can be enabled independently for each server. To enable it, all three of the following parameters must be added together to the relevant server section:

  • server_cert: Path to the server.crt file.

  • ca_cert: Path to the ca.crt file.

  • server_key: Path to the server.key file.
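One way to produce the three files for testing is with openssl: create a self-signed CA, then a server key and certificate signed by that CA. The commands below are a sketch under that assumption; the subject names and the workflow itself are placeholders, not a DOCA requirement, and production deployments should use certificates issued by their own PKI.

```shell
# Illustrative: generate a test CA plus a server key/cert pair matching
# the ca_cert, server_cert, and server_key parameters above.

# 1. Create a CA key and self-signed CA certificate (ca.crt).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout ca.key -out ca.crt -subj "/CN=dpl-test-ca"

# 2. Create the server key (server.key) and a signing request.
openssl req -newkey rsa:2048 -nodes \
    -keyout server.key -out server.csr -subj "/CN=dpl-rt-service"

# 3. Sign the request with the CA to produce server.crt.
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 365 -out server.crt
```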

Section DPL_PACKET_IO

  • enabled: When set to true, initializes the Packet IO infrastructure. This is required for P4 Controller Packet IO, the "Add Entry" feature (from DPL programs), and DPL Debugger tools.

  • dpdk_log_level: Sets the DPDK log level for Packet IO operations (e.g., *:debug). Refer to the DPDK Logging Guide for details.

Section DPL_SHM

  • enabled: When set to true, initializes the Shared Memory (SHM) infrastructure, required for controller applications utilizing the DPL Runtime Controller SDK .

Example Service Daemon Configuration File

# Example of a possible DPL RT Service GENERAL configuration file

[LOGGING]
# Path to the dpl_rtd log file.
log_file_path=/var/log/doca/dpl_rt_service/dpl_rtd.log
# Log level for the DPL RT Service.
# Possible log_level values (case insensitive):
# DISABLE
# CRITICAL
# ERROR
# WARNING
# INFO
# DEBUG
# TRACE
log_level=INFO

[P4RT_RPC_SERVER]
server_address=[::] # IPv6 "ANY" allows IPv4 connections
server_tcp_port=9559
## To enable TLS authentication for the gRPC server connections, uncomment the following lines and provide the required info:
#server_cert=/path/to/server.crt
#ca_cert=/path/to/ca.crt
#server_key=/path/to/server.key

[DPL_ADMIN_RPC_SERVER]
server_address=[::] # IPv6 "ANY" allows IPv4 connections
server_tcp_port=9600
## To enable TLS authentication for the gRPC server connections, uncomment the following lines and provide the required info:
#server_cert=/path/to/server.crt
#ca_cert=/path/to/ca.crt
#server_key=/path/to/server.key

[DPL_NSPECT_RPC_SERVER]
server_address=[::] # IPv6 "ANY" allows IPv4 connections
server_tcp_port=9560
## To enable TLS authentication for the gRPC server connections, uncomment the following lines and provide the required info:
#server_cert=/path/to/server.crt
#ca_cert=/path/to/ca.crt
#server_key=/path/to/server.key

[DPL_PACKET_IO]
# Enable Packet IO processing.
enabled=true
# DPDK --log-level value to be used when working with Packet IO.
# Uncomment and set the desired log level, for example *:debug for high Debug level logs.
# See more details and possible values at https://doc.dpdk.org/guides/prog_guide/log_lib.html
#dpdk_log_level=*:debug

[DPL_SHM]
# Enable Shared Memory (SHM) infrastructure for High Update Rate tables support using dpl-rt-controller SDK.
enabled=true


System Configuration

Path: /etc/dpl_rt_service/system.conf.

This file handles hardware interaction tuning for the DPL Runtime Service.

Section LOGICAL_CORES

  • packet_io_lcores: Specifies the logical CPU core for Packet IO processing. Do not use lcore 0.

  • rule_insertion_lcores: Specifies the logical CPU core for rule insertions/deletions. Do not use lcore 0.

    Note

    This core must be different from cores used by any controller application utilizing the DPL Runtime Controller SDK.
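The constraints above (never lcore 0, and a rule-insertion core distinct from any controller-application cores) can be expressed as a small check. The function below is an illustrative assumption for pre-deployment validation, not part of the service.

```python
# Sketch enforcing the [LOGICAL_CORES] constraints described above:
# neither lcore may be 0, and the rule insertion lcore must not collide
# with cores used by controller applications. Illustrative only.

def check_lcores(packet_io_lcore, rule_insertion_lcore, controller_lcores=()):
    errors = []
    if packet_io_lcore == 0:
        errors.append("packet_io_lcores must not use lcore 0")
    if rule_insertion_lcore == 0:
        errors.append("rule_insertion_lcores must not use lcore 0")
    if rule_insertion_lcore in controller_lcores:
        errors.append("rule insertion lcore must differ from controller "
                      "application lcores")
    return errors
```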

Section HAL

This section configures Hardware Steering (HWS) settings.

Warning

Do not modify settings in this section unless instructed by NVIDIA Support. These settings directly impact performance characteristics such as rule update rates and latency.

  • queue_size: HWS queue size.

  • queues_num: Number of HWS queues for rule insertions/deletions.

  • burst_size: Burst size for HWS rule insertions/deletions.

Example System Configuration File

[LOGICAL_CORES]
# Logical CPU core for processing Packet IO (do not use lcore 0).
packet_io_lcores=1
# Logical CPU core for processing rule insertions/deletions (do not use lcore 0).
rule_insertion_lcores=2

[HAL]
queue_size=1024
queues_num=1
burst_size=32


© Copyright 2025, NVIDIA. Last updated on Nov 20, 2025