DOCA Documentation v3.1.0

DOCA Flow Tune Tool

DOCA Flow Tune is a powerful, one-stop-shop solution providing visibility and analysis capabilities for DOCA Flow programs.

Info

DOCA Flow Tune is supported at alpha level.

DOCA Flow Tune is a one-stop-shop solution which allows developers to visualize their traffic steering pipelines, monitor software Key Performance Indicators (KPIs) as well as hardware counters live, and gain valuable performance insights about their DOCA-Flow-based program.

DOCA Flow Tune is especially useful for the following scenarios:

  • Aiding developers during the development of their traffic steering pipeline by providing visualization of the pipeline, and later also performance insights about the designed pipeline

  • Aiding developers in pre-production environments by providing live monitoring of the performance indicators of the program on both software and hardware levels, and helping detect possible bottlenecks/critical paths so they are addressed before deployment to production environments

  • Aiding administrators in monitoring the program in production by providing live monitoring as well as high-rate hardware counters to be used when analyzing a possible deployment/setup issue

The tool operates in three distinct modes, Monitor, Analyze, and Visualize, which are presented in the following subsections.

Note

Collecting, analyzing, and displaying information from the analyzed DOCA-Flow-based program requires the explicit activation of the DOCA Flow Tune server by the target program. For more information about that, refer to the DOCA Flow Tune Server programming guide.

Monitor Mode Overview

This mode collects and displays both hardware counters and software KPIs in real time (as extracted from the running DOCA Flow program and the underlying setup), providing a comprehensive view of the system's performance:

[Figure: Monitor mode real-time view of hardware counters and software KPIs]

This information can also be exported to a CSV file for further analysis.

Info

For more information about this mode, please refer to section "Monitor Mode".

Info

For information about running DOCA Flow Tune in this mode, please refer to section "Monitor Command".


Analyze Mode Overview

Analyze mode supports dumping the internal steering pipeline state for use by Visualize mode.

Info

For more information about this mode, please refer to section "Analyze Mode".

Info

For more information about running DOCA Flow Tune in this mode, please refer to section "Analyze Command".


Visualize Mode Overview

This mode allows users to produce a graphical representation of their steering pipeline (as built using the DOCA Flow API), allowing developers to quickly understand their program's pipeline, and compare it with their intended architecture.

The following is an example from the DOCA IPsec Security Gateway Application Guide:

[Figure: Mermaid pipeline diagram of the DOCA IPsec Security Gateway application]

Mermaid Graph

The DOCA Flow Tune Mermaid graph visually represents the pipeline structure and metadata. It uses the following conventions (a minimal illustrative sketch follows this list):

  • Shapes:

    • Square: Denotes an input port or a root pipe. These are also outlined with a red frame for clarity.

    • Hexagon: Represents a pipe or an output port.

    • Arrow: Indicates a possible connection between pipes and ports.

  • Pipe Data Layers:

    Pipe metadata is organized into multiple visual layers. Layers can be stacked and selectively displayed when analyzing the pipeline.

    • Base Layer (default):

      • Includes general pipe attributes:

        • Name, label, type, domain

      • Mask, monitor configuration, and action data

    • Cost Layer:

      • Displays the minimum and maximum cost a packet can incur while traversing the pipe.

      • Overlays on top of the base layer.

    • Critical Path Layer:

      • Highlights the pipeline path with the highest total processing cost.

      • If multiple paths have the same cost, a single one is arbitrarily highlighted.

    • Resources Layer:

      • Shows resource utilization per pipe, including:

        • Number of counters

        • Number of meters

        • Action memory usage

        • RAM and hugepages consumption

      • This layer is an overlay on the base layer.
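As a minimal illustrative sketch only (the pipe names are hypothetical and the exact Mermaid syntax emitted by the tool may differ), the shape conventions above map to Mermaid along the following lines:

flowchart LR
    %% Squares: input ports and root pipes (the tool also outlines these with a red frame)
    P0[Port 0]
    ROOT[Root Pipe]
    %% Hexagons: pipes and output ports
    NEXT{{Next Pipe}}
    OUT{{Port 1}}
    %% Arrows: possible connections between pipes and ports
    P0 --> ROOT
    ROOT --> NEXT
    NEXT --> OUT
    style P0 stroke:red
    style ROOT stroke:red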

Info

For more information about this mode, please refer to section "Visualize Mode".

Info

For more information on running DOCA Flow Tune in this mode, please refer to section "Visualize Command".

DOCA Flow Tune depends on the following DOCA SDK libraries:

  • DOCA 3.0.0 and higher.

  • For optimal experience, it is recommended to comply with the prerequisites of all the listed dependencies, and especially with their recommended firmware versions.

To execute the DOCA Flow Tune tool:

Usage: doca_flow_tune [Program Commands] [DOCA Flags] [Program Flags]

Program Commands:
  analyze                 Run Flow Tune in Analyze mode
  monitor                 Run Flow Tune in Monitor mode
  visualize               Run Flow Tune in Visualize mode

DOCA Flags:
  -h, --help              Print a help synopsis
  -v, --version           Print program version information
  -l, --log-level         Set the (numeric) log level for the program <10=DISABLE, 20=CRITICAL, 30=ERROR, 40=WARNING, 50=INFO, 60=DEBUG, 70=TRACE>
  --sdk-log-level         Set the SDK (numeric) log level for the program <10=DISABLE, 20=CRITICAL, 30=ERROR, 40=WARNING, 50=INFO, 60=DEBUG, 70=TRACE>

Info

This usage printout can be printed to the command line interface (CLI) using the -h (or --help) option:


doca_flow_tune -h

The same applies for each of the tool's commands (and subcommands). For instance:


doca_flow_tune monitor -h

Monitor Command

The monitor command presents software KPIs and hardware counters. Each component offers various options, which can be specified in the configuration file under the monitor section, or through the CLI.

Usage: doca_flow_tune monitor [Program Commands] [DOCA Flags] [Program Flags]

Program Commands:
  background              Collect software key performance indicators and hardware counters on the background

DOCA Flags:
  -h, --help              Print a help synopsis
  -l, --log-level         Set the (numeric) log level for the program <10=DISABLE, 20=CRITICAL, 30=ERROR, 40=WARNING, 50=INFO, 60=DEBUG, 70=TRACE>
  --sdk-log-level         Set the SDK (numeric) log level for the program <10=DISABLE, 20=CRITICAL, 30=ERROR, 40=WARNING, 50=INFO, 60=DEBUG, 70=TRACE>

Program Flags:
  --enable-csv            Enable dumping data to CSV file
  --disable-csv           Disable dumping data to CSV file
  --csv-file-name         CSV file name to create
  --hw-profile            Register hardware profile {basic, full}
  --sw-profile            Register software profile
  -f, --cfg-file          JSON configuration file

Supported subcommands:

  • background – This subcommand allows performing CSV dumping without displaying the output on the screen. This is useful for scenarios where one wants to log counters without cluttering the terminal. It also supports high-rate dumping for hardware counters, which may be activated using the --high-rate flag.

    Usage: doca_flow_tune monitor background [DOCA Flags] [Program Flags]

    DOCA Flags:
      -h, --help            Print a help synopsis
      -l, --log-level       Set the (numeric) log level for the program <10=DISABLE, 20=CRITICAL, 30=ERROR, 40=WARNING, 50=INFO, 60=DEBUG, 70=TRACE>
      --sdk-log-level       Set the SDK (numeric) log level for the program <10=DISABLE, 20=CRITICAL, 30=ERROR, 40=WARNING, 50=INFO, 60=DEBUG, 70=TRACE>

    Program Flags:
      --high-rate           Enable dumping hardware counters data to CSV file in high rate
      --hw-profile          Register hardware profile {basic, full}
      --sw-profile          Register software profile

CLI Examples

  • To launch the monitor command with a given configuration file:


    doca_flow_tune monitor -f /tmp/flow_tune_cfg.json

  • To launch the monitor command with both a given configuration file and a CLI parameter for specifying the desired hardware counters profile:


    doca_flow_tune monitor -f /tmp/flow_tune_cfg.json --hw-profile basic

  • To launch the monitor command with the background subcommand and the request to perform a high-rate collection and export for the hardware counters:


    doca_flow_tune monitor -f /tmp/flow_tune_cfg.json background --high-rate

    Note

    The tool silently creates and updates the flow_tune.csv file.

Analyze Command

The analyze command runs a specified set of analysis methods over the target DOCA Flow program. The analysis supports exporting a JSON description of the steering pipeline, which is used by the visualize command and can also serve future analysis methods (both online and offline).

Usage: doca_flow_tune analyze export [DOCA Flags] [Program Flags]

DOCA Flags:
  -h, --help              Print a help synopsis
  -l, --log-level         Set the (numeric) log level for the program <10=DISABLE, 20=CRITICAL, 30=ERROR, 40=WARNING, 50=INFO, 60=DEBUG, 70=TRACE>
  --sdk-log-level         Set the SDK (numeric) log level for the program <10=DISABLE, 20=CRITICAL, 30=ERROR, 40=WARNING, 50=INFO, 60=DEBUG, 70=TRACE>

Program Flags:
  --file-name             File name on which the pipeline information will be saved
  -f, --cfg-file          JSON configuration file

Supported subcommands:

  • export – This subcommand exports the pipeline of a running DOCA Flow program into a JSON file. This file is the main input for other features of the tool, such as the graphical visualization.

    Note

    The export subcommand is currently mandatory.

CLI Examples

  • To launch the analyze command without a configuration file:


    doca_flow_tune analyze export

    The JSON file is stored in its default path.

  • To launch the analyze command with a given configuration file that specifies the desired values for all needed configurations:


    doca_flow_tune analyze export -f /tmp/flow_tune_cfg.json

  • To launch the analyze command with a configuration file while also configuring the output path for the exported JSON file through the CLI:


    doca_flow_tune analyze export -f /tmp/flow_tune_cfg.json --file-name my_program_pipeline_desc.json

    The exported pipeline is stored as my_program_pipeline_desc.json in the chosen/default output directory.

Visualize Command

The visualize command visualizes the steering pipeline of a given DOCA Flow program. The command takes a JSON file as input. This file can either be generated in advance by the analyze export command or queried dynamically from a running program, in which case the command dumps the pipeline from the program and then generates the visualization output file.

Info

The visualization output file is in Mermaid markdown format.

Info

This file can be fed to any of the widely available Mermaid visualization tools, as explained in depth in the corresponding section "Visualize Mode".

Usage: doca_flow_tune visualize [DOCA Flags] [Program Flags]

DOCA Flags:
  -h, --help              Print a help synopsis
  -l, --log-level         Set the (numeric) log level for the program <10=DISABLE, 20=CRITICAL, 30=ERROR, 40=WARNING, 50=INFO, 60=DEBUG, 70=TRACE>
  --sdk-log-level         Set the SDK (numeric) log level for the program <10=DISABLE, 20=CRITICAL, 30=ERROR, 40=WARNING, 50=INFO, 60=DEBUG, 70=TRACE>

Program Flags:
  --pipeline-desc         Input JSON file that represents the Flow application pipeline
  --file-name             File name on which the visualization information will be saved
  -f, --cfg-file          JSON configuration file
  --cost                  Show cost
  --critical-path         Show critical path
  --resources             Show resources
  --distribution          Show distribution

CLI Examples

  • Launching the visualize command without a configuration file triggers a live query of the pipeline from the currently running DOCA Flow program. In the absence of an input JSON file, the tool automatically generates and dumps a JSON representation of the active pipeline to be used for Mermaid diagram generation.


    doca_flow_tune visualize

    Example output:

    2025-04-09 15:06:44 - flow_tune - INFO - Pipeline description file not provided or doesn't exist, dumping pipeline description to /tmp/tmp_pipeline_desc.json
    2025-04-09 15:06:44 - flow_tune - INFO - Flow program pipeline information was exported to "/tmp/tmp_pipeline_desc.json"
    2025-04-09 15:06:44 - flow_tune - INFO - Mermaid graph exported to /tmp/flow_tune/flow_tune_pipeline_vis.md
    2025-04-09 15:06:44 - flow_tune - INFO - DOCA Flow Tune finished successfully

  • To launch the visualize command with a given configuration file that specifies the desired values for all needed configurations:


    doca_flow_tune visualize -f /tmp/flow_tune_cfg.json

  • To launch the visualize command with a configuration file while configuring the output path for the Mermaid file through the CLI and providing an offline pipeline file:


    doca_flow_tune visualize -f /tmp/flow_tune_cfg.json --file-name my_program_pipeline_viz.md --pipeline-desc my_program_pipeline_desc.json

    The exported Mermaid file is stored as my_program_pipeline_viz.md in the chosen/default output directory. Because the pipeline description file is explicitly provided, this command can be used offline, as it does not need a connection to the DOCA Flow program being visualized.

    Note

    To launch the visualize command with the --critical-path or --cost flags, the low_level_info setting must be enabled (true):

    • When using a configuration file, the low_level_info field must be set to true in the analyze section of the provided configuration file. For example:

      doca_flow_tune visualize --cost -f /tmp/flow_tune_cfg.json
      # flow_tune_cfg.json must include: "low_level_info": true

    • When using an offline pipeline JSON file, the file must have been generated with low_level_info enabled at the time of the dump:

      doca_flow_tune visualize --cost --pipeline-desc my_program_pipeline_desc.json
      # my_program_pipeline_desc.json must have been dumped with "low_level_info": true

DOCA Flow Tune has a configuration file which allows customizing various settings.

Info

The configuration file is divided into sections to simplify its usage.

Config File Default Values

If a configuration file is not provided, DOCA Flow Tune uses its default values for fields which are mandatory.

Info

A list of all default values can be seen in the appendix.

In Monitor mode, if querying software KPIs or hardware counters is not needed, removing the software or hardware section from the configuration file disables the respective feature.
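For example, a minimal sketch of a monitor section that queries only hardware counters (the PCIe addresses are placeholders; field names follow the configuration examples shown later in this guide):

{
    "monitor": {
        "hardware": {
            "pci_addresses": [ "b1:00.0", "b1:00.1" ],
            "profile": "basic"
        }
    }
}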

Custom Config File

Instead of using default configuration values, users can create a file of their own and provide a file path when running DOCA Flow Tune (-f/--cfg-file).

Once used, DOCA Flow Tune loads all provided values directly from the file, while the rest of the fields (if any) use their respective default values.
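As an illustrative sketch (the path and file name are placeholders), a custom file may provide only the fields of interest, with every omitted field falling back to its default:

{
    "outputs_directory": "/tmp/my_flow_tune/",
    "csv": {
        "enable": true,
        "file_name": "my_counters.csv"
    }
}

Running, for example, doca_flow_tune monitor -f <custom-file-path> then applies these values on top of the defaults.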

Overriding Config Values from CLI

Setting some of the configuration file fields is also supported through CLI flags (for example, --file-name). If used, the values provided from the CLI override the corresponding fields from the configuration file. This allows easier configuration of common values without creating a new custom file or modifying an existing one.

Common Configuration Values

Some sections of the configuration file are shared between multiple runtime modes of DOCA Flow Tune (i.e., Monitor, Analyze, Visualize) and generally concern the input/output file paths and the interaction with the live DOCA Flow program.

flow_tune_cfg.json

{
    ...
    "outputs_directory": "/tmp/flow_tune/",
    ...
    "network": {
        "server_uds": "/tmp/tune_server.sock",
        "uds_directory": "/var/run/doca/flow_tune/"
    },
    ...
}

Output Directory

outputs_directory defines the main directory in which all output products are saved. This field does not have a default value. If no value is provided, DOCA Flow Tune files are saved in the following directories:

  • CSV file – /var/log/doca/flow_tune/

  • Analyze export pipeline description file – /tmp/flow_tune/

  • Pipeline visualization file – /tmp/flow_tune/

Connection to DOCA Flow Tune Server

Some features of DOCA Flow Tune work by interacting with a live DOCA Flow based program. This is enabled through a server that is running in the background as part of the DOCA Flow library, and requires all the following to be applied:

  1. The DOCA-Flow-based program should explicitly enable the server.

    Info

    More information is available in the relevant DOCA Flow Tune Server programming guide.

  2. The DOCA-Flow-based program should run using the trace-enabled DOCA Flow library.

    Info

    More information is available in the "Debug and Trace Features" section of the DOCA Flow programming guide.

DOCA Flow Tune must be configured so that it can connect to the matching server. This can be done by modifying the following variables under the network section of the configuration file:

  • server_uds – DOCA Tune Server Unix Domain Socket (UDS) path. Default value is /tmp/tune_server.sock.

  • uds_directory – Directory in which all local UDSs are created. Default value is /var/run/doca/flow_tune/.

Hardware Counters

This table provides the supported hardware counters and their associated profiles.

Counter Name                | Description                                                                                                | Unit       | Basic Profile | Full Profile
RX Packet Rate              | The number of received packets per second                                                                  | pkt/sec    | Yes           | Yes
RX Bandwidth                | The data transfer rate based on the number of packets received per second                                  | Gb/s       | Yes           | Yes
RX Packet Average Size      | The average size of received data packets                                                                  | Bytes      | Yes           | Yes
TX Packet Rate              | The number of packets transmitted per second                                                               | pkt/sec    | Yes           | Yes
TX Bandwidth                | The data transfer rate based on the number of packets transmitted per second                               | Gb/s       | Yes           | Yes
TX Packet Average Size      | The average size of transmitted data packets                                                               | Bytes      | Yes           | Yes
RX SW Drops (1)(2)          | The number of dropped packets due to a lack of WQE for the associated QPs/RQs (excluding hairpin QPs/RQs)   | drops/sec  | Yes           | Yes
Hairpin Drops (3)(2)        | The number of dropped packets due to a lack of WQE for the associated hairpin QPs/RQs                       | drops/sec  | Yes           | Yes
RX HW Drops (4)             | The number of packets discarded due to no available data or descriptor buffers in the RX buffer             | drops/sec  | Yes           | Yes
ICM Cache Miss Rate         | The rate of data requests that miss in the ICM (interconnect context memory) cache                          | events/sec | No            | Yes
ICM Cache Miss per Packet   | The number of data requests that miss per packet                                                            | events/pkt | No            | Yes
PCIe Inbound Bandwidth (5)  | The number of bits received from the PCIe toward the device per second                                      | Gb/s       | No            | Yes
PCIe Outbound Bandwidth (5) | The number of bits transmitted from the device toward the PCIe per second                                   | Gb/s       | No            | Yes
PCIe AVG Read Latency (5)   | The average PCIe read latency for all read data                                                             | nsec       | No            | Yes
PCIe Max Latency (5)        | The maximum latency (in nanoseconds) for a single PCIe read from the device                                 | nsec       | No            | Yes
PCIe Min Latency (5)        | The minimum latency (in nanoseconds) for a single PCIe read from the device                                 | nsec       | No            | Yes
RX Hops Per Packet          | The number of hops per packet in the RX domain                                                              | hops/pkt   | No            | Yes
TX Hops Per Packet          | The number of hops per packet in the TX domain                                                              | hops/pkt   | No            | Yes

  1. If drops are observed, this may be because the software was unable to process all received packets. Consider reducing CPU processing time or increasing the number of utilized cores and queues.     

  2. Supported only on NVIDIA® ConnectX®-7 and above.            

  3. If drops are observed, the Tx packet processing is probably causing a bottleneck. Consider simplifying the process or adjusting the number or size of hairpin queues, or implementing locking mechanisms.     

  4. If drops are observed, the Rx packet processing is probably causing a bottleneck. Consider simplifying it.     

  5. PCIe counters are supported only on the host side.                                 

Software Key Performance Indicators

The following table lists the supported software Key Performance Indicators (KPIs) and their associated profiling categories:

Key Performance Indicator | Description                                                               | Units       | Profile
Insertion rate            | Number of successful table entry insertions per queue per second         | actions/sec | entries_ops_rates
Deletion rate             | Number of successful table entry deletions per queue per second          | actions/sec | entries_ops_rates
Action resources          | Number of resources utilized by the program for all user-defined actions | N/A         | action_resources

Action Resources Counters

The action_resources section displays the following per-port counters, including both the total allocated and currently used amounts:

Resource Counters:

  • Number of Shared Counters

  • Number of Counters

  • Number of Shared Meters

  • Number of Meters

  • Actions Memory

Used Memory Resources:

  • RAM

  • HugePages Memory

Configuration

CSV Format

The CSV format stores two types of rows, specific to each counter module:

  • Hardware Counter Rows (Module ID=0)

    Module ID | HW Counter ID | Counter Value | Timestamp
    0         | 1             | 8             | 142623139459
    0         | 2             | 197503959728  | 142623139459

    • Module ID – Hardware module identifier

    • HW Counter ID – Unique identifier for the hardware counter

    • Counter Value – Counter value

    • Timestamp – Hardware timestamp

    Hardware Counter ID Mapping

    HW Counter ID | Description                                                    | Units
    0             | Rate of RX packets on port 0                                   | Packets per second
    1             | Rate of RX packets on port 1                                   | Packets per second
    2             | RX bandwidth on port 0                                         | Gb/s
    3             | RX bandwidth on port 1                                         | Gb/s
    4             | Average RX packet size on port 0                               | Bytes
    5             | Average RX packet size on port 1                               | Bytes
    6             | Rate of TX packets on port 0                                   | Packets per second
    7             | Rate of TX packets on port 1                                   | Packets per second
    8             | TX bandwidth on port 0                                         | Gb/s
    9             | TX bandwidth on port 1                                         | Gb/s
    10            | Average TX packet size on port 0                               | Bytes
    11            | Average TX packet size on port 1                               | Bytes
    12            | ICMC misses rate                                               | Events per second
    13            | ICMC misses per packet                                         | Events per packet
    14            | The bandwidth of bytes received from PCIe toward the device    | Gb/s
    15            | The bandwidth of bytes transmitted from the device toward PCIe | Gb/s
    16            | The average PCIe read latency                                  | Nanoseconds
    17            | The total latency for all PCIe reads from the device           | Nanoseconds
    18            | The total number of PCIe packets                               | Events
    19            | The maximum latency for a single PCIe read from the device     | Nanoseconds
    20            | The minimum latency for a single PCIe read from the device     | Nanoseconds
    21            | RX software drops                                              | Drops per second
    22            | Hairpin drops                                                  | Drops per second
    23            | RX hardware drops                                              | Drops per second
    26            | RX hops per packet                                             | Hops per packet
    27            | TX hops per packet                                             | Hops per packet

  • Software KPI Rows (Module ID=1)

    Module ID | Port ID | SW Counter Type                               | Counter Value | Timestamp
    1         | 0       | Queue 0 insertion rate                        | 34511         | 1727345744137828
    1         | 1       | Queue 0 insertion rate                        | 37050         | 1727345755137828
    1         | 0       | Action resource 64B allocated shared counters | 4             | 1727345755137828
    1         | 0       | Action resource 64B used shared counters      | 3             | 1727345755137828

    • Module ID – Software module identifier

    • Port ID – Software port ID

    • SW Counter Type – KPI type

    • Counter Value – KPI value

    • Timestamp – Software timestamp
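Since the exported file is a plain CSV following the column layouts shown above, standard shell utilities can be used for quick offline filtering. The following sketch assumes the default flow_tune.csv file name, no header row, and the hardware-row column order Module ID, HW Counter ID, Counter Value, Timestamp; it prints timestamp/value pairs for HW Counter ID 8 (TX bandwidth on port 0):

awk -F',' '$1 == 0 && $2 == 8 { print $4 "," $3 }' flow_tune.csv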

Configuration File

DOCA Flow Tune's configuration file consists of two main parts of relevance for Monitor mode:

  • csv dump object

  • monitor configuration object

The following is an example for both sections:

flow_tune_cfg.json

{
    ...
    "csv": {
        "enable": false,
        "file_name": "flow_tune.csv",
        "max_size_bytes": 1000000,
        "max_files": 1
    },
    ...
    "monitor": {
        "screen_mode": "dark",              // modes: {light, dark}
        "hardware": {
            "pci_addresses": [ "b1:00.0", "b1:00.1" ],
            "profile": "full"               // profiles: {basic, full}
        },
        "software": [
            {
                "flow_port_id": 0,
                "profiles": [
                    "entries_ops_rates"     // profiles: {entries_ops_rates, action_resources}
                ]
            },
            {
                "flow_port_id": 1,
                "profiles": [
                    "entries_ops_rates"
                ]
            }
        ]
    }
    ...
}

CSV Configuration Section

CSV dumping allows exporting the hardware and software counters collected by the tool into a CSV file for further analysis or record keeping. This is particularly useful for logging performance metrics over time.

How to Enable CSV Dumping

To enable CSV dumping, modify the configuration in the JSON file as follows:

{
    "csv": {
        "enable": true,
        "file_name": "flow_tune.csv",
        "max_size_bytes": 1000000,
        "max_files": 1
    }
}

The supported fields are:

  • enable – Set to true to enable CSV dumping or false to disable it. Default value is false.

  • file_name – The name of the CSV file where the data will be saved.

  • max_size_bytes – The maximum size (in bytes) of the CSV file. Once this limit is reached, a new file is created based on the max_files setting.

  • max_files – The maximum number of CSV files to keep. When this limit is reached, the oldest files are deleted.

CSV dumping can also be enabled or disabled from the CLI using the --enable-csv or --disable-csv flags, respectively. For example:


doca_flow_tune monitor -f /tmp/flow_tune_cfg.json --enable-csv

Additionally, the CSV filename can be updated by using the --csv-file-name flag, for example:


doca_flow_tune monitor -f /tmp/flow_tune_cfg.json --csv-file-name "counters_dump.csv"

Monitor Configuration Section

Screen Mode

The Monitor module supports two screen modes: dark and light.

Hardware

The hardware section includes the pci_addresses and profile fields:

  • The pci_addresses field expects an array of PCIe addresses for NIC ports. The tool uses these addresses to retrieve the corresponding NIC device and the desired port IDs.

    Note

    PCIe addresses must belong to the same device.

    Info

    The tool supports up to two ports per device.

  • The profile field expects to receive either a basic or full profile.

    • basic profile – includes packet- and port-related counters (i.e., Bandwidth, Packets Per Second, Average Packet Size, Packet Drops)

    • full profile – includes all the basic counters and adds additional debug counters (e.g., ICMC and PCIe counters)

      Info

      For more information about the counters please refer to section "Hardware Counters".

The hardware counters profile can be set from the CLI by adding --hw-profile. For example:


doca_flow_tune monitor -f /tmp/flow_tune_cfg.json --hw-profile basic


Software

The software section includes the flow_port_id and profiles fields:

  • flow_port_id field – expects a single DOCA Flow port identification number. The flow port ID should be set by the DOCA Flow program by calling the doca_flow_port_cfg_set_devargs() API with a proper port ID string.

  • profiles field – accepts one or more of the supported profiling categories.

    • entries_ops_rates profile – includes both insertion and deletion rates KPIs

    • action_resources profile – provides data on allocated and in-use resources, including counters, meters, and actions memory.

The software profile can be set from the CLI by adding --sw-profile, for example:


doca_flow_tune monitor -f /tmp/flow_tune_cfg.json --sw-profile entries_ops_rates

Analyze mode gathers (and later analyzes) information to help users better understand and debug their DOCA-Flow-based program.

Pipeline Export

This tool exports the internal state of the DOCA-Flow-based program in a proprietary JSON format. This allows the tool to provide offline information about a given program, which can later be analyzed. One such example is the ability to visualize the pipeline of the target program without having said program run on real hardware.

While the pipeline export operation is meant to encode all relevant information for future analysis, the format itself is proprietary and is only meant to be consumed by other DOCA tools.

The doca_flow_tune visualize command generates a visual representation of a DOCA Flow pipeline using the Mermaid diagram format.

Supported Pipe Types

The following pipe types are supported in visualization mode:

  • Basic

  • Control

  • Hash

  • CT (partially supported):

    • Pipe-level FWD and FWD miss

    • Basic pipe data only

Viewing the Pipeline

Running the visualize command produces a Mermaid-format file representing the pipeline structure. Mermaid is a text-based diagramming language supported by many documentation tools and editors. One commonly used online editor is Mermaid Live, which can be used to render and inspect the generated pipeline diagram.

To view the pipeline:

  1. Open Mermaid Live.

  2. Copy the contents of the generated Mermaid file (flow_tune_pipeline_vis.md by default).

  3. Paste the contents into the editor to visualize the pipeline.

Visualization Layers

Each Mermaid diagram can include multiple layers, each offering a distinct view into pipeline characteristics. If no specific layer is selected, the base layer is used by default.

Base Layer

This layer visualizes the static configuration of the DOCA Flow pipeline, including:

  • Pipe node data:

    • Name

    • Label

    • Domain

    • Type

    • Mask configuration

    • Defined actions

    • Monitor attributes

  • Pipe link (arrow) data:

    • For Control pipes: matching values (e.g., * indicates match-all).

    • For other pipes: FWD_MISS links are marked accordingly.

[Figure: Base layer visualization example]

Resources Layer

Enabled via the --resources flag, this layer adds information about resource utilization:

  • Pipe node data:

    • Total number of actions in use by entries (all pipe types)

    • Resource breakdown per action (non-Control pipes only)

  • Resources visualized:

    • Counters

    • Meters

    • Actions memory

    • Entry count

    • RAM usage

    • Huge pages consumption

[Figure: Resources layer visualization example]

Cost Layer

Enabled via the --cost flag, this layer estimates the cache impact of processing packets through the pipeline:

  • Minimum and maximum estimated cache misses per pipe

  • Helps identify high-cost areas in the pipeline

[Figure: Cost layer visualization example]

Critical Path Layer

Enabled via the --critical-path flag, this layer highlights the pipeline path with the highest estimated processing cost (i.e., the path with the most cache misses). If multiple paths are tied, one is selected arbitrarily.

[Figure: Critical path layer visualization example]

Distribution Layer

Enabled via the --distribution flag, this layer simulates traffic flow through the pipeline using a PCAP input file (an example invocation is shown at the end of this section).

Requirements:

  • A PCAP file must be provided via the --pcap-file flag or configuration file

  • low_level_info must be set to true in the configuration if analyzing offline

Output data:

  • Pipe node data:

    • Distribution: Percentage of all packets that passed through this pipe

      • Example: Total: 62%

  • Pipe link data:

    • Distribution: Percentage of packets from the source pipe that used this link

      • Example: Pipe: 78%

[Figure: Distribution layer visualization example]

  • Match values for the following fields are not supported:

    • outer.l2_type

    • outer_l3_type

    • outer_l4_type

    • tun.gre.protocol

    • tun.key_present

  • The Distribution layer functions only when using supported match fields. Unsupported fields may result in inaccurate or incomplete packet flow analysis.
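Putting the above together, a possible invocation of the Distribution layer against a previously exported pipeline description might look like the following sketch (both file names are placeholders, and the exact flag set may vary by tool version):

doca_flow_tune visualize --distribution --pcap-file sample_pcap.pcap --pipeline-desc my_program_pipeline_desc.json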

Actions Memory Calculation

The DOCA Flow API requires the application to configure the maximum actions memory using the doca_flow_port_cfg_set_actions_mem_size() function. There are two strategies for determining the optimal value:

Measured Strategy

  • Configure the application with the maximum supported actions memory.

  • Run the application under expected load.

  • Execute the Flow Tune tool in Monitor mode.

  • Use the reported memory usage from the tool to determine the optimal actions memory size.

  • Update the application configuration to match the measured usage, ensuring compliance with the DOCA Flow API.

[Figure: Measured strategy workflow]

Calculated Strategy

  • Start with a minimal actions memory size in the application configuration.

  • After pipe creation (but before adding entries), run the Flow Tune tool in Visualization mode with the Resources Layer enabled.

  • For each pipe:

    • Multiply the expected number of entries by the reported actions memory per entry.

  • Sum the memory requirements across all pipes to obtain the total actions memory needed (a worked example follows the figure below).

  • Set this value in the application using the appropriate DOCA Flow API.

[Figure: Calculated strategy workflow]
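As a worked example of the calculation above (the numbers are illustrative only): if the Resources layer reports 64 B of actions memory per entry for a pipe expected to hold 10,000 entries, that pipe needs roughly 640 KB. Repeating this per pipe and summing the results yields the total value to configure through doca_flow_port_cfg_set_actions_mem_size().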

Counter Observation

Use the tool in Monitor mode to observe hardware and software counters in real time. The tool provides per-second updates on:

  • Entry insertion and deletion rates

  • Memory and resource utilization

  • Other performance metrics

This helps identify performance bottlenecks and track system behavior under load.

Program Pipeline Visualization

By running the tool in Visualization mode, the developer can:

  • Generate a graphical representation of the DOCA Flow program pipeline

  • Understand the packet processing path and logic

  • Debug and validate flow construction

  • Optimize the pipeline for performance and correctness

Identifying Bottleneck Pipes

Running the tool in Visualization mode with the --cost and --critical-path flags enables detection of performance-critical paths:

  • Highlights pipes with the highest cache miss estimates

  • Marks the most critical path (i.e., the slowest or most resource-intensive)

  • Helps developers focus optimization efforts on the slowest segments of the pipeline

Telemetry fwctl driver is not loaded

Error

When running DOCA Flow Tune in Monitor mode, the following log messages are encountered at startup:

[DOCA][WRN][priv_doca_telemetry_fwctl.cpp:121][priv_doca_telemetry_fwctl_find_device_by_pci] Failed finding fwctl device: Opening directory /sys/class/fwctl/ failed. Make sure you have the fwctl driver loaded
[DOCA][ERR][priv_doca_telemetry_fwctl.cpp:201][priv_doca_telemetry_fwctl_open_by_devinfo] devinfo 0x55c572286520: Failed to open fwctl device: Failed to find matching fwctl device


Solution

The DOCA Telemetry SDK uses the fwctl driver to query the hardware counters, so it is essential to have it installed and loaded.

Step 1: Verify the Driver Installation

First, check if the driver is installed as follows:

  • Debian/Ubuntu:


    $ sudo apt list --installed | grep fwctl

  • RHEL:


    $ sudo yum list installed | grep fwctl

If the driver is not installed, install it by running the following commands:

  • Debian/Ubuntu:

    $ sudo apt search fwctl
    >> <fwctl-package-name>/....
    $ sudo apt install -y <fwctl-package-name>

  • RHEL:

    $ sudo yum search fwctl
    >> <fwctl-package-name>/....
    $ sudo yum install -y <fwctl-package-name>

Step 2: Check if the Driver is Loaded

After installing the driver, verify that it is loaded by executing:


$ sudo lsmod | grep fwctl

You should see output similar to:

> mlx5_fwctl             20480  0
> fwctl                  16384  1 mlx5_fwctl
> mlx5_core            2134016  2 mlx5_fwctl,mlx5_ib
> mlx_compat             69632 14 rdma_cm,ib_ipoib,mlxdevm,mlxfw,mlx5_fwctl,iw_cm,ib_umad,fwctl,ib_core,rdma_ucm,ib_uverbs,mlx5_ib,ib_cm,mlx5_core

If the driver is not loaded, load it by running:


$ sudo modprobe mlx5_fwctl
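Optionally, to have the driver loaded automatically after a reboot, the standard Linux modules-load.d mechanism can be used (shown here as a general sketch, not a DOCA-specific requirement):

$ echo mlx5_fwctl | sudo tee /etc/modules-load.d/mlx5_fwctl.conf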

Mermaid visualization in Visual Studio Code

Visual Studio Code provides extensions for viewing the Mermaid markdown format. These extensions can be used to view the Mermaid output from the DOCA Flow Tune tool.

However, for these extensions to work, the Mermaid file must be modified with Mermaid opening and closing lines as follows:

```mermaid
<original_mermaid_file_content>
```
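As a convenience sketch, the fencing can also be added from the shell (the file names below follow the defaults used elsewhere in this guide):

$ { echo '```mermaid'; cat /tmp/flow_tune/flow_tune_pipeline_vis.md; echo '```'; } > flow_tune_pipeline_vis_vscode.md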


Limited Feature Set – Could Not Detect Running DOCA Flow Program

Error

When running DOCA Flow Tune, the following log message is encountered at startup, followed by some features failing to work/load:


[DOCA][WRN][flow_tune.cpp:195][get_flow_app_data] Could not detect a running DOCA Flow program, some features will be impacted


Solution

Some features of DOCA Flow Tune work by interacting with a live DOCA-Flow-based program. This is enabled through a server running in the background as part of the DOCA Flow library, and requires all of the following to be applied:

  • The DOCA-Flow-based program should explicitly enable the server. More information is available in the DOCA Flow Tune Server programming guide.

  • The DOCA-Flow-based program should run using the "trace enabled" DOCA Flow library. More information is available in the "Debug and Trace Features" section of the DOCA Flow programming guide.

visualize --cost or --critical-path Not Working

Error

When running DOCA Flow Tune in visualize mode with the --cost or --critical-path flags, the following error may occur:

2025-04-14 12:24:20 - flow_tune - ERROR - Low level info need to be enabled to visualize cost or critical path
2025-04-14 12:24:20 - flow_tune - ERROR - Failed to export mermaid graph
2025-04-14 12:24:20 - flow_tune - ERROR - DOCA Flow Tune failed during "visualize" command: Invalid input
2025-04-14 12:24:20 - flow_tune - ERROR - DOCA Flow Tune finished with errors


Solution

The --cost and --critical-path options require low-level pipeline information, which is not enabled by default.

To resolve this issue, ensure that "low_level_info": true is specified in one of the following ways:

  1. Option 1 – configuration file:


    doca_flow_tune visualize --cost -f /tmp/flow_tune_cfg.json

    Make sure the config file (flow_tune_cfg.json) includes:

    "analyze": {
        "low_level_info": true
    }

  2. Option 2 – offline pipeline description:


    doca_flow_tune visualize --cost --pipeline-desc my_program_pipeline_desc.json

    Ensure that my_program_pipeline_desc.json was originally exported with "low_level_info": true enabled during analysis or dump.

flow_tune_cfg.json

{
    "outputs_directory": "/tmp/flow_tune/",
    "logging": {
        "developer_log": "/var/log/doca/flow_tune/flow_tune_dev.log",
        "operational_log": "/var/log/doca/flow_tune/flow_tune.log"
    },
    "network": {
        "server_uds": "/tmp/tune_server.sock",
        "uds_directory": "/var/run/doca/flow_tune/"
    },
    "csv": {
        "enable": false,
        "file_name": "flow_tune.csv",
        "max_size_bytes": 1000000000,
        "max_files": 1
    },
    "analyze": {
        "file_name": "flow_tune_pipeline_desc.json",
        "low_level_info": false
    },
    "visualize": {
        "pipeline_desc_file": "/tmp/flow_tune/flow_tune_pipeline_desc.json", // Non-mandatory field
        "file_name": "flow_tune_pipeline_vis.md",
        "pcap_file": "sample_pcap.pcap"
    },
    "monitor": {
        "screen_mode": "light",
        "hardware": {
            "pci_addresses": [ "08:00.0", "08:00.1" ],
            "profile": "full"
        },
        "software": [
            {
                "flow_port_id": 0,
                "profiles": [ "entries_ops_rates", "action_resources" ]
            },
            {
                "flow_port_id": 1,
                "profiles": [ "entries_ops_rates", "action_resources" ]
            }
        ]
    }
}

Where:

  • outputs_directory – Main directory in which all output products are saved. This field does not have a default value. If no value is provided, DOCA Flow Tune files are saved in the following directories:

    • CSV file – /var/log/doca/flow_tune/

    • Analyze export pipeline description file – /tmp/flow_tune/

    • Pipeline visualization file – /tmp/flow_tune/

  • logging – Log file locations.

    • developer_log – Developer-level log file.

    • operational_log – User-level log file.

  • network

    • server_uds – DOCA Tune Server Unix Domain Socket (UDS) path. Default value is /tmp/tune_server.sock.

    • uds_directory – Directory in which all local UDSs are created. Default value is /var/run/doca/flow_tune/.

  • csv

    • enable – true if information should be saved into a CSV file. Default value is false.

    • file_name – CSV filename. Default value is flow_tune.csv.

    • max_size_bytes – CSV file maximum size in bytes. When the limit is reached, a new file is created. Default value is 1 GB (1000000000 bytes).

    • max_files – Maximum CSV files to create. Default value is 1.

  • analyze

    • file_name – Flow program pipeline description filename. File is created under outputs_directory path. Default value is flow_tune_pipeline_desc.json.

    • low_level_info – Whether the pipeline description JSON file should include low-level information. Default value is false.

  • visualize

    • pipeline_desc_file – Flow program pipeline description input file path. This file is the product of the analyze export command. If not provided, the visualize command triggers its creation (a run of analyze export).

    • file_name – Flow program pipeline visualization filename. File is created under the outputs_directory path. Default value is flow_tune_pipeline_vis.md.

    • pcap_file – PCAP file to use when running distribution algorithm.

  • monitor

    • screen_mode – Monitor command theme to be used. Default value is light.

    • hardware

      • pci_addresses – List of PCIe addresses which DOCA Flow Tune should inspect.

      • profile – Hardware profile to be used for each PCIe address given. Default value is full.

    • software

      • flow_port_id – Flow program port identification number which DOCA Flow Tune should inspect.

      • profiles – List of software profiles to be used for the specific port identification number given. Default value is [entries_ops_rates].

© Copyright 2025, NVIDIA. Last updated on Aug 25, 2025.