NVIDIA DOCA gRPC Infrastructure User Guide

This guide provides an overview and configuration instructions for gRPC-supported DOCA infrastructure, programs, and services for NVIDIA® BlueField® DPU.

Most DOCA programs are best deployed on the DPU itself. However, in some cases it is useful to manage and configure the programs running on the DPU directly from the host (x86).

[Figure: DOCA gRPC infrastructure overview]

For this purpose, DOCA includes built-in gRPC support, thereby allowing a program to expose to the host its logical interface in the form of a gRPC-equivalent API. This guide elaborates on the different components that enable this support, including instructions for developers who wish to create similar gRPC extensions to their own DOCA-based programs.

Refer to the NVIDIA DOCA Installation Guide for Linux for details on how to install BlueField related software.

gRPC is Google's open-source remote procedure call (RPC) library and is the most widely used RPC solution.

gRPC consists of two layers:

  • Protobuf – Google library for semantically defining message formats

  • gRPC "services" – definitions of the exposed RPC functionality

gRPC's support for different language bindings, combined with the unified protocol implementation, allows the client and server to run on different machines using different programming languages.
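For illustration only (this is not the actual interface of any DOCA program), a Protobuf message format together with a gRPC service definition might look like this:

```protobuf
syntax = "proto3";

// Hypothetical message formats, defined semantically with Protobuf.
message FlowRequest {
  string name = 1;
  uint32 port_id = 2;
}

message FlowReply {
  bool success = 1;
}

// Hypothetical gRPC "service" definition exposing the RPC functionality.
service FlowControl {
  rpc AddFlow (FlowRequest) returns (FlowReply);
}
```

From a definition like this, the gRPC toolchain generates matching client and server code in each supported language, which is what lets the host-side client and DPU-side server be written independently.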

It is important to differentiate between and separate the program's data path and the gRPC management interface. The following figure contains an overview of a sample setup for a bump-on-the-wire program:

[Figure: sample OVS setup for a bump-on-the-wire program]

As can be seen above, the program is a bump-on-the-wire, and the gRPC-related traffic flows through a separate OVS bridge connected to the host using a virtual function (VF). More information about VFs and how to configure them can be found in the NVIDIA DOCA Virtual Functions User Guide.

The architecture above allows us to associate a network address to SF1, effectively making the program's gRPC server part of an IP network with the host. This allows for an easy client-server setup, masking away the hardware details from the logical gRPC interface.

Some of DOCA's reference applications have integrated gRPC support and require a gRPC development setup for recompilation. The reference applications provide a meson_options.txt file with a flag that controls gRPC support; it is set to false by default:

option('enable_grpc_support', type: 'boolean', value: false, description: 'Enable all gRPC based DOCA applications.')

Once a gRPC setup is installed, according to the instructions in the next section, the meson_options.txt file can be updated to enable gRPC support so as to allow for recompilation of the gRPC-enabled applications.

option('enable_grpc_support', type: 'boolean', value: true, description: 'Enable all gRPC based DOCA applications.')

Installing gRPC Development Setup

Rebuilding a gRPC-enabled DOCA application on BlueField requires a gRPC development setup, which is provided as part of DOCA's development packages. Because gRPC lacks packaging support, at least for C/C++ environments, it is compiled from source and installed at /opt/mellanox/grpc. During compilation, Meson verifies that the required setup exists.

Note

To troubleshoot a requirement that is reported to be missing by Meson, refer to the NVIDIA DOCA Troubleshooting Guide.

If a different gRPC version is needed, follow Google's instructions for creating a development setup. If a different gRPC setup is used, remember to update the necessary environment variables to point at your installation directory.

Running Program Server on BlueField

The gRPC-enabled program is equivalent to the "regular" DOCA program as it:

  • Requires the same configuration steps (huge pages, etc.)

  • Requires the same program command line arguments

The only difference is that the program also requires one more command line argument:

doca_<program_name>_grpc [DPDK flags] -- [DOCA flags] [Program flags] -g/--grpc-address <ip-address[:port]>

  • ip-address – IP address to be used by the gRPC server

  • port – TCP port to be used by the server instead of the program's default gRPC port (optional)
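The -g/--grpc-address argument combines an IP address with an optional TCP port. A minimal sketch of how such an argument can be split is shown below; note that the default port value here is a placeholder, since each DOCA program defines its own default gRPC port:

```python
def parse_grpc_address(address, default_port=50051):
    """Split an "ip[:port]" string into its host and port parts.

    default_port is a placeholder; each DOCA program defines its
    own default gRPC port.
    """
    host, sep, port = address.partition(":")
    return host, int(port) if sep else default_port

# The port part is optional and falls back to the default.
print(parse_grpc_address("192.168.103.2"))       # ('192.168.103.2', 50051)
print(parse_grpc_address("192.168.103.2:1234"))  # ('192.168.103.2', 1234)
```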

A JSON configuration file can also be used:

doca_<program_name>_grpc -j/--json <path to grpc configuration json file>

For more information, refer to DOCA Arg Parser.
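As a rough sketch of how such a file is consumed, the snippet below writes and reads a minimal JSON configuration. The key name used here is purely hypothetical; the real schema is defined by the DOCA Arg Parser:

```python
import json
import tempfile

# Write a minimal, hypothetical JSON configuration file. The
# "grpc-address" key is a placeholder, not the documented schema.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"grpc-address": "192.168.103.2:1234"}, f)
    path = f.name

# The program would then be launched with -j/--json pointing at this file.
with open(path) as f:
    config = json.load(f)
print(config["grpc-address"])  # 192.168.103.2:1234
```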

Running Program Client on Host

While the gRPC Python environment is already installed on the host as part of the DOCA installation, it has not been added to the default Python path so as to not clutter it. The environment variable definitions needed for using the gRPC Python environment, as needed by the client, are:

export PYTHONPATH=${PYTHONPATH}:/opt/mellanox/grpc/python3/lib
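Equivalently, a client script can append the path itself before importing the gRPC modules, which avoids relying on the shell environment:

```python
import sys

# Same effect as the PYTHONPATH export, done from inside the client script.
GRPC_LIB = "/opt/mellanox/grpc/python3/lib"
if GRPC_LIB not in sys.path:
    sys.path.append(GRPC_LIB)

print(GRPC_LIB in sys.path)  # True
```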

To run the Python client of the gRPC-enabled application:

doca_<program_name>_gRPC_client.py -d/--debug <server address[:server port]>


Installing gRPC on RHEL/CentOS 7.6 and Older

On RHEL/CentOS distributions 7.6 and older on x86_64 hosts, there is a known issue in the Python grpcio package that causes the following error:

from grpc._cython import cygrpc as _cygrpc
ImportError: /opt/mellanox/grpc/python3/lib/grpc/_cython/cygrpc.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZSt24__throw_out_of_range_fmtPKcz

To fix this error, run the following commands on the host:

rm -rf /opt/mellanox/grpc/python3/lib/grpc*
wget https://files.pythonhosted.org/packages/67/3c/53cc28f04fb9bd3e8bad6fa603aa8b01fa399dd74601ab0991e6385dbfe4/grpcio-1.39.0-cp36-cp36m-manylinux2010_x86_64.whl -P /tmp && unzip /tmp/grpcio-1.39.0-cp36-cp36m-manylinux2010_x86_64.whl -d /opt/mellanox/grpc/python3/lib


A gRPC-enabled program must first be executed on BlueField for it to be managed from the host. This creates a bootstrapping issue that is solved by the DOCA gRPC Orchestrator, as can be seen in the following figure:

[Figure: DOCA gRPC Orchestrator setup]

After a one-time configuration step, the DOCA gRPC Orchestrator daemon runs on BlueField, listening for incoming requests from the host to start or stop a given gRPC-enabled DOCA program.

Enabling and Configuring DOCA gRPC Orchestrator Daemon

The doca_grpc daemon on the DPU starts as "disabled" by default and has a one-time configuration step for enabling it:

# One-time only, enable the DOCA gRPC Orchestrator daemon
systemctl enable doca_grpc.service

# One-time only, start the daemon
systemctl start doca_grpc.service

The daemon is controlled via a configuration file stored at /etc/doca_grpc/doca_grpc.conf.

Warning

Update the configuration file to reflect the desired deployment parameters. Starting the daemon with the default file as-is does not spawn any server instance.

The configuration file defines the configurations for every DOCA gRPC server instance that the daemon spawns, alongside the list of programs it exposes to the host. Accordingly, the file comes prepopulated with a list of all gRPC-enabled DOCA programs and can be modified to support additional proprietary programs. For more information regarding the structure of the file, read the full instructions in the file itself: /etc/doca_grpc/doca_grpc.conf.

Once the configuration file has been modified to suit the requested deployment, restart the daemon so that it picks up the new configuration:

systemctl restart doca_grpc.service


Running DOCA gRPC Orchestrator Client

The DOCA gRPC Orchestrator client is located at /opt/mellanox/doca/infrastructure/doca_grpc/orchestrator.

Warning

Because this is a Python client, the same Python environment variable mentioned earlier for the application's gRPC client is needed for this client as well.

The usage instructions for the DOCA gRPC client are:

Usage: doca_grpc_client.py [OPTIONS] SERVER_ADDRESS COMMAND [ARGS]...

  DOCA gRPC Client CLI tool

Options:
  -d, --debug
  --help       Show this message and exit.

Commands:
  create   Create PROGRAM_NAME [PROGRAM_ARGS]...
  destroy  Destroy PROGRAM_UID
           Terminate the execution of the program...
  list     List the names of gRPC-supported programs.

For example:

/opt/mellanox/doca/infrastructure/doca_grpc/orchestrator/doca_grpc_client.py 192.168.103.2:1234 list

The supported commands are:

  • list – prints a list of names of all DOCA gRPC-enabled programs currently supported

  • create – spawns a gRPC-enabled program on the DPU based on its name and arguments

  • destroy – terminates the execution of a gRPC-enabled program based on the program's UID as returned from the "create" command

The SERVER_ADDRESS argument is of the form <server address[:server port]>, allowing the client to use a TCP port other than the default one if the server uses a proprietary port.
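The create/destroy workflow above can be sketched as a small bookkeeping loop: "create" returns a program UID, which "destroy" later consumes. The orchestrator assigns the real UIDs; uuid4() here is only a stand-in for illustration:

```python
import uuid

# Maps UIDs (as returned by "create") to the spawned program's details.
running = {}

def create(program_name, *program_args):
    """Sketch of "create": spawn a program and hand back its UID."""
    uid = str(uuid.uuid4())  # placeholder for the orchestrator's real UID
    running[uid] = (program_name, program_args)
    return uid

def destroy(uid):
    """Sketch of "destroy": terminate the program identified by its UID."""
    return running.pop(uid, None)

uid = create("doca_flow_grpc", "-l", "60")
destroy(uid)
print(len(running))  # 0
```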

The command-specific options are shown when passing the --help flag to the respective command:

/opt/mellanox/doca/infrastructure/doca_grpc/orchestrator/doca_grpc_client.py 192.168.103.2 destroy --help

The gRPC-specific command line arguments (listed under Running Program Server on BlueField) are only needed when invoking the gRPC-enabled program directly. When invoked through the DOCA gRPC client, the arguments must match those of the "regular" DOCA program, and the gRPC-specific flags should not be used.

Warning

The orchestrator client supports the option to spawn a gRPC-enabled program using a non-default port. This option is mandatory for programs that do not support a default gRPC port.


Running gRPC-Enabled DOCA Libs

All servers for gRPC-enabled DOCA libraries accept the following arguments:

Usage: doca_<lib_name>_grpc [DPDK Flags] -- [DOCA Flags]

DOCA Flags:
  ...
  -g, --grpc-address ip_address[:port]
          Set the IP address for the grpc server
  ...

Therefore, they should be invoked using the orchestrator client as follows:

/opt/mellanox/doca/infrastructure/doca_grpc/orchestrator/doca_grpc_client.py 192.168.103.2 create doca_flow_grpc -a auxiliary:mlx5_core.sf.2,dv_flow_en=2 -a auxiliary:mlx5_core.sf.3,dv_flow_en=2 -- -l 60

Note

The -g flag is not passed, as it is handled automatically by the orchestrator client.


Running DOCA Program gRPC Client

Once the program's gRPC server is spawned on BlueField, you can connect to it directly from the host using the respective gRPC client.

If the program's gRPC server is configured to use a TCP port that is not the default port of the program, make sure to also configure the gRPC client to use the same non-default port.

© Copyright 2023, NVIDIA. Last updated on Feb 9, 2024.