PVA SDK Samples#
The PVA SDK includes samples that demonstrate how to write complete applications using the cuPVA runtime APIs, and how to build PVA applications using the build tools included with the SDK.
Overview#
Sample applications are installed in /opt/nvidia/pva-sdk-2.7/samples, with each sample in a separate subdirectory.

Executing some of the samples requires assets. For convenience, sample assets are provided in the directory /opt/nvidia/pva-sdk-2.7/samples/assets. The assets directory should be deployed with the sample, and its path specified with --assets <PATH_TO_ASSETS>.
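For example, a sample that requires assets might be invoked as follows (the binary name and working directory are illustrative; substitute the sample you built):

```sh
# Illustrative invocation; run from the directory containing the built sample binary
./tone_mapping --assets /opt/nvidia/pva-sdk-2.7/samples/assets
```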
Note

The assets path for tutorials is /opt/nvidia/pva-sdk-2.7/samples/tutorial/assets. The majority of tutorials require data from this directory. Refer to Step-by-Step Tutorials for more information.
The following samples are provided:

| Sample | Requires `--assets <PATH_TO_ASSETS>`? | Description |
|---|---|---|
| c_api_interop | No | Shows how to convert cuPVA C++ API objects to C API objects. This is convenient for applications where the definition of the PVA workload can benefit from the cuPVA C++ APIs, but it is desirable to provide a pure C interface to an external component that handles scheduling of those workloads. |
| convolution_cb | No | Shows how to construct and submit a simple 2D filter workload with cuPVA APIs. |
| delay | No | Shows a workload that spins a VPU in a busy loop for a set number of cycles. |
| device_test | No | Shows usage of the PVA SDK's device function unit testing framework. This enables test applications to call device functions directly without needing to use cuPVA host APIs to explicitly schedule a workload. |
| fft | No | Shows how to use the PVA to compute discrete Fourier transforms. |
| image_pyramid | No | Shows how to compute a 'pyramid' of successively lower-resolution output images from a given input image. |
| mat_add | No | Shows how to add two 2D arrays and return the result. |
| mat_transpose | No | Shows how to transpose matrices. The DMA engine transposes tiles within an image, while the VPU transposes each pixel within the tile. |
| nvmedia_interop | No | Shows how to construct a zero-copy pipeline involving NvMedia and PVA. |
| pipeline | No | Shows how to use cuPVA APIs to schedule pipelines with complex dependencies across multiple VPUs. |
| planar_to_interleaved4 | No | Shows how to convert planar image data to interleaved format. |
| tone_mapping | Yes | Shows how to apply tone mapping to an input image. |
| tutorial | No | Contains all of the code used by the Tutorials. |
| vpu_printf | No | Shows how to use printf from VPU code. |
| warp_gsdf | Yes | Shows how to use a gather/scatter data flow (GSDF) to warp an image. |
Building Samples#
Environment Setup#
Most users do not have write access to the default sample installation directory, which makes it difficult to build or experiment with sample modifications in place. For convenience, the PVA SDK includes a simple script that copies the samples to a directory where you have write access.
To copy the samples to your home directory, enter the following command on a terminal:
```sh
/opt/nvidia/pva-sdk-2.7/bin/pva-sdk_install_samples.sh $HOME
```
This copies the samples to the directory $HOME/NVIDIA_PVA_SDK-2.7.1-samples.
Before building, make sure you have followed the instructions for installing prerequisites: Install Prerequisites.
Some of the samples/tutorials distributed with the PVA SDK have dependencies on NvSci and/or NvMedia. The following environment variables should be set so that the build scripts can find these dependencies; example values are provided for the DRIVE OS Docker container. If these variables are not set, building of the affected samples/tutorials is skipped.
| Environment Variable | Example for DRIVE OS 7.0.1.0 Docker container |
|---|---|
| NVMEDIA_INCLUDE_PATH | /drive/drive-linux/include/nvmedia_6x |
| NVMEDIA_LIBRARY_PATH | /drive/drive-linux/lib-target |
| NVSCI_INCLUDE_PATH | /drive/drive-linux/include |
| NVSCIBUF_LIBRARY_PATH | /drive/drive-linux/lib-target |
| NVSCISYNC_LIBRARY_PATH | /drive/drive-linux/lib-target |
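For example, in a DRIVE OS 7.0.1.0 Docker container, the variables from the table above can be exported before configuring the build (adjust the paths for your installation):

```sh
# Paths from the DRIVE OS 7.0.1.0 Docker container; adjust for your installation
export NVMEDIA_INCLUDE_PATH=/drive/drive-linux/include/nvmedia_6x
export NVMEDIA_LIBRARY_PATH=/drive/drive-linux/lib-target
export NVSCI_INCLUDE_PATH=/drive/drive-linux/include
export NVSCIBUF_LIBRARY_PATH=/drive/drive-linux/lib-target
export NVSCISYNC_LIBRARY_PATH=/drive/drive-linux/lib-target
```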
Note

Samples and tutorials distributed with the PVA SDK use CMake to locate CUDA, which may require nvcc to be present in your PATH environment variable. If CMake is unable to locate an appropriate version of the CUDA toolkit, you may need to run export PATH=$PATH:/usr/local/cuda/bin or similar.
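A quick way to check is to confirm that nvcc resolves from your shell (the path below assumes a default CUDA toolkit installation):

```sh
# Make the CUDA toolkit visible to CMake, then confirm nvcc is found
export PATH=$PATH:/usr/local/cuda/bin
nvcc --version
```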
Building for Native Model#
The PVA SDK includes support for a model of the PVA which runs on the host x86 machine, known as the native model. The native model allows users to build and test the functionality of their application in their development environment without deploying to the target. Use of the native model does not require users to purchase ASIP Programmer licenses.
To build samples in native mode, navigate to a sample directory (to build a single sample) or to the top-level samples directory (to build all samples), and follow the instructions in the Native model documentation.
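As a rough sketch, the native build follows the same CMake pattern as the platform builds below; the exact PVA_BUILD_MODE value is an assumption here, and the Native model documentation is authoritative:

```sh
# Sketch only: assumes a NATIVE build mode analogous to the L4T/QNX modes below;
# consult the Native model documentation for the exact flags
mkdir build_native && cd build_native
cmake -DPVA_BUILD_MODE=NATIVE ..
make
```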
Linux Platforms#
The instructions below apply when building for DRIVE OS Linux or Jetson. An environment variable must be set to provide ASIP licensing information.
The build can be performed from within a sample directory (to build a single sample) or from the top-level samples directory (to build all samples):
```sh
export SNPSLMD_LICENSE_FILE=<TCP_PORT>@<LICENSE_SERVER>
mkdir build_l4t && cd build_l4t
cmake -DPVA_BUILD_MODE=L4T ..
make
```
Note

Depending on your environment, some changes to the above may be necessary:

- If using a node-locked license, SNPSLMD_LICENSE_FILE may be omitted.
- As an alternative to SNPSLMD_LICENSE_FILE, the user can specify LM_LICENSE_FILE. Consult the FlexLM documentation for more information.
- If not using the ASIP Programmer packages provided by NVIDIA, PVA_GEN<X>_ASIP_PATH environment variables should be set for each PVA generation (see the sketch below).
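For instance, a custom ASIP Programmer installation might be pointed to as follows; the generation number and path are hypothetical:

```sh
# Hypothetical values: substitute the PVA generation and installation path
# matching your ASIP Programmer packages
export PVA_GEN2_ASIP_PATH=/opt/asip/pva_gen2
```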
QNX Platforms#
The instructions below apply when building for DRIVE OS QNX Standard or Safety. An environment variable must be set to provide ASIP licensing information.
Note

Builds use a non-safety configuration by default. To link to the safety versions of the cuPVA libraries intended for safety-certified DRIVE OS installations, add the CMake flag -DPVA_SAFETY=ON (an example follows the build commands below).
The procedure is similar to building for Linux platforms; however, some QNX toolchain environment variables are also required:
```sh
# QNX_HOST/QNX_TARGET can alternatively be set by standard scripts usually packaged with QCC
export QNX_HOST=<QNX_BASE>/host/linux/x86_64
export QNX_TARGET=<QNX_BASE>/target/qnx
export SNPSLMD_LICENSE_FILE=<TCP_PORT>@<LICENSE_SERVER>
mkdir build_qnx && cd build_qnx
cmake -DPVA_BUILD_MODE=QNX ..
make
```
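To build the safety configuration described in the note above, only the cmake invocation changes:

```sh
# Safety build: link against the safety versions of the cuPVA libraries
cmake -DPVA_BUILD_MODE=QNX -DPVA_SAFETY=ON ..
```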
Note

Depending on your environment, some changes to the above may be necessary:

- If using a node-locked license, SNPSLMD_LICENSE_FILE may be omitted.
- As an alternative to SNPSLMD_LICENSE_FILE, the user can specify LM_LICENSE_FILE. Consult the FlexLM documentation for more information.
- If not using the ASIP Programmer packages provided by NVIDIA, PVA_GEN<X>_ASIP_PATH environment variables should be set for each PVA generation.
Deploying Samples#
Refer to Deploying Applications for information on how to deploy built applications to the SoC.