Deploying Applications#
To deploy your application to a supported target, first ensure that you have flashed the SoC with a compatible BSP image. See the release notes for the list of supported operating systems.
It is also recommended to review the release notes and PVA documentation included with your operating system.
Deploying the cuPVA Runtime#
PVA applications require the cuPVA host runtime, which can be linked statically or dynamically. If it is linked dynamically, the runtime must be deployed along with your application. The cuPVA host dynamic runtime uses symlinks to select the appropriate version, and runtimes with the same major.minor version are ABI compatible. Therefore, always deploy the latest available cuPVA host runtime matching the major.minor release that you used to build your application.
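The symlink scheme can be sketched as follows. The directory and library names below are illustrative (a scratch directory stands in for the deployed filesystem); the point is that the major.minor symlink resolves to the full-version binary:

```shell
# Illustrative only: recreate a cuPVA-style versioned symlink chain in a
# scratch directory. On a real target these files live under a path visible
# to the dynamic loader.
dir=$(mktemp -d)
touch "$dir/libcupva_host.so.2.9.0"                       # full-version runtime binary
ln -s libcupva_host.so.2.9.0 "$dir/libcupva_host.so.2.9"  # major.minor symlink
readlink "$dir/libcupva_host.so.2.9"                      # prints: libcupva_host.so.2.9.0
```

Because applications link against the major.minor name, a newer patch release can be dropped in by replacing the binary and repointing the symlink, without relinking the application.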
For some platforms, the cuPVA runtime comes in safety and non-safety variants. Be sure to deploy the correct runtime for your build configuration.
Deploying the cuPVA Runtime on L4T#
On L4T targets, the cuPVA host dynamic runtime can be installed with:
sudo apt install pva-sdk-2.9-l4t
See installation for more details. The appropriate .deb file can also be extracted from the local installer and redistributed with your application.
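If you redistribute the runtime files rather than the package itself, the payload of a .deb can be unpacked with dpkg-deb. A minimal sketch, using a hypothetical package file name (the real name comes from the local installer):

```shell
# Hypothetical package name; substitute the .deb obtained from the local installer.
dpkg-deb -x pva-sdk-2.9-l4t.deb ./pva-runtime
# The runtime libraries can then be staged from ./pva-runtime into your
# application's deployment tree.
```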
Note
Installing the pva-sdk-2.9-l4t package also runs ldconfig so that the cuPVA runtime can be found system-wide without using LD_LIBRARY_PATH or similar.
Deploying the cuPVA Runtime on Automotive Platforms with Read-Only Filesystems#
On targets with read-only filesystems, the cuPVA host runtime may need to be deployed to the filesystem prior to flashing the board. For example, in DRIVE OS, users typically modify the filesystem using the DRIVE OS SDK/PDK. Refer to the SDK/PDK documentation for details about how to deploy binaries and files to your chosen operating system.
The cuPVA runtime libraries are installed with the PVA SDK on the host under /opt/nvidia/pva-sdk-2.9/lib/<arch>.
Set filesystem access and ownership permissions as required for a library that will be loaded by a user's application, and place the libraries at the correct filesystem paths, including the symlinks needed for correct operation.
The dynamic runtime library and its symlinks must reside in a path recognized by the dynamic loader. This can be achieved by configuring
your system (for example, by using ldconfig on Linux platforms) or by setting the LD_LIBRARY_PATH environment variable when
launching a cuPVA application.
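The steps above can be sketched as follows against a scratch staging tree. The paths, library name, and the ld.so.conf.d approach are illustrative; on a real target the staging tree corresponds to the filesystem image that gets flashed, and ldconfig would then be run against it:

```shell
# Stage the runtime with restrictive permissions, create the version symlink,
# and make the library path known to the dynamic loader.
stage=$(mktemp -d)
mkdir -p "$stage/usr/libnvidia" "$stage/etc/ld.so.conf.d"
echo placeholder > "$stage/libcupva_host.so.2.9.0"   # stand-in for the real library
install -m 555 "$stage/libcupva_host.so.2.9.0" "$stage/usr/libnvidia/"
ln -s libcupva_host.so.2.9.0 "$stage/usr/libnvidia/libcupva_host.so.2.9"
# Loader configuration (Linux-style); alternatively, export LD_LIBRARY_PATH
# when launching the application instead of installing a conf file.
echo "/usr/libnvidia" > "$stage/etc/ld.so.conf.d/cupva.conf"
ls -l "$stage/usr/libnvidia"
```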
Note
The cuPVA host utility library is not necessary to build and run PVA applications, but does provide some additional data logging and debug utilities, beyond what is available in the host runtime library. Some samples use this library as a demonstration.
Example: Deploying the cuPVA Runtime to DRIVE OS QNX safety guest VM#
This section describes the steps for deploying the cuPVA host runtime on DRIVE OS QNX safety guest VM filesystem using DRIVE OS 6.5.4.1. The steps may vary depending on the chosen deployment configuration and DRIVE OS version. Refer to the DRIVE OS documentation for your platform for more details.
Prerequisites#
Ensure that DRIVE OS is installed following the DRIVE OS documentation.
Ensure that you can rebuild the secondary IFS without any modifications, following the DRIVE OS documentation under “Building the Secondary IFS Using Build-FS”.
Ensure that the PVA SDK is installed following Installation. As these instructions target QNX safety, ensure that the pva-sdk-2.9-qnx-safety-dev package is installed for PVA SDK.
Deploying the cuPVA host runtime#
Create a CopyTarget configuration describing how you wish to deploy the cuPVA host runtime. The following example shows how to deploy the cuPVA host safety runtime to the guest safety filesystem, while creating the appropriate symlinks. This should be placed at
<DRIVEOS_SDK_PATH>/drive-qnx-safety/filesystem/copytarget/manifest/copytarget-sifs-cupva.yaml.
copytarget-sifs-cupva.yaml:
version: 1.4.10
fileList:
  - destination: /usr/libnvidia/libcupva_host_safety.so.2.9.0
    source:
      pdk_sdk_installed_path: /opt/nvidia/pva-sdk-2.9/lib/aarch64-qnx710/libcupva_host_safety.so.2.9.0
    perm: 555
    owner: root
    group: root
    filesystems:
      guest_safety:
        required: yes
        jama: PVA60-REQ-1046
        customizable: no
        asil: B
        purpose: cuPVA host runtime library.
        element: PVA
  - destination: /usr/libnvidia/libcupva_host_safety.so.2.9
    source:
      pdk_sdk_installed_path: /usr/libnvidia/libcupva_host_safety.so.2.9.0
    perm: 555
    owner: root
    group: root
    create_symlink: true
    filesystems:
      guest_safety:
        required: yes
        jama: PVA60-REQ-1046
        customizable: no
        asil: B
        purpose: cuPVA host runtime library.
        element: PVA
Note
Many of the samples and tutorials distributed with the PVA SDK also depend on libcupva_host_utils, which is not a safety qualified library and should not be deployed in safety critical configurations. For development purposes, libcupva_host_utils can be deployed in a debug overlay. Refer to the DRIVE OS documentation for more details.
Add the CopyTarget yaml to the desired configuration. This can be done by adding it to the “CopyTargets” array in the file
<DRIVEOS_SDK_PATH>/drive-qnx-safety/filesystem/build-fs/configs/guest_vm_secondary_safety.CONFIG.json.
guest_vm_secondary_safety.CONFIG.json (excerpt):
"CopyTargets": [
    "${IFS_COPYTARGET_DIR}/guest_vm_secondary_safety.yaml",
    "${IFS_COPYTARGET_DIR}/copytarget-qnx-fs-with-vksc.yaml",
    "${IFS_COPYTARGET_DIR}/copytarget-sifs-trt.yaml",
    "${IFS_COPYTARGET_DIR}/copytarget-sifs-cupva.yaml"
],
Rebuild the secondary IFS following the DRIVE OS documentation using the qnx_create_ifs.sh tool.
Bind partitions following the DRIVE OS documentation.
Put the target into recovery mode and flash using the bootburn tool according to the DRIVE OS documentation.
Deploying Your Application#
After the cuPVA host runtime has been deployed, deploy your application by installing it on the target device.
In addition to deploying your application, you may need to deploy an allowlist. See vpu_allowlist for more information.