DriveWorks SDK Reference
3.5.78 Release
For Test and Development only

Choosing the GPU for Execution
SW Release Applicability: This tutorial is applicable to modules in both NVIDIA DriveWorks and NVIDIA DRIVE Software releases.

NVIDIA DRIVE platforms provide these GPUs on which NVIDIA® DriveWorks can run NVIDIA® CUDA® workloads:

  • Integrated GPU (iGPU).
  • Discrete GPU (dGPU), which is not present on all DRIVE platforms.

The dGPU is faster than the iGPU but is limited to being a CUDA coprocessor. It cannot run graphics code (OpenGL/OpenGL ES).

There are two methods for controlling which GPU executes the CUDA workload.

Method 1: Copy the CUDA Images to the iGPU for Visualization

Use the DriveWorks context API to select the GPU on which to run: dwContext_selectGPUDevice(), which returns a dwStatus. If the chosen GPU is the dGPU and rendering is needed, use an image streamer to copy the results to the iGPU for visualization.

For details on how this is done, see the DriveWorks samples.
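The flow above can be sketched as follows. This is an illustrative sketch, not compilable code: error handling is omitted, the variables (ctx, cudaImage, glImage, and so on) are assumed to exist, and the exact streamer signatures may differ between DriveWorks releases, so consult the samples for the authoritative sequence.

```cpp
// Illustrative sketch only. Assumes an initialized DriveWorks context `ctx`.

// Select the dGPU (device index 0 when a dGPU is present) for CUDA work.
dwStatus status = dwContext_selectGPUDevice(0, ctx);

// ... run the CUDA workload, producing a CUDA image `cudaImage` ...

// If rendering is needed, stream the CUDA image to a GL image on the iGPU.
// The image streamer is used here as in the DriveWorks samples.
dwImageStreamerHandle_t streamer;
dwImageStreamerGL_initialize(&streamer, &cudaImageProps, DW_IMAGE_GL, ctx);
dwImageStreamerGL_producerSend(cudaImage, streamer);
dwImageStreamerGL_consumerReceive(&glImage, timeout, streamer);
// ... render glImage with the DriveWorks renderer on the iGPU ...
```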

Method 2: Restrict GPU Enumeration

Set this environment variable: CUDA_VISIBLE_DEVICES=1. This setting limits CUDA applications to discovering (enumerating) only the iGPU.
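As a hedged example, the variable can be set for a single launch or exported for the whole shell session; the application name below is a placeholder, not a real sample:

```shell
# Restrict CUDA device enumeration to the iGPU
# (device index 1 when a dGPU is present).
export CUDA_VISIBLE_DEVICES=1

# Any CUDA-based DriveWorks application launched from this shell now
# enumerates only the iGPU, e.g.:
#   ./my_driveworks_app    (placeholder application name)

echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
```

Setting the variable per launch (`CUDA_VISIBLE_DEVICES=1 ./my_driveworks_app`) avoids affecting other processes started from the same shell.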

If CUDA_VISIBLE_DEVICES is unset, the GPUs enumerate as follows:

  • 0 corresponds to the dGPU, if present.
  • 1 corresponds to the iGPU.

DriveWorks runs the CUDA workload on the first enumerated GPU, which is the dGPU (if present).

For more information, see Appendix J, CUDA Environment Variables, in the CUDA C Programming Guide.