NVIDIA DRIVE OS Linux SDK API Reference Release
For Test and Development only
GPU access API: Device management (safety subset)

Detailed Description

Data Structures

struct  NvRmGpuDeviceOpenAttrRec
 Extensible attribute structure for NvRmGpuDeviceOpen() More...


#define NVRM_GPU_DEVICE_INDEX_DEFAULT
 Pseudo-index for the default (primary) device. More...
#define NVRM_GPU_DEFINE_DEVICE_OPEN_ATTR(x)   NvRmGpuDeviceOpenAttr x = { NvRmGpuSyncType_Default, false }
 Definer macro for NvRmGpuDeviceOpen(). More...


typedef struct NvRmGpuDeviceRec NvRmGpuDevice
 Device handle. More...
typedef struct NvRmGpuDeviceOpenAttrRec NvRmGpuDeviceOpenAttr
 Extensible attribute structure for NvRmGpuDeviceOpen() More...


enum  NvRmGpuSyncType { NvRmGpuSyncType_Default, NvRmGpuSyncType_SyncFd, NvRmGpuSyncType_Syncpoint }
 Inter-engine synchronization type for GPU jobs. More...


NvError NvRmGpuDeviceOpen (NvRmGpuLib *hLib, int deviceIndex, const NvRmGpuDeviceOpenAttr *attr, NvRmGpuDevice **phDevice)
 Opens a GPU device. More...
NvError NvRmGpuDeviceClose (NvRmGpuDevice *hDevice)
 Closes the GPU device. More...

Macro Definition Documentation


◆ NVRM_GPU_DEFINE_DEVICE_OPEN_ATTR

#define NVRM_GPU_DEFINE_DEVICE_OPEN_ATTR (x)   NvRmGpuDeviceOpenAttr x = { NvRmGpuSyncType_Default, false }

Definer macro for NvRmGpuDeviceOpen().

This macro defines a variable of type NvRmGpuDeviceOpenAttr with the default values.

ASIL-B Operational mode: Init

Definition at line 506 of file nvrm_gpu.h.
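To make the expansion concrete, the following self-contained sketch reproduces the definer macro together with simplified stand-in definitions. Only the initializer { NvRmGpuSyncType_Default, false } is taken from the documented macro; the struct field names and the full enumerator list are assumptions, and the real definitions live in nvrm_gpu.h:

```c
#include <stdbool.h>

/* Stand-in definitions for illustration only -- not the SDK's own.
   The second struct field's name is hypothetical. */
typedef enum {
    NvRmGpuSyncType_Default,
    NvRmGpuSyncType_SyncFd,
    NvRmGpuSyncType_Syncpoint
} NvRmGpuSyncType;

typedef struct NvRmGpuDeviceOpenAttrRec {
    NvRmGpuSyncType syncType;   /* inter-engine sync type (first field) */
    bool reserved;              /* hypothetical name for the second field */
} NvRmGpuDeviceOpenAttr;

/* The documented definer macro: defines variable 'x' with defaults. */
#define NVRM_GPU_DEFINE_DEVICE_OPEN_ATTR(x) \
    NvRmGpuDeviceOpenAttr x = { NvRmGpuSyncType_Default, false }
```

Because the macro is a variable definition, it is used at declaration scope, after which individual fields can be overridden before the open call.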



◆ NVRM_GPU_DEVICE_INDEX_DEFAULT

#define NVRM_GPU_DEVICE_INDEX_DEFAULT

Pseudo-index for the default (primary) device.

By default, this is the first GPU enumerated by NvRmGpuLibListDevices(). This can be overridden with environment variable NVRM_GPU_DEFAULT_DEVICE_INDEX.

Definition at line 398 of file nvrm_gpu.h.

Typedef Documentation

◆ NvRmGpuDevice

typedef struct NvRmGpuDeviceRec NvRmGpuDevice

Device handle.


Definition at line 118 of file nvrm_gpu.h.

◆ NvRmGpuDeviceOpenAttr

typedef struct NvRmGpuDeviceOpenAttrRec NvRmGpuDeviceOpenAttr

Extensible attribute structure for NvRmGpuDeviceOpen().

Use NVRM_GPU_DEFINE_DEVICE_OPEN_ATTR() to define the attribute variable with defaults.

Enumeration Type Documentation

◆ NvRmGpuSyncType

Inter-engine synchronization type for GPU jobs.

The usual GPU channels (also known as KMD kickoff channels) support attaching pre-fences and post-fences with job submission. The pre-fence is a synchronization condition that must be met before the job execution can begin. Respectively, the post-fence is a synchronization condition that will be triggered after the job execution has completed. This allows a larger task to be split into parts where GPU and other engines seamlessly process the data in multiple stages. For example:

  • camera produces a frame
  • GPU processes the frame after camera has produced it
  • Display controller displays the frame after GPU has processed it

Depending on the operating system and HW capabilities, different synchronization object types are available:

  • Tegra HOST1X syncpoint — A syncpoint is a hardware register provided by the SoC (generally, a 32-bit integer with wrap-around safe semantics). The pre-sync condition waits until the syncpoint value reaches a threshold, and the post-sync condition increments the syncpoint value.
  • Android/Linux sync fd — A synchronization fence backed by a file (sync_file). The fence has two states: untriggered (initial state) and triggered. The pre-sync condition waits for the fence to become triggered, and the post-sync condition triggers the fence.
This is not to be confused with GPU semaphores. GPU semaphores are usually used to synchronize jobs executed within a single device, between multiple GPUs, or sometimes between the GPU and the CPU. A GPU semaphore is simply a memory location with semantics similar to Tegra HOST1X syncpoints: generally, waiters wait until the value at the memory location reaches a specific threshold, and they are released by setting the semaphore to the threshold value or above (though other modes exist, too).
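The "wrap-around safe semantics" mentioned for syncpoint-style 32-bit counters can be sketched as follows. This helper is illustrative only and is not part of the nvrm_gpu API; it shows the standard way a threshold comparison stays correct across counter wrap-around:

```c
#include <stdint.h>
#include <stdbool.h>

/* Wrap-around safe "value reached threshold" check for a 32-bit counter,
   in the spirit of HOST1X syncpoint / GPU semaphore release semantics. */
static bool counter_reached(uint32_t current, uint32_t threshold)
{
    /* Unsigned subtraction wraps modulo 2^32; reinterpreting the
       difference as signed keeps the comparison correct even after the
       counter wraps, as long as the two values are within half the
       counter range of each other. */
    return (int32_t)(current - threshold) >= 0;
}
```

A plain `current >= threshold` comparison would incorrectly fail once the counter wraps past zero, which is why the signed-difference form is used for free-running hardware counters.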


Enumerator

NvRmGpuSyncType_Default
 Default sync type.
 Remark: Depending on the context, this is the platform default, device default, or channel default.
 See also: NvRmGpuDeviceInfo::defaultSyncType, NvRmGpuChannelInfo::syncType

NvRmGpuSyncType_SyncFd
 Synchronization type is Android/Linux sync fd.

NvRmGpuSyncType_Syncpoint
 Synchronization type is Tegra HOST1X syncpoint.

Definition at line 439 of file nvrm_gpu.h.

Function Documentation

◆ NvRmGpuDeviceClose()

NvError NvRmGpuDeviceClose ( NvRmGpuDevice *  hDevice )

Closes the GPU device.

Parameters
 [in] hDevice  Device handle to close. May be NULL.

Returns
 The usual NvError code

Return values
 NvSuccess  Device closed and all related resources released successfully, or device handle was NULL.
 NvError_*  Unspecified error. The device handle is closed, but some resources may be left unreleased. The error code is returned for diagnostic purposes only.

Every resource attached to the device must be closed before the device itself is closed, to avoid leaks and dangling pointers. In debug builds, nvrm_gpu keeps track of the associated resources and asserts if this contract is violated.
See also
NvRmGpuAddressSpaceClose(), NvRmGpuChannelClose(), NvRmGpuCtxSwTraceClose(), NvRmGpuTaskSchedulingGroupClose(), NvRmGpuRegOpsSessionClose(), NvRmGpuDeviceEventSessionClose()

ASIL-B Operational mode: De-Init

◆ NvRmGpuDeviceOpen()

NvError NvRmGpuDeviceOpen ( NvRmGpuLib *  hLib,
 int  deviceIndex,
 const NvRmGpuDeviceOpenAttr *  attr,
 NvRmGpuDevice **  phDevice )

Opens a GPU device.

Parameters
 [in] hLib  Library handle
 [in] deviceIndex  Device index (NvRmGpuLibDeviceListEntry::deviceIndex) or NVRM_GPU_DEVICE_INDEX_DEFAULT for the default device.
 [in] attr  Pointer to the device open attributes, or NULL for defaults.
 [out] phDevice  Pointer to receive the device handle.

Returns
 The usual NvError code

Return values
 NvSuccess  Device opened successfully.
 NvError_BadValue  Bad device index
 NvError_DeviceNotFound  Device node not found
 NvError_AccessDenied  Not enough privileges to access the device
 NvError_*  Unspecified error. The error code is returned for diagnostic purposes.

Only attached GPUs can be opened. See NvRmGpuLibDeviceListEntry::deviceState.
See NVRM_GPU_DEVICE_INDEFAULT is discussed under NVRM_GPU_DEVICE_INDEX_DEFAULT.
See also
NvRmGpuLibAttachDevice(), NvRmGpuLibDetachDevice()

ASIL-B Operational mode: Init
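Taken together, a typical open/use/close sequence might look like the following sketch. It assumes the SDK header name and a valid library handle obtained elsewhere, and it is not compilable outside the SDK; error handling beyond the open call is elided:

```c
#include <nvrm_gpu.h>  /* assumed SDK header name */

/* Sketch: open the default device with default attributes, use it, and
   close it. Per the close contract above, any resources attached to the
   device (channels, address spaces, ...) must be closed before
   NvRmGpuDeviceClose() is called. */
static NvError UseDefaultDevice(NvRmGpuLib *hLib)
{
    NVRM_GPU_DEFINE_DEVICE_OPEN_ATTR(attr);  /* attr = defaults */
    NvRmGpuDevice *hDevice = NULL;

    NvError err = NvRmGpuDeviceOpen(hLib, NVRM_GPU_DEVICE_INDEX_DEFAULT,
                                    &attr, &hDevice);
    if (err != NvSuccess)
        return err;  /* e.g. NvError_DeviceNotFound, NvError_AccessDenied */

    /* ... submit work, etc. ... */

    /* Close-time errors are diagnostic only; the handle is closed
       regardless. NvRmGpuDeviceClose(NULL) is a no-op returning
       NvSuccess. */
    return NvRmGpuDeviceClose(hDevice);
}
```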