Resource#
APIs to create and manage resources used by other cuPVA APIs.
One of the primary resources used by the cuPVA APIs is device pointers. Device pointers represent memory which may be accessed by the PVA’s DMA engine. Device pointers are returned by the cuPVA allocation and mapping APIs.
Device pointers behave like regular pointers in many ways. The user may perform pointer math on device pointers; as long as the math does not take the pointer out of bounds of the buffer it represents, the resulting pointer is also a valid device pointer. However, the user must not dereference device pointers: accessing such addresses causes a memory fault. To access the corresponding memory on CCPLEX, the user may convert the device pointer to a host mapped pointer (valid only for memories accessible by CCPLEX). To access the memory on the VPU, the user must use the DMA engine. To retrieve the corresponding IOVA for use with cuPVA device APIs, the user should assign the device pointer to a parameter.
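The following sketch illustrates these rules in C. It assumes the cuPVA host C API header is named cupva_host.h and uses an arbitrary 4 KiB buffer; only functions documented on this page are used, and error handling is abbreviated.

#include <stdint.h>
#include <cupva_host.h> /* assumed header name */

void devicePointerExample(void)
{
    void *devPtr = NULL;
    /* Allocate a 4 KiB DRAM buffer that the PVA DMA engine can access. */
    if (CupvaMemAlloc(&devPtr, 4096, CUPVA_READ_WRITE, CUPVA_ALLOC_DRAM) != CUPVA_ERROR_NONE)
    {
        return;
    }

    /* Pointer math is allowed while the result stays inside the buffer. */
    void *devPtrOffset = (void *)((uint8_t *)devPtr + 1024);
    (void)devPtrOffset; /* e.g. assign to a VPU parameter to obtain the IOVA */

    /* Dereferencing devPtr on the CPU would fault; obtain the host mapping instead. */
    void *hostPtr = NULL;
    if (CupvaMemGetHostPointer(&hostPtr, devPtr) == CUPVA_ERROR_NONE)
    {
        ((uint8_t *)hostPtr)[0] = 0xABU; /* CPU access goes through the host mapped pointer */
    }

    (void)CupvaMemFree(devPtr);
}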
Enumerations#
- cupvaBufferType_t
Specifies the host-memory-buffer type.
- cupvaL2SRAMPolicyType_t
Specifies the L2SRAM policy type.
- cupvaMemAccessType_t
Specify the host-allocated memory permission.
- cupvaMemAllocType_t
Specifies the host-memory-allocation type.
- cupvaMemExternalAllocType_t
Specify the memory types that can be registered with CUPVA.
- cupvaMemType_t
Specify the memory types.
- cupvaSurfaceFormatType_t
Enumeration of data storage format.
Functions#
- cupvaError_t CupvaExecutableCreate(cupvaExecutable_t *const exec, const void *const data, int32_t const size)
Create a new Executable object from a pre-loaded binary buffer.
- cupvaError_t CupvaExecutableDestroy(cupvaExecutable_t const exec)
Destroy executable object.
- cupvaError_t CupvaImportFromHostPtr(void **const devPtr, void *const hostPtr, int64_t const size, cupvaMemAccessType_t const accessType)
Create a PVA pointer from a libc-allocated CPU host pointer (L4T only API).
- cupvaError_t CupvaMapL2(void **const l2Ptr, void *const dramPtr, int64_t const size, cupvaL2SRAMPolicyType_t const policy)
Creates a device pointer for an L2SRAM buffer of given size with an optional DRAM buffer backing for persistence and associated cache policy.
- cupvaError_t CupvaMemAlloc(void **const devPtr, int64_t const size, cupvaMemAccessType_t const accessType, cupvaMemAllocType_t const allocType)
Allocate memory that is accessible by PVA engine.
- cupvaError_t CupvaMemConvertToGeometry(cupvaSurfaceAttributes_t const *attr, int32_t const bpp, struct PlanarGeometry *const geom)
Convert the surface attributes to planar geometry.
- cupvaError_t CupvaMemFree(void *const devicePtr)
Free the allocated memory.
- cupvaError_t CupvaMemGetHostPointer(void **const hostPtr, void *const devicePtr)
Get the host CPU mapped pointer for the given PVA pointer.
- cupvaError_t CupvaMemGetL2BaseAddress(void **const L2BaseAddrPtr)
Request a device pointer to the L2SRAM.
- cupvaError_t CupvaMemGetPointerAttributes(void *const devicePtr, cupvaPointerAttributes_t *const attr)
Query the pointer attributes for the given PVA pointer.
- cupvaError_t CupvaMemGetSurfaceAttributes(void *const devicePtr, cupvaSurfaceAttributes_t *const attr)
Query the surface attributes for the given PVA pointer.
- cupvaError_t CupvaMemRegister(const void *const ptr, int64_t const size, cupvaMemExternalAllocType_t const externalAllocType)
Register a pointer to CUPVA space.
- cupvaError_t CupvaMemUnregister(const void *const ptr)
Unregisters a pointer from CUPVA space.
- cupvaError_t CupvaSetVPUPrintBufferSize(uint32_t const size)
Set the size of VPU print buffer for the current context.
Data Structures#
- cupvaPlaneInfo_t
Plane info.
- cupvaPointerAttributes_t
Pointer attributes.
- cupvaSurfaceAttributes_t
Surface attributes.
Typedefs#
- cupvaExecutable_t
A handle to a VPU binary.
Enumerations#
-
enum cupvaBufferType_t#
Specifies the host-memory-buffer type.
Values:
-
enumerator CUPVA_BUFFER_TYPE_INVALID#
Sentinel value
-
enumerator CUPVA_SURFACE#
Buffer with surface metadata
-
enumerator CUPVA_RAW#
Raw buffer without surface metadata
-
enumerator CUPVA_BUFFER_TYPE_MAX#
One greater than max valid value
-
enum cupvaL2SRAMPolicyType_t#
Specifies the L2SRAM policy type.
Values:
-
enumerator CUPVA_L2SRAM_POLICY_INVALID#
Sentinel value
-
enumerator CUPVA_L2SRAM_POLICY_FILL#
FILL policy
-
enumerator CUPVA_L2SRAM_POLICY_FLUSH#
FLUSH policy
-
enumerator CUPVA_L2SRAM_POLICY_FILL_AND_FLUSH#
FILL_AND_FLUSH policy
-
enumerator CUPVA_L2SRAM_POLICY_MAX#
One greater than max valid value
-
enum cupvaMemAccessType_t#
Specify the host-allocated memory permission.
Values:
-
enumerator CUPVA_MEM_ACCESS_TYPE_INVALID#
Sentinel value
-
enumerator CUPVA_READ#
read permission
-
enumerator CUPVA_WRITE#
write permission
-
enumerator CUPVA_READ_WRITE#
read-write permission
-
enumerator CUPVA_MEM_ACCESS_TYPE_MAX#
One greater than max valid value
-
enum cupvaMemAllocType_t#
Specifies the host-memory-allocation type.
Values:
-
enumerator CUPVA_MEM_ALLOC_TYPE_INVALID#
Sentinel value
-
enumerator CUPVA_ALLOC_DRAM#
host-memory-allocation type for DRAM buffer
-
enumerator CUPVA_ALLOC_CVSRAM#
host-memory-allocation type for CVSRAM buffer. Not supported in this release, silently maps to DRAM.
-
enumerator CUPVA_MEM_ALLOC_TYPE_MAX#
One greater than max valid value
-
enum cupvaMemExternalAllocType_t#
Specify the memory types that can be registered with CUPVA.
Values:
-
enumerator CUPVA_EXTERNAL_ALLOC_TYPE_INVALID#
Sentinel value
-
enumerator CUPVA_EXTERNAL_ALLOC_TYPE_CUDA#
CUDA device pointer
-
enumerator CUPVA_EXTERNAL_ALLOC_TYPE_HOST#
Host pointer
-
enumerator CUPVA_EXTERNAL_ALLOC_TYPE_MAX#
One greater than max valid value
-
enum cupvaMemType_t#
Specify the memory types.
Values:
-
enumerator CUPVA_MEM_TYPE_INVALID#
Sentinel value
-
enumerator CUPVA_DRAM#
MemType - Dynamic RAM
-
enumerator CUPVA_SRAM#
MemType - Static RAM
-
enumerator CUPVA_VMEM#
MemType - VPU Memory
-
enumerator CUPVA_MEM_TYPE_MAX#
One greater than max valid value
-
enum cupvaSurfaceFormatType_t#
Enumeration of data storage format.
Values:
-
enumerator CUPVA_SURF_FMT_TYPE_INVALID#
Sentinel value
-
enumerator CUPVA_PITCH_LINEAR#
pitch-linear format of data
-
enumerator CUPVA_BLOCK_LINEAR#
Block-linear-typed data can only be stored in DRAM.
-
enumerator CUPVA_SURF_FMT_TYPE_MAX#
One greater than max valid value
Functions#
-
cupvaError_t CupvaExecutableCreate(cupvaExecutable_t *const exec, const void *const data, int32_t const size)#
Create a new Executable object from a pre-loaded binary buffer.
cupvaExecutable_t handles are used to register VPU binaries with the PVA system. The cupvaExecutable_t must not be destroyed until all workloads using it have completed.
Usage considerations
Allowed context for the API call
Thread-safe: No
API group
Init: Yes
Runtime: No
De-Init: No
- Parameters:
exec – [out] The pointer to return the newly constructed Executable object.
data – [in] The host pointer to the buffer holding the VPU binary. Should be 4-byte aligned.
size – [in] The buffer size in bytes.
- Returns:
cupvaError_t The completion status of the operation. Possible values are:
CUPVA_ERROR_NONE if the operation was successful.
CUPVA_INVALID_ARGUMENT indicates one of the following:
size <= 0.
data is nullptr
data is not correctly aligned
data cannot be parsed
CUPVA_INCOMPATIBLE_VERSION if Executable was built with an incompatible version of CUPVA
CUPVA_DRIVER_API_ERROR if driver returned error due to one of the following:
Internal driver memory allocation failed.
Parsing executable object failed.
PVA engine was in a bad state.
CUPVA_NOT_ALLOWED_IN_OPERATIONAL_STATE if called when NVIDIA DRIVE OS VM state is “Operational”
-
cupvaError_t CupvaExecutableDestroy(cupvaExecutable_t const exec)#
Destroy executable object.
Usage considerations
Allowed context for the API call
Thread-safe: No
API group
Init: No
Runtime: No
De-Init: Yes
- Parameters:
exec – [in] The pointer to the Executable object.
- Returns:
cupvaError_t The completion status of the operation. Possible values are:
CUPVA_ERROR_NONE if the operation was successful.
CUPVA_INVALID_ARGUMENT if exec was a NULL pointer.
CUPVA_NOT_ALLOWED_IN_OPERATIONAL_STATE if called when NVIDIA DRIVE OS VM state is “Operational”
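A minimal usage sketch in C. The header name cupva_host.h is an assumption, and binaryData/binarySize stand for an application-provided, 4-byte aligned VPU binary buffer:

#include <stdint.h>
#include <cupva_host.h> /* assumed header name */

/* binaryData/binarySize are hypothetical: a pre-loaded, 4-byte aligned VPU binary. */
cupvaError_t loadAndUnloadExecutable(const void *binaryData, int32_t binarySize)
{
    cupvaExecutable_t exec;
    cupvaError_t err = CupvaExecutableCreate(&exec, binaryData, binarySize);
    if (err != CUPVA_ERROR_NONE)
    {
        return err;
    }
    /* ... create CmdPrograms from exec and wait for all workloads to complete ... */
    return CupvaExecutableDestroy(exec);
}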
-
cupvaError_t CupvaImportFromHostPtr(void **const devPtr, void *const hostPtr, int64_t const size, cupvaMemAccessType_t const accessType)#
Create a PVA pointer from a libc-allocated CPU host pointer (L4T only API).
This API is available on L4T only, and is not available in safety builds.
Usage considerations
Allowed context for the API call
Thread-safe: Yes
API group
Init: Yes
Runtime: No
De-Init: No
- Parameters:
devPtr – [out] The double pointer to return the device pointer to the imported memory.
hostPtr – [in] The input host pointer allocated by CPU user code.
size – [in] Specifies the memory size in bytes.
accessType – [in] Specifies the access type of the mapped memory.
- Returns:
cupvaError_t The completion status of the operation. Possible values are:
CUPVA_ERROR_NONE if the operation was successful.
CUPVA_INVALID_ARGUMENT if size or accessType was invalid.
CUPVA_INTERNAL_ERROR if VA allocation failed.
CUPVA_DRIVER_API_ERROR if getting a device pointer failed due to a driver API error.
CUPVA_UNSUPPORTED_FEATURE if the current platform does not support the feature.
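A hedged sketch in C (L4T only). The header name is an assumption, as is the convention that imported mappings are released with CupvaMemFree:

#include <stdlib.h>
#include <stdint.h>
#include <cupva_host.h> /* assumed header name */

void importHostBuffer(void)
{
    int64_t const size = 8192;
    void *hostPtr = malloc((size_t)size); /* libc-allocated CPU buffer */
    void *devPtr = NULL;

    if ((hostPtr != NULL) &&
        (CupvaImportFromHostPtr(&devPtr, hostPtr, size, CUPVA_READ_WRITE) == CUPVA_ERROR_NONE))
    {
        /* devPtr may now be used with the PVA DMA engine. */
        (void)CupvaMemFree(devPtr); /* assumed: releases the imported mapping */
    }
    free(hostPtr);
}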
-
cupvaError_t CupvaMapL2(void **const l2Ptr, void *const dramPtr, int64_t const size, cupvaL2SRAMPolicyType_t const policy)#
Creates a device pointer for an L2SRAM buffer of given size with an optional DRAM buffer backing for persistence and associated cache policy.
This function creates a device pointer for an L2SRAM buffer of given size with an optional DRAM buffer backing for persistence and associated cache policy. The created L2SRAM device pointer can be used like any other device pointer. It can serve as the base address for an OffsetPointer, or raw pointer when configuring DataFlows.
If a DRAM buffer is provided, it will be used as the backing for the L2SRAM buffer. It can be used to FILL or FLUSH the L2SRAM buffer depending on the specified policy. The same DRAM buffer pointer can be used to create multiple pointers to the same L2SRAM buffer with different cache policies. Please note that when configuring a single CmdProgram or CmdMemcpy operation, you can only use one mapped L2SRAM pointer at a time.
Mapping should be unmapped with CupvaMemFree when the L2SRAM buffer is no longer needed. If specified, backing DRAM buffer should not be freed until all L2SRAM pointers mapped to it are unmapped.
Legacy CupvaCmdProgramSetL2Size() API calls will not be honored if mapped L2SRAM pointers are used for a CmdProgram or CmdMemcpy.
Please refer to cupva::mem::L2SRAMPolicyType for more details and usage examples.
MapL2 API requires driver version >= 2007.
Usage considerations
Allowed context for the API call
Thread-safe: Yes
API group
Init: Yes
Runtime: No
De-Init: No
- Parameters:
l2Ptr – [out] The double pointer to return the L2SRAM pointer.
dramPtr – [in] The DRAM device pointer to map to L2 cache. If dramPtr is nullptr, the L2SRAM will not have DRAM backing and policy will be ignored.
size – [in] The size of the memory region to map. Should be larger than 0.
policy – [in] The access policy for the L2 cache mapping (FILL, FLUSH, or FILL_AND_FLUSH).
- Returns:
cupvaError_t The completion status of the operation. Possible values are:
CUPVA_ERROR_NONE if the operation was successful.
CUPVA_INVALID_ARGUMENT if l2Ptr is NULL.
CUPVA_DRIVER_API_ERROR if mapping failed due to driver API error.
CUPVA_NOT_ALLOWED_IN_OPERATIONAL_STATE if called when NVIDIA DRIVE OS VM state is “Operational”
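The sketch below (C, header name assumed) maps a 64 KiB L2SRAM region with a DRAM backing using the FILL_AND_FLUSH policy; the exact fill/flush timing is governed by the policy documentation referenced above:

#include <stdint.h>
#include <cupva_host.h> /* assumed header name */

void mapL2Example(void)
{
    int64_t const size = 64 * 1024;
    void *dramPtr = NULL;
    void *l2Ptr = NULL;

    /* DRAM buffer used as the persistent backing for the L2SRAM region. */
    if (CupvaMemAlloc(&dramPtr, size, CUPVA_READ_WRITE, CUPVA_ALLOC_DRAM) != CUPVA_ERROR_NONE)
    {
        return;
    }

    if (CupvaMapL2(&l2Ptr, dramPtr, size, CUPVA_L2SRAM_POLICY_FILL_AND_FLUSH) == CUPVA_ERROR_NONE)
    {
        /* l2Ptr can serve as an OffsetPointer base or raw pointer in DataFlows. */
        (void)CupvaMemFree(l2Ptr);  /* unmap the L2SRAM mapping first */
    }
    (void)CupvaMemFree(dramPtr);    /* then free the backing DRAM buffer */
}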
-
cupvaError_t CupvaMemAlloc(void **const devPtr, int64_t const size, cupvaMemAccessType_t const accessType, cupvaMemAllocType_t const allocType)#
Allocate memory that is accessible by PVA engine.
Usage considerations
Allowed context for the API call
Thread-safe: Yes
API group
Init: Yes
Runtime: No
De-Init: No
- Parameters:
devPtr – [out] The double pointer to return the device pointer to newly allocated memory.
size – [in] The required memory size in bytes.
accessType – [in] Specifies the access type of the allocated memory.
allocType – [in] Specifies the memory type to allocate. On Xavier, the user can request memory in DRAM or CVSRAM (on-chip SRAM). On Orin, CVSRAM does not exist, so requests for CVSRAM are mapped to DRAM.
- Returns:
cupvaError_t The completion status of the operation. Possible values are:
CUPVA_ERROR_NONE if the operation was successful.
CUPVA_INVALID_ARGUMENT indicates one of the following:
devPtr was a NULL pointer.
Allocation size or access type was invalid.
CUPVA_INTERNAL_ERROR if VA allocation failed.
CUPVA_DRIVER_API_ERROR if allocation failed due to a driver API error.
CUPVA_NOT_ALLOWED_IN_OPERATIONAL_STATE if called when NVIDIA DRIVE OS VM state is “Operational”
CUPVA_INVALID_ARGUMENT if accessType is not CUPVA_READ_WRITE
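A brief sketch in C (header name assumed) showing the CVSRAM fallback behavior described for allocType:

#include <stdint.h>
#include <cupva_host.h> /* assumed header name */

cupvaError_t allocateScratch(void **devPtr, int64_t size)
{
    /* Request CVSRAM; on Orin the request silently falls back to DRAM. */
    cupvaError_t err = CupvaMemAlloc(devPtr, size, CUPVA_READ_WRITE, CUPVA_ALLOC_CVSRAM);
    if (err != CUPVA_ERROR_NONE)
    {
        *devPtr = NULL;
    }
    return err;
}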
-
cupvaError_t CupvaMemConvertToGeometry(cupvaSurfaceAttributes_t const *attr, int32_t const bpp, struct PlanarGeometry *const geom)#
Convert the surface attributes to planar geometry.
Usage considerations
Allowed context for the API call
Thread-safe: Yes
API group
Init: Yes
Runtime: No
De-Init: No
- Parameters:
attr – [in] The pointer to a cupvaSurfaceAttributes_t struct.
bpp – [in] The bytes per pixel.
geom – [out] The pointer to a PlanarGeometry struct.
- Returns:
cupvaError_t The completion status of the operation. Possible values are:
CUPVA_ERROR_NONE if the operation was successful.
CUPVA_INVALID_ARGUMENT indicates one of the following:
attr was a null pointer.
geom was a null pointer.
CUPVA_NOT_ALLOWED_IN_OPERATIONAL_STATE if current context is null and default context does not exist and NVIDIA DRIVE OS VM state is “Operational”
-
cupvaError_t CupvaMemFree(void *const devicePtr)#
Free the allocated memory.
Usage considerations
Allowed context for the API call
Thread-safe: No
API group
Init: No
Runtime: No
De-Init: Yes
- Parameters:
devicePtr – [in] The device pointer pointing to the allocated memory.
- Returns:
cupvaError_t The completion status of the operation. Possible values are:
CUPVA_ERROR_NONE if the operation was successful.
CUPVA_INVALID_ARGUMENT if devicePtr was invalid.
CUPVA_DRIVER_API_ERROR if deallocation failed due to a driver API error.
CUPVA_NOT_ALLOWED_IN_OPERATIONAL_STATE if called when NVIDIA DRIVE OS VM state is “Operational”
-
cupvaError_t CupvaMemGetHostPointer(void **const hostPtr, void *const devicePtr)#
Get the host CPU mapped pointer for the given PVA pointer.
Returns a host CPU mapped pointer for device memory that was allocated by CupvaMemAlloc, imported by CupvaImportFromHostPtr or the CupvaMemImport functions, or registered as a CUDA device pointer. The memory is automatically unmapped when the underlying device memory is freed.
Not all memories are mappable to CPU pointers:
CupvaMemAlloc and CupvaImportFromHostPtr memories are always mappable.
Mapping CUDA memory to host depends on CUDA’s access policies. Under current policies, pinned host memory (e.g. cudaMallocHost) is mappable. GPU cached device memory (e.g. cudaMalloc) is not.
For legacy reasons, this API supports obtaining a CPU mapped host pointer to device pointers imported from NvSciBuf using CupvaMemImport, if the internal allocation allows CPU mapping of the buffer. This behavior is deprecated and will be removed in a future release. Users should instead use NvSciBuf APIs directly to map CPU host pointers for NvSciBuf allocations.
Usage considerations
Allowed context for the API call
Thread-safe: Yes
API group
Init: Yes
Runtime: No
De-Init: No
- Parameters:
hostPtr – [out] The double pointer to return the mapped host pointer which can be accessed by CPU.
devicePtr – [in] The input device pointer which can only be accessed by PVA.
- Returns:
cupvaError_t The completion status of the operation. Possible values are:
CUPVA_ERROR_NONE if the operation was successful.
CUPVA_INVALID_ARGUMENT indicates one of the following:
hostPtr was a NULL pointer.
devicePtr value or type was invalid.
devicePtr is not mappable.
CUPVA_DRIVER_API_ERROR if getting CPU mapped pointer failed due to driver API error.
CUPVA_NOT_ALLOWED_IN_OPERATIONAL_STATE if current context is null and default context does not exist and NVIDIA DRIVE OS VM state is “Operational”
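An illustrative sketch in C (header name assumed) that handles the case where the device pointer is not mappable:

#include <stddef.h>
#include <string.h>
#include <cupva_host.h> /* assumed header name */

/* devPtr is any PVA device pointer, e.g. returned by CupvaMemAlloc. */
void zeroFillIfMappable(void *devPtr, size_t size)
{
    void *hostPtr = NULL;
    if (CupvaMemGetHostPointer(&hostPtr, devPtr) == CUPVA_ERROR_NONE)
    {
        memset(hostPtr, 0, size); /* CPU writes go through the host mapping */
    }
    else
    {
        /* CUPVA_INVALID_ARGUMENT: devPtr is not mappable (e.g. cudaMalloc memory). */
    }
}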
-
cupvaError_t CupvaMemGetL2BaseAddress(void **const L2BaseAddrPtr)#
Request a device pointer to the L2SRAM.
Pointers returned by this API will always compare as equal. However, each CmdProgram has its own L2 allocation, which is declared via CupvaCmdProgramSetL2Size(). The pointer returned by this API is a symbolic representation of any CmdProgram’s L2SRAM allocation.
The L2SRAM allocations of separate CmdPrograms can be made to alias and be persistent. Refer to CupvaCmdProgramSetL2Size for details.
Usage considerations
Allowed context for the API call
Thread-safe: Yes
API group
Init: Yes
Runtime: No
De-Init: No
- Parameters:
L2BaseAddrPtr – [out] L2SRAM device pointer, which can be used to access a CmdProgram’s L2SRAM allocation.
- Returns:
cupvaError_t The completion status of the operation. Possible values are:
CUPVA_ERROR_NONE if the operation was successful.
CUPVA_INVALID_ARGUMENT if L2BaseAddrPtr was a NULL pointer.
CUPVA_INTERNAL_ERROR if failed to initialize internal state.
CUPVA_DRIVER_API_ERROR if the PVA driver returned an unexpected error.
CUPVA_NOT_ALLOWED_IN_OPERATIONAL_STATE if current context is null and default context does not exist and NVIDIA DRIVE OS VM state is “Operational”
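A short sketch in C (header name assumed) retrieving the symbolic L2SRAM base pointer:

#include <cupva_host.h> /* assumed header name */

void *getSymbolicL2Base(void)
{
    void *l2Base = NULL;
    if (CupvaMemGetL2BaseAddress(&l2Base) != CUPVA_ERROR_NONE)
    {
        return NULL;
    }
    /* l2Base symbolically represents any CmdProgram's L2SRAM allocation
     * (declared via CupvaCmdProgramSetL2Size) and may be used, for example,
     * as a base address when configuring DataFlows. */
    return l2Base;
}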
-
cupvaError_t CupvaMemGetPointerAttributes(void *const devicePtr, cupvaPointerAttributes_t *const attr)#
Query the pointer attributes for the given PVA pointer.
The host pointer associated with the given device pointer is populated only when the device pointer is mappable; otherwise it is set to NULL. See CupvaMemGetHostPointer for details.
Usage considerations
Allowed context for the API call
Thread-safe: Yes
API group
Init: Yes
Runtime: No
De-Init: No
- Parameters:
devicePtr – [in] The input device pointer which can only be accessed by PVA.
attr – [out] The pointer to a cupvaPointerAttributes_t struct.
- Returns:
cupvaError_t The completion status of the operation. Possible values are:
CUPVA_ERROR_NONE if the operation was successful.
CUPVA_INVALID_ARGUMENT indicates one of the following:
attr was a NULL pointer.
devicePtr value or type was invalid.
CUPVA_NOT_ALLOWED_IN_OPERATIONAL_STATE if current context is null and default context does not exist and NVIDIA DRIVE OS VM state is “Operational”
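A minimal query sketch in C (header name assumed; the fields of cupvaPointerAttributes_t are not reproduced here and should be taken from the header):

#include <cupva_host.h> /* assumed header name */

void queryPointer(void *devPtr)
{
    cupvaPointerAttributes_t attr;
    if (CupvaMemGetPointerAttributes(devPtr, &attr) == CUPVA_ERROR_NONE)
    {
        /* Inspect the struct fields as declared in cupvaPointerAttributes_t;
         * the associated host pointer is NULL when devPtr is not mappable. */
    }
}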
-
cupvaError_t CupvaMemGetSurfaceAttributes(void *const devicePtr, cupvaSurfaceAttributes_t *const attr)#
Query the surface attributes for the given PVA pointer.
Usage considerations
Allowed context for the API call
Thread-safe: Yes
API group
Init: Yes
Runtime: No
De-Init: No
- Parameters:
devicePtr – [in] The input device pointer which can only be accessed by PVA.
attr – [out] The pointer to a cupvaSurfaceAttributes_t struct.
- Returns:
cupvaError_t The completion status of the operation. Possible values are:
CUPVA_ERROR_NONE if the operation was successful.
CUPVA_INVALID_ARGUMENT indicates one of the following:
devicePtr value or type was invalid.
attr was a null pointer.
incorrect surface format type returned by underlying APIs.
incorrect number of planes returned by underlying APIs.
CUPVA_NOT_ALLOWED_IN_OPERATIONAL_STATE if current context is null and default context does not exist and NVIDIA DRIVE OS VM state is “Operational”
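The following sketch in C (header name assumed) combines CupvaMemGetSurfaceAttributes with CupvaMemConvertToGeometry; the 2 bytes-per-pixel value is purely illustrative:

#include <cupva_host.h> /* assumed header name */

void describeSurface(void *devPtr)
{
    cupvaSurfaceAttributes_t attr;
    struct PlanarGeometry geom;

    if (CupvaMemGetSurfaceAttributes(devPtr, &attr) != CUPVA_ERROR_NONE)
    {
        return; /* devPtr may not refer to a surface-typed buffer */
    }
    if (CupvaMemConvertToGeometry(&attr, 2 /* bpp, illustrative */, &geom) == CUPVA_ERROR_NONE)
    {
        /* geom now describes the planar layout; see PlanarGeometry in the header. */
    }
}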
-
cupvaError_t CupvaMemRegister(const void *const ptr, int64_t const size, cupvaMemExternalAllocType_t const externalAllocType)#
Register a pointer to CUPVA space.
Registering a CUDA or HOST device pointer maps the pointer in CUPVA space which must be unregistered by calling CupvaMemUnregister() when the application has finished using the buffer.
The user should never register an address range that overlaps an address range which has already been registered explicitly; doing so is undefined behavior.
The user should call CupvaMemUnregister for each explicitly registered address range before freeing the underlying buffer (e.g. with cudaFree or the OS free()). Behavior is undefined if this requirement is not followed.
When using driver versions >= 2006, except for QNX Safety:
CUDA device pointers will automatically be registered/unregistered with CUPVA if CUDA interop is enabled.
Calling CupvaMemRegister/CupvaMemUnregister on CUDA device pointers is not necessary. These APIs will have no effect in this case and will not generate an error in order to preserve compatibility of user code.
HOST register is not currently supported.
Usage considerations
Allowed context for the API call
Thread-safe: Yes
API group
Init: Yes
Runtime: No
De-Init: No
- Parameters:
ptr – [in] The pointer to import, allocated according to externalAllocType.
size – [in] The size of the allocation in bytes.
externalAllocType – [in] Type of memory being registered.
- Returns:
cupvaError_t The completion status of the operation. Possible values are:
CUPVA_ERROR_NONE if the operation was successful.
CUPVA_INVALID_ARGUMENT indicates one of the following:
ptr was a NULL pointer.
size was invalid.
ptr refers to a memory region overlapping an existing registered memory region. This error does not apply to CUDA device pointers with driver version >= 2006, except for QNX Safety: in that case, allocations made via CUDA are automatically registered and unregistered with CUPVA, so it is not necessary to call this API or CupvaMemUnregister explicitly before using the CUDA device pointer with CUPVA. To preserve compatibility, this API returns success even if the registered CUDA device pointer aliases a CUDA device pointer region already registered, either via this API or automatically by the driver.
CUPVA_INTERNAL_ERROR if storing CUDA pointer mapping failed.
CUPVA_DRIVER_API_ERROR if getting a device pointer failed due to a driver API error. This error does not apply to CUDA device pointers with driver version >= 2006, except for QNX Safety.
CUPVA_NOT_ALLOWED_IN_OPERATIONAL_STATE if called when NVIDIA DRIVE OS VM state is “Operational”
CUPVA_CUDA_DISABLED if CUDA interop has been disabled for this process
CUPVA_UNSUPPORTED_FEATURE if the memory being registered is of type HOST.
-
cupvaError_t CupvaMemUnregister(const void *const ptr)#
Unregisters a pointer from CUPVA space.
This API should be used only to unregister CUDA or HOST device pointers registered using CupvaMemRegister. In safety-critical systems, this API must be called only during deinit.
The user should call CupvaMemUnregister for each explicitly registered address range before freeing the underlying buffer (e.g. with cudaFree or the OS free()). Behavior is undefined if this requirement is not followed.
When using driver versions >= 2006, except for QNX Safety:
CUDA device pointers will automatically be registered/unregistered with CUPVA if CUDA interop is enabled.
Calling CupvaMemRegister/CupvaMemUnregister on CUDA device pointers is not necessary. These APIs will have no effect in this case and will not generate an error in order to preserve compatibility of user code.
Usage considerations
Allowed context for the API call
Thread-safe: No
API group
Init: No
Runtime: No
De-Init: Yes
- Parameters:
ptr – [in] The device pointer pointing to the registered memory.
- Returns:
cupvaError_t The completion status of the operation. Possible values are:
CUPVA_ERROR_NONE if the operation was successful.
CUPVA_INVALID_ARGUMENT indicates one of the following:
ptr is invalid.
CUPVA_DRIVER_API_ERROR if deallocation failed due to a driver API error. This error does not apply to CUDA device pointers with driver version >= 2006, except for QNX Safety.
CUPVA_NOT_ALLOWED_IN_OPERATIONAL_STATE if called when NVIDIA DRIVE OS VM state is “Operational”
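A hedged registration sketch in C using a CUDA allocation (header name assumed). Note that with driver versions >= 2006 (except QNX Safety) and CUDA interop enabled, the explicit register/unregister calls are unnecessary and act as no-ops:

#include <stdint.h>
#include <cuda_runtime.h>
#include <cupva_host.h> /* assumed header name */

void registerCudaBuffer(void)
{
    int64_t const size = 1 << 20;
    void *cudaPtr = NULL;
    if (cudaMalloc(&cudaPtr, (size_t)size) != cudaSuccess)
    {
        return;
    }
    if (CupvaMemRegister(cudaPtr, size, CUPVA_EXTERNAL_ALLOC_TYPE_CUDA) == CUPVA_ERROR_NONE)
    {
        /* ... use cudaPtr with cuPVA workloads ... */
        (void)CupvaMemUnregister(cudaPtr); /* must precede cudaFree */
    }
    (void)cudaFree(cudaPtr);
}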
-
cupvaError_t CupvaSetVPUPrintBufferSize(uint32_t const size)#
Set the size of VPU print buffer for the current context.
The actual size available may be more or less than requested, as some driver versions round the allocation up to the next power of 2 and then use part of the buffer for internal bookkeeping. This function should only be called when no stream in the current context contains pending commands. Even after waiting for all streams in the current context to become idle, this API can fail spuriously with CUPVA_OPERATION_PENDING, as the driver may still be cleaning up previous tasks. In this case, the operation should be retried.
If the size is set to 0, VPU print statements will not take effect.
This API is not available in safety builds.
Usage considerations
Allowed context for the API call
Thread-safe: No
API group
Init: Yes
Runtime: No
De-Init: No
- Parameters:
size – [in] The requested VPU print buffer size. May not exceed cupva::config::MAX_VPU_PRINT_BUFFER_SIZE.
- Returns:
cupvaError_t The completion status of the operation. Possible values are:
CUPVA_ERROR_NONE if the operation was successful.
CUPVA_DRIVER_API_ERROR or CUPVA_INVALID_ARGUMENT if the requested size was too large.
CUPVA_OPERATION_PENDING if the PVA driver is not yet idle
CUPVA_NOT_ALLOWED_IN_OPERATIONAL_STATE if called when NVIDIA DRIVE OS VM state is “Operational”
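A small sketch in C (header name assumed) that retries while the driver finishes cleaning up previous tasks, as suggested above; the retry bound is arbitrary:

#include <stdint.h>
#include <cupva_host.h> /* assumed header name */

void enableVpuPrints(uint32_t size)
{
    cupvaError_t err;
    int retries = 100; /* arbitrary bound */
    do
    {
        err = CupvaSetVPUPrintBufferSize(size);
    } while ((err == CUPVA_OPERATION_PENDING) && (--retries > 0));
}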
Typedefs#
-
typedef struct cupvaExecutableRec *cupvaExecutable_t#
A handle to a VPU binary.