GetHostPointer

Fully qualified name: cupva::mem::GetHostPointer

Defined in src/host/cpp_api/include/cupva_host.hpp

void *cupva::mem::GetHostPointer(void *const devicePtr)

Get the host CPU-mapped pointer for the given PVA device pointer.

Returns a pointer to host-mapped device memory that was allocated by mem::Alloc(), imported by mem::ImportFromHostPtr() or nvsci::mem::Import(), or registered as a CUDA device pointer. The memory is automatically unmapped when the underlying device memory is freed.

Not all memory is mappable to a CPU pointer:

  • Memory allocated with CupvaMemAlloc or imported with CupvaImportFromHostPtr is always mappable.

  • Mapping CUDA memory to the host depends on CUDA’s access policies. Under current policies, pinned host memory (e.g. cudaMallocHost) is mappable; GPU-cached device memory (e.g. cudaMalloc) is not.

  • For legacy reasons, this API supports obtaining a CPU mapped host pointer to device pointers imported from NvSciBuf using CupvaMemImport, if the internal allocation allows CPU mapping of the buffer. This behavior is deprecated and will be removed in a future release. Users should instead use NvSciBuf APIs directly to map CPU host pointers for NvSciBuf allocations.
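
A minimal usage sketch follows. It assumes the CUPVA host header name shown above and that cupva::mem::Alloc() and cupva::mem::Free() take and return plain pointers as in the CUPVA host API; the 1 KiB size is illustrative only:

```cpp
#include <cupva_host.hpp> // CUPVA host API (include path assumed)
#include <cstring>

int main()
{
    // Allocate PVA-accessible device memory (size is illustrative).
    void *devPtr = cupva::mem::Alloc(1024);

    // Obtain the CPU-mapped view of the same buffer. Throws
    // cupva::Exception(InvalidArgument) if the pointer is not mappable.
    void *hostPtr = cupva::mem::GetHostPointer(devPtr);

    // The host pointer can now be read and written by CPU user code.
    std::memset(hostPtr, 0, 1024);

    // Freeing the device memory also removes the host mapping;
    // hostPtr must not be dereferenced after this point.
    cupva::mem::Free(devPtr);
    return 0;
}
```

Note that because the mapping is tied to the lifetime of the device allocation, the returned host pointer must not outlive the buffer it maps.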

Usage considerations

  • Allowed context for the API call

    • Thread-safe: Yes

  • API group

    • Init: Yes

    • Runtime: No

    • De-Init: No

Parameters:

devicePtr – [in] The input device pointer, which can be accessed only by the PVA.

Throws:
  • cupva::Exception(InvalidArgument) – The device pointer value or type is invalid, or the pointer is not mappable.

  • cupva::Exception(DriverAPIError) – Getting the CPU-mapped pointer failed due to a driver API error.

  • cupva::Exception(NotAllowedInOperationalState) – The current context is null, the default context does not exist, and the NVIDIA DRIVE OS VM state is “Operational”.

Returns:

The mapped host pointer, which can be accessed by CPU user code.