TensorRT 10.6.0
nvinfer1::v_1_0::IGpuAllocator Class Reference  [abstract]

#include <NvInferRuntime.h>

Inheritance diagram for nvinfer1::v_1_0::IGpuAllocator:
  • inherits nvinfer1::IVersionedInterface
  • inherited by nvinfer1::v_1_0::IGpuAsyncAllocator

Public Member Functions

virtual TRT_DEPRECATED void * allocate (uint64_t const size, uint64_t const alignment, AllocatorFlags const flags) noexcept=0
 A thread-safe callback implemented by the application to handle acquisition of GPU memory.

 ~IGpuAllocator () override=default

 IGpuAllocator ()=default

virtual void * reallocate (void *const baseAddr, uint64_t alignment, uint64_t newSize) noexcept
 A thread-safe callback implemented by the application to resize an existing allocation.

virtual TRT_DEPRECATED bool deallocate (void *const memory) noexcept=0
 A thread-safe callback implemented by the application to handle release of GPU memory.

virtual void * allocateAsync (uint64_t const size, uint64_t const alignment, AllocatorFlags const flags, cudaStream_t stream) noexcept
 A thread-safe callback implemented by the application to handle stream-ordered acquisition of GPU memory.

virtual bool deallocateAsync (void *const memory, cudaStream_t stream) noexcept
 A thread-safe callback implemented by the application to handle stream-ordered release of GPU memory.

InterfaceInfo getInterfaceInfo () const noexcept override
 Return version information associated with this interface. Applications must not override this method.

- Public Member Functions inherited from nvinfer1::IVersionedInterface
virtual APILanguage getAPILanguage () const noexcept
 The language used to build the implementation of this interface.
 
virtual ~IVersionedInterface () noexcept=default
 

Additional Inherited Members

- Protected Member Functions inherited from nvinfer1::IVersionedInterface
 IVersionedInterface ()=default
 
 IVersionedInterface (IVersionedInterface const &)=default
 
 IVersionedInterface (IVersionedInterface &&)=default
 
IVersionedInterface & operator= (IVersionedInterface const &) &=default
 
IVersionedInterface & operator= (IVersionedInterface &&) &=default
 

Constructor & Destructor Documentation

◆ ~IGpuAllocator()

nvinfer1::v_1_0::IGpuAllocator::~IGpuAllocator ( )
[override, default]

◆ IGpuAllocator()

nvinfer1::v_1_0::IGpuAllocator::IGpuAllocator ( )
[default]

Member Function Documentation

◆ allocate()

virtual TRT_DEPRECATED void * nvinfer1::v_1_0::IGpuAllocator::allocate ( uint64_t const  size,
uint64_t const  alignment,
AllocatorFlags const  flags 
)
[pure virtual, noexcept]

A thread-safe callback implemented by the application to handle acquisition of GPU memory.

Parameters
  size       The size of the memory block required (in bytes).
  alignment  The required alignment of memory. Alignment will be zero or a power of 2 not exceeding the alignment guaranteed by cudaMalloc. Thus this allocator can be safely implemented with cudaMalloc/cudaFree. An alignment value of zero indicates that any alignment is acceptable.
  flags      Reserved for future use. In the current release, 0 will be passed.
Returns
If the allocation was successful, the start address of a device memory block of the requested size. If an allocation request of size 0 is made, nullptr must be returned. If an allocation request cannot be satisfied, nullptr must be returned. If a non-null address is returned, it is guaranteed to have the specified alignment.
Note
The implementation must guarantee thread safety for concurrent allocate/reallocate/deallocate requests.


Usage considerations

  • Allowed context for the API call

    • Thread-safe: Yes, this method is required to be thread-safe and may be called from multiple threads.
    Deprecated:
    Deprecated in TensorRT 10.0. Superseded by allocateAsync().

Implemented in nvinfer1::v_1_0::IGpuAsyncAllocator.
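The allocation contract above (a size-0 request returns nullptr, alignment is zero or a power of 2, and a non-null result honors that alignment) can be sketched with host memory standing in for device memory. The helper name `allocateLike` is hypothetical; a real IGpuAllocator override would call cudaMalloc here and pair it with cudaFree in deallocate().

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>

// Hypothetical helper illustrating the allocate() contract, with host memory
// standing in for device memory (cudaMalloc would replace the host call).
void* allocateLike(std::uint64_t size, std::uint64_t alignment) noexcept
{
    if (size == 0)
    {
        return nullptr; // contract: size-0 requests must return nullptr
    }
    // An alignment of zero means any alignment is acceptable.
    if (alignment == 0)
    {
        alignment = alignof(std::max_align_t);
    }
    // std::aligned_alloc requires the size to be a multiple of the alignment,
    // so round the request up; this keeps the returned block large enough.
    std::uint64_t const rounded = (size + alignment - 1) / alignment * alignment;
    return std::aligned_alloc(alignment, rounded); // nullptr on failure, per contract
}
```

Memory from this sketch is released with std::free; the corresponding deallocate() analogue in a real allocator would call cudaFree.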

◆ allocateAsync()

virtual void * nvinfer1::v_1_0::IGpuAllocator::allocateAsync ( uint64_t const size,
uint64_t const alignment,
AllocatorFlags const flags,
cudaStream_t stream
)
[inline, virtual, noexcept]

A thread-safe callback implemented by the application to handle stream-ordered acquisition of GPU memory.

The default behavior is to call method allocate(), which is synchronous and thus loses any performance benefits of asynchronous allocation. If you want the benefits of asynchronous allocation, see discussion of IGpuAsyncAllocator vs. IGpuAllocator in the documentation for nvinfer1::IGpuAllocator.

Parameters
  size       The size of the memory block required (in bytes).
  alignment  The required alignment of memory. Alignment will be zero or a power of 2 not exceeding the alignment guaranteed by cudaMalloc. Thus this allocator can be safely implemented with cudaMalloc/cudaFree. An alignment value of zero indicates that any alignment is acceptable.
  flags      Reserved for future use. In the current release, 0 will be passed.
  stream     Specifies the CUDA stream for asynchronous usage.
Returns
If the allocation was successful, the start address of a device memory block of the requested size. If an allocation request of size 0 is made, nullptr must be returned. If an allocation request cannot be satisfied, nullptr must be returned. If a non-null address is returned, it is guaranteed to have the specified alignment.
Note
The implementation must guarantee thread safety for concurrent allocate/reallocate/deallocate requests.


Usage considerations

  • Allowed context for the API call
    • Thread-safe: Yes, this method is required to be thread-safe and may be called from multiple threads.

Reimplemented in nvinfer1::v_1_0::IGpuAsyncAllocator.

◆ deallocate()

virtual TRT_DEPRECATED bool nvinfer1::v_1_0::IGpuAllocator::deallocate ( void *const memory )
[pure virtual, noexcept]

A thread-safe callback implemented by the application to handle release of GPU memory.

TensorRT may pass a nullptr to this function if it was previously returned by allocate().

Parameters
  memory  A memory address that was previously returned by an allocate() or reallocate() call of the same allocator object.
Returns
True if the acquired memory is released successfully.
Note
The implementation must guarantee thread safety for concurrent allocate/reallocate/deallocate requests.


Usage considerations

  • Allowed context for the API call
    • Thread-safe: Yes, this method is required to be thread-safe and may be called from multiple threads.
      Deprecated:
      Deprecated in TensorRT 10.0. Superseded by deallocateAsync().

Implemented in nvinfer1::v_1_0::IGpuAsyncAllocator.

◆ deallocateAsync()

virtual bool nvinfer1::v_1_0::IGpuAllocator::deallocateAsync ( void *const memory,
cudaStream_t stream
)
[inline, virtual, noexcept]

A thread-safe callback implemented by the application to handle stream-ordered release of GPU memory.

The default behavior is to call method deallocate(), which is synchronous and thus loses any performance benefits of asynchronous deallocation. If you want the benefits of asynchronous deallocation, see discussion of IGpuAsyncAllocator vs. IGpuAllocator in the documentation for nvinfer1::IGpuAllocator.

TensorRT may pass a nullptr to this function if it was previously returned by allocate().

Parameters
  memory  A memory address that was previously returned by an allocate() or reallocate() call of the same allocator object.
  stream  Specifies the CUDA stream for asynchronous usage.
Returns
True if the acquired memory is released successfully.
Note
The implementation must guarantee thread safety for concurrent allocate/reallocate/deallocate requests.
The implementation is not required to be asynchronous. It is permitted to synchronize, albeit doing so will lose the performance advantage of asynchronous deallocation. Either way, it is critical that it not actually free the memory until the current stream position is reached.


Usage considerations

  • Allowed context for the API call
    • Thread-safe: Yes, this method is required to be thread-safe and may be called from multiple threads.

Reimplemented in nvinfer1::v_1_0::IGpuAsyncAllocator.
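The key requirement in the note above is that the memory must not actually be freed before the stream reaches the current position. A hypothetical host-side analogue makes the ordering visible: releases are queued by the deallocateAsync() analogue and performed only when the "stream" is synchronized. A real implementation would instead rely on cudaFreeAsync, or defer the free with a host callback enqueued on the stream.

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

// Hypothetical host-side analogue of stream-ordered deallocation. Frees are
// recorded immediately but executed only at synchronization, mirroring the
// rule that memory must stay valid until the stream reaches this position.
class DeferredFreeQueue
{
public:
    // Analogue of deallocateAsync(): record the release without freeing yet.
    bool deallocateAsync(void* memory)
    {
        if (memory != nullptr)
        {
            mPending.push_back(memory);
        }
        return true; // release request accepted
    }

    // Analogue of the stream reaching the current position
    // (e.g. cudaStreamSynchronize): perform the deferred frees.
    void synchronize()
    {
        for (void* p : mPending)
        {
            std::free(p);
        }
        mPending.clear();
    }

    std::size_t pendingCount() const { return mPending.size(); }

private:
    std::vector<void*> mPending;
};
```

The queue-then-flush structure is the point of the sketch: a synchronous implementation that frees immediately is also permitted by the contract, but only because a synchronous free trivially happens after all prior work on the stream.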

◆ getInterfaceInfo()

InterfaceInfo nvinfer1::v_1_0::IGpuAllocator::getInterfaceInfo ( ) const
[inline, override, virtual, noexcept]

Return version information associated with this interface. Applications must not override this method.

Implements nvinfer1::IVersionedInterface.

Reimplemented in nvinfer1::v_1_0::IGpuAsyncAllocator.

◆ reallocate()

virtual void * nvinfer1::v_1_0::IGpuAllocator::reallocate ( void *const baseAddr,
uint64_t alignment,
uint64_t newSize
)
[inline, virtual, noexcept]

A thread-safe callback implemented by the application to resize an existing allocation.

Only allocations which were allocated with AllocatorFlag::kRESIZABLE will be resized.

Options are one of:

  • resize in place, leaving min(oldSize, newSize) bytes unchanged, and return the original address;
  • move min(oldSize, newSize) bytes to a new location of sufficient size and return its address;
  • return nullptr to indicate that the request could not be fulfilled.

If nullptr is returned, TensorRT will assume that reallocate() is not implemented, and that the allocation at baseAddr is still valid.

This method is made available for use cases where delegating the resize strategy to the application provides an opportunity to improve memory management. One possible implementation is to allocate a large virtual device buffer and progressively commit physical memory with cuMemMap. CU_MEM_ALLOC_GRANULARITY_RECOMMENDED is suggested in this case.

TensorRT may call reallocate() to increase the buffer by relatively small amounts.

Parameters
  baseAddr   The address of the original allocation, which will have been returned by previously calling allocate() or reallocate() on the same object.
  alignment  The alignment used by the original allocation. This will be the same value that was previously passed to the allocate() or reallocate() call that returned baseAddr.
  newSize    The new memory size required (in bytes).
Returns
The address of the reallocated memory, or nullptr. If a non-null address is returned, it is guaranteed to have the specified alignment.
Note
The implementation must guarantee thread safety for concurrent allocate/reallocate/deallocate requests.


Usage considerations

  • Allowed context for the API call
    • Thread-safe: Yes, this method is required to be thread-safe and may be called from multiple threads.
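The "move" option of the contract above can be sketched with host memory in place of device memory: allocate a new block of sufficient size, copy min(oldSize, newSize) bytes, and free the original. The helper name `reallocateLike` is hypothetical, and it takes oldSize explicitly for illustration; the real callback receives the alignment instead, so an implementation must track allocation sizes itself. An in-place alternative could grow a virtually reserved buffer with cuMemMap, as the text suggests.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Hypothetical sketch of the "move" option of the reallocate() contract, with
// host memory standing in for device memory. baseAddr is assumed to be a valid
// allocation of oldSize bytes, as guaranteed by the documented contract.
void* reallocateLike(void* baseAddr, std::uint64_t oldSize, std::uint64_t newSize) noexcept
{
    void* moved = std::malloc(newSize);
    if (moved == nullptr)
    {
        // Contract: when nullptr is returned, the allocation at baseAddr
        // remains valid and untouched.
        return nullptr;
    }
    // Preserve min(oldSize, newSize) bytes at the new location.
    std::memcpy(moved, baseAddr, std::min(oldSize, newSize));
    std::free(baseAddr);
    return moved;
}
```

A real implementation would also have to honor the original alignment when choosing the new location, which this host sketch inherits from std::malloc only up to alignof(std::max_align_t).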

The documentation for this class was generated from the following file: NvInferRuntime.h

  Copyright © 2024 NVIDIA Corporation