NVIDIA DeepStream SDK API Reference

7.0 Release
infer_cuda_utils.h File Reference

Detailed Description

Header file declaring utility classes for CUDA memory management, CUDA streams, and CUDA events.

Definition in file infer_cuda_utils.h.


Data Structures

class  nvdsinferserver::CudaStream
 Wrapper class for CUDA streams; an illustrative sketch of the RAII pattern used by these classes follows the data-structures list below. More...
 
class  nvdsinferserver::CudaEvent
 Wrapper class for CUDA events. More...
 
class  nvdsinferserver::SysMem
 Base class for managing memory allocation. More...
 
class  nvdsinferserver::CudaDeviceMem
 Allocates and manages CUDA device memory. More...
 
class  nvdsinferserver::CudaHostMem
 Allocates and manages CUDA pinned memory. More...
 
class  nvdsinferserver::CpuMem
 Allocates and manages host memory. More...
 
class  nvdsinferserver::CudaTensorBuf
 A batch buffer with CUDA memory allocation. More...
 

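These classes follow the standard RAII pattern: the constructor acquires a CUDA resource (a stream, an event, a device or pinned allocation) and the destructor releases it. The sketch below is illustrative only; the class names (StreamSketch, DeviceMemSketch) and their interfaces are assumptions, not the actual nvdsinferserver declarations, and only standard CUDA runtime calls are used.

// Illustrative RAII sketch in the spirit of nvdsinferserver::CudaStream and
// nvdsinferserver::CudaDeviceMem. Names and interfaces here are hypothetical.
#include <cuda_runtime_api.h>
#include <cstddef>

class StreamSketch {
public:
    explicit StreamSketch(int priority = 0) {
        // Constructor acquires the stream ...
        cudaStreamCreateWithPriority(&m_Stream, cudaStreamNonBlocking, priority);
    }
    ~StreamSketch() {
        // ... destructor releases it, so the stream cannot leak.
        if (m_Stream)
            cudaStreamDestroy(m_Stream);
    }
    cudaStream_t get() const { return m_Stream; }

private:
    cudaStream_t m_Stream = nullptr;
};

class DeviceMemSketch {
public:
    explicit DeviceMemSketch(size_t bytes) : m_Size(bytes) {
        cudaMalloc(&m_Ptr, bytes);  // device allocation owned by this object
    }
    ~DeviceMemSketch() {
        if (m_Ptr)
            cudaFree(m_Ptr);
    }
    void* ptr() const { return m_Ptr; }
    size_t bytes() const { return m_Size; }

private:
    void* m_Ptr = nullptr;
    size_t m_Size = 0;
};

CudaHostMem and CpuMem follow the same ownership pattern with pinned and pageable host allocations respectively, while CudaTensorBuf pairs such an allocation with batch-tensor semantics.
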
Namespaces

 nvdsinferserver
 Header file for pre-processing CUDA kernels with normalization and mean subtraction required by nvdsinfer.
 

Functions

UniqCudaTensorBuf nvdsinferserver::createTensorBuf (const InferDims &dims, InferDataType dt, int batchSize, const std::string &name, InferMemType mt, int devId, bool initCuEvent)
 Create a tensor buffer of the specified memory type and dimensions on the given device; a usage sketch follows the function list below. More...
 
UniqCudaTensorBuf nvdsinferserver::createGpuTensorBuf (const InferDims &dims, InferDataType dt, int batchSize, const std::string &name="", int devId=0, bool initCuEvent=false)
 Create a CUDA device memory tensor buffer of specified dimensions on the given device. More...
 
UniqCudaTensorBuf nvdsinferserver::createCpuTensorBuf (const InferDims &dims, InferDataType dt, int batchSize, const std::string &name="", int devId=0, bool initCuEvent=false)
 Create a CUDA pinned memory tensor buffer of specified dimensions on the given device. More...
 
NvDsInferStatus nvdsinferserver::syncAllCudaEvents (const SharedBatchArray &bufList)
 Synchronize on all events associated with the batch buffer array. More...
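 

A typical use of the factory functions above is to allocate device and pinned tensor buffers for inference input/output, then synchronize on the CUDA events attached to a batch before its contents are read. The sketch below relies only on the signatures listed in this file; the tensor names, the choice of InferDataType::kFp32, and the assumption that UniqCudaTensorBuf behaves like a std::unique_ptr are illustrative, not taken from the header.

#include "infer_cuda_utils.h"  // declares createGpuTensorBuf, createCpuTensorBuf, syncAllCudaEvents

using namespace nvdsinferserver;

// Hypothetical helper: allocates one GPU and one pinned-host tensor of the same
// shape, using only the factory signatures listed in this file.
bool allocateTensors(const InferDims& dims, int batchSize, int devId,
                     UniqCudaTensorBuf& gpuBuf, UniqCudaTensorBuf& hostBuf)
{
    // InferDataType::kFp32 is assumed here as the element type; the buffer
    // names are arbitrary illustrative strings.
    gpuBuf = createGpuTensorBuf(dims, InferDataType::kFp32, batchSize,
                                "output_gpu", devId, /*initCuEvent=*/true);
    hostBuf = createCpuTensorBuf(dims, InferDataType::kFp32, batchSize,
                                 "output_host", devId, /*initCuEvent=*/false);
    // UniqCudaTensorBuf is assumed to be a std::unique_ptr-like handle that is
    // empty on allocation failure.
    return gpuBuf && hostBuf;
}

// Wait on all CUDA events recorded for a batch before its buffers are consumed.
NvDsInferStatus waitForBatch(const SharedBatchArray& batch)
{
    NvDsInferStatus status = syncAllCudaEvents(batch);
    if (status != NVDSINFER_SUCCESS) {
        // Propagate the error to the caller.
        return status;
    }
    return NVDSINFER_SUCCESS;
}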