NVIDIA DeepStream SDK API Reference

6.4 Release
nvdsinfer Namespace Reference

Detailed Description

Copyright (c) 2018-2023, NVIDIA CORPORATION.

All rights reserved.

NVIDIA Corporation and its licensors retain all intellectual property and proprietary rights in and to this software, related documentation and any modifications thereto. Any use, reproduction, disclosure or distribution of this software and related documentation without an express license agreement from NVIDIA Corporation is strictly prohibited.

Data Structures

class  BackendContext
 Abstract interface for managing the actual inferencing implementation. More...
 
class  BaseModelParser
 ModelParser base. More...
 
struct  BuildParams
 Holds build parameters common to implicit batch dimension/full dimension networks. More...
 
class  CaffeModelParser
 Implementation of ModelParser for Caffe models, derived from BaseModelParser. More...
 
class  ClassifyPostprocessor
 Implementation of post-processing class for classification networks. More...
 
class  CudaBuffer
 Helper base class for managing CUDA-allocated buffers. More...
 
class  CudaDeviceBuffer
 CUDA device buffers. More...
 
class  CudaEvent
 Helper class for managing CUDA events. More...
 
class  CudaHostBuffer
 CUDA host buffers. More...
 
class  CudaStream
 Helper class for managing CUDA streams. More...
 
class  CustomModelParser
 Implementation of ModelParser for custom models. More...
 
class  DetectPostprocessor
 Implementation of post-processing class for object detection networks. More...
 
class  DlaFullDimTrtBackendContext
 Backend context for full dimensions network inferencing on DLA. More...
 
class  DlaImplicitTrtBackendContext
 Backend context for implicit batch dimension network inferencing on DLA. More...
 
class  DlLibHandle
 
struct  ExplicitBuildParams
 Holds build parameters required for full dimensions network. More...
 
class  FullDimTrtBackendContext
 Backend context for full dimensions network. More...
 
class  GuardQueue
 
struct  ImplicitBuildParams
 Holds build parameters required for implicit batch dimension network. More...
 
class  ImplicitTrtBackendContext
 Backend context for implicit batch dimension network. More...
 
class  InferBatchBuffer
 Abstract interface to manage a batched buffer for inference. More...
 
class  InferPostprocessor
 Base class for post-processing on inference output. More...
 
class  InferPreprocessor
 Provides pre-processing functionality like mean subtraction and normalization. More...
 
class  InstanceSegmentPostprocessor
 Implementation of post-processing class for instance segmentation networks. More...
 
struct  NvDsInferBatch
 Holds information for one batch for processing. More...
 
class  NvDsInferContextImpl
 Implementation of the INvDsInferContext interface. More...
 
class  OnnxModelParser
 Implementation of ModelParser for ONNX models, derived from BaseModelParser. More...
 
class  OtherPostprocessor
 
class  SegmentPostprocessor
 Implementation of post-processing class for segmentation networks. More...
 
class  SharedPtrWDestroy
 
class  TrtBackendContext
 Base class for implementations of the BackendContext interface. More...
 
class  TrtEngine
 Helper class written on top of nvinfer1::ICudaEngine. More...
 
class  TrtModelBuilder
 Helper class to build models and generate the TensorRT ICudaEngine required for inference. More...
 
class  UffModelParser
 Implementation of ModelParser for UFF models, derived from BaseModelParser. More...
 
class  UniquePtrWDestroy
 

Typedefs

using NvDsInferCudaEngineGetFcnDeprecated = decltype(&NvDsInferCudaEngineGet)
 
using ProfileDims = std::array< nvinfer1::Dims, nvinfer1::EnumMax< nvinfer1::OptProfileSelector >()>
 
using NvDsInferLoggingFunc = std::function< void(NvDsInferLogLevel, const char *msg)>
 

Functions

std::unique_ptr< TrtBackendContext > createBackendContext (const std::shared_ptr< TrtEngine > &engine)
 Create an instance of a BackendContext. More...
 
const char * safeStr (const char *str)
 
const char * safeStr (const std::string &str)
 
bool string_empty (const char *str)
 
bool file_accessible (const char *path)
 
bool file_accessible (const std::string &path)
 
std::string dims2Str (const nvinfer1::Dims &d)
 
std::string dims2Str (const NvDsInferDims &d)
 
std::string batchDims2Str (const NvDsInferBatchDims &d)
 
std::string dataType2Str (const nvinfer1::DataType type)
 
std::string dataType2Str (const NvDsInferDataType type)
 
std::string networkMode2Str (const NvDsInferNetworkMode type)
 
uint32_t getElementSize (NvDsInferDataType t)
 Get the size of the element from the data type. More...
 
nvinfer1::Dims ds2TrtDims (const NvDsInferDimsCHW &dims)
 
nvinfer1::Dims ds2TrtDims (const NvDsInferDims &dims)
 
NvDsInferDims trt2DsDims (const nvinfer1::Dims &dims)
 
nvinfer1::Dims CombineDimsBatch (const NvDsInferDims &dims, int batch)
 
void SplitFullDims (const nvinfer1::Dims &fullDims, NvDsInferDims &dims, int &batch)
 
void convertFullDims (const nvinfer1::Dims &fullDims, NvDsInferBatchDims &batchDims)
 
void normalizeDims (NvDsInferDims &dims)
 
bool hasWildcard (const nvinfer1::Dims &dims)
 
bool hasWildcard (const NvDsInferDims &dims)
 
bool operator<= (const nvinfer1::Dims &a, const nvinfer1::Dims &b)
 
bool operator> (const nvinfer1::Dims &a, const nvinfer1::Dims &b)
 
bool operator== (const nvinfer1::Dims &a, const nvinfer1::Dims &b)
 
bool operator!= (const nvinfer1::Dims &a, const nvinfer1::Dims &b)
 
bool operator<= (const NvDsInferDims &a, const NvDsInferDims &b)
 
bool operator> (const NvDsInferDims &a, const NvDsInferDims &b)
 
bool operator== (const NvDsInferDims &a, const NvDsInferDims &b)
 
bool operator!= (const NvDsInferDims &a, const NvDsInferDims &b)
 
bool isValidOutputFormat (const std::string &fmt)
 
bool isValidOutputDataType (const std::string &dataType)
 
nvinfer1::DataType str2DataType (const std::string &dataType)
 
uint32_t str2TensorFormat (const std::string &fmt)
 
bool validateIOTensorNames (const BuildParams &params, const nvinfer1::INetworkDefinition &network)
 
bool isValidDeviceType (const std::string &fmt)
 
bool isValidPrecisionType (const std::string &dataType)
 
nvinfer1::DataType str2PrecisionType (const std::string &dataType)
 
nvinfer1::DeviceType str2DeviceType (const std::string &deviceType)
 
void dsInferLogPrint__ (NvDsInferLogLevel level, const char *fmt,...)
 Prints a log message at the specified NvDsInferLogLevel, using a printf-style format string. More...
 

Variables

static const size_t kWorkSpaceSize = 450 * 1024 * 1024
 

Typedef Documentation

◆ NvDsInferCudaEngineGetFcnDeprecated

using nvdsinfer::NvDsInferCudaEngineGetFcnDeprecated = decltype(&NvDsInferCudaEngineGet)

◆ NvDsInferLoggingFunc

using nvdsinfer::NvDsInferLoggingFunc = std::function<void(NvDsInferLogLevel, const char* msg)>

Definition at line 48 of file nvdsinfer_context_impl.h.

◆ ProfileDims

using nvdsinfer::ProfileDims = std::array<nvinfer1::Dims, nvinfer1::EnumMax<nvinfer1::OptProfileSelector>()>

Definition at line 231 of file nvdsinfer_model_builder.h.

Function Documentation

◆ batchDims2Str()

std::string nvdsinfer::batchDims2Str ( const NvDsInferBatchDims &  d)

◆ CombineDimsBatch()

nvinfer1::Dims nvdsinfer::CombineDimsBatch ( const NvDsInferDims &  dims,
int  batch 
)

◆ convertFullDims()

void nvdsinfer::convertFullDims ( const nvinfer1::Dims &  fullDims,
NvDsInferBatchDims &  batchDims 
)
inline

Definition at line 242 of file nvdsinfer_func_utils.h.

References INFER_EXPORT_API::fullDims(), and SplitFullDims().

◆ createBackendContext()

std::unique_ptr<TrtBackendContext> nvdsinfer::createBackendContext ( const std::shared_ptr< TrtEngine > &  engine)

Create an instance of a BackendContext.

ImplicitTrtBackendContext - created when the TRT CudaEngine/network is built with implicit batch dimensions. FullDimTrtBackendContext - created when the TRT CudaEngine/network is built with full dimensions support. DlaTrtBackendContext - created when the TRT CudaEngine is built for DLA.

◆ dataType2Str() [1/2]

std::string nvdsinfer::dataType2Str ( const NvDsInferDataType  type)

◆ dataType2Str() [2/2]

std::string nvdsinfer::dataType2Str ( const nvinfer1::DataType  type)

◆ dims2Str() [1/2]

std::string nvdsinfer::dims2Str ( const NvDsInferDims &  d)

◆ dims2Str() [2/2]

std::string nvdsinfer::dims2Str ( const nvinfer1::Dims &  d)

◆ ds2TrtDims() [1/2]

nvinfer1::Dims nvdsinfer::ds2TrtDims ( const NvDsInferDims &  dims)

◆ ds2TrtDims() [2/2]

nvinfer1::Dims nvdsinfer::ds2TrtDims ( const NvDsInferDimsCHW &  dims)

◆ file_accessible() [1/2]

bool nvdsinfer::file_accessible ( const char *  path)
inline

Definition at line 95 of file nvdsinfer_func_utils.h.

Referenced by file_accessible().

◆ file_accessible() [2/2]

bool nvdsinfer::file_accessible ( const std::string &  path)
inline

Definition at line 101 of file nvdsinfer_func_utils.h.

References file_accessible().

◆ getElementSize()

uint32_t nvdsinfer::getElementSize ( NvDsInferDataType  t)
inline

Get the size of the element from the data type.

Definition at line 208 of file nvdsinfer_func_utils.h.

References dsInferError, FLOAT, HALF, INT32, and INT8.

◆ hasWildcard() [1/2]

bool nvdsinfer::hasWildcard ( const NvDsInferDims &  dims)

◆ hasWildcard() [2/2]

bool nvdsinfer::hasWildcard ( const nvinfer1::Dims &  dims)

◆ isValidDeviceType()

bool nvdsinfer::isValidDeviceType ( const std::string &  fmt)

◆ isValidOutputDataType()

bool nvdsinfer::isValidOutputDataType ( const std::string &  dataType)

◆ isValidOutputFormat()

bool nvdsinfer::isValidOutputFormat ( const std::string &  fmt)

◆ isValidPrecisionType()

bool nvdsinfer::isValidPrecisionType ( const std::string &  dataType)

◆ networkMode2Str()

std::string nvdsinfer::networkMode2Str ( const NvDsInferNetworkMode  type)

◆ normalizeDims()

void nvdsinfer::normalizeDims ( NvDsInferDims &  dims)

◆ operator!=() [1/2]

bool nvdsinfer::operator!= ( const NvDsInferDims &  a,
const NvDsInferDims &  b 
)

◆ operator!=() [2/2]

bool nvdsinfer::operator!= ( const nvinfer1::Dims &  a,
const nvinfer1::Dims &  b 
)

◆ operator<=() [1/2]

bool nvdsinfer::operator<= ( const NvDsInferDims &  a,
const NvDsInferDims &  b 
)

◆ operator<=() [2/2]

bool nvdsinfer::operator<= ( const nvinfer1::Dims &  a,
const nvinfer1::Dims &  b 
)

◆ operator==() [1/2]

bool nvdsinfer::operator== ( const NvDsInferDims &  a,
const NvDsInferDims &  b 
)

◆ operator==() [2/2]

bool nvdsinfer::operator== ( const nvinfer1::Dims &  a,
const nvinfer1::Dims &  b 
)

◆ operator>() [1/2]

bool nvdsinfer::operator> ( const NvDsInferDims &  a,
const NvDsInferDims &  b 
)

◆ operator>() [2/2]

bool nvdsinfer::operator> ( const nvinfer1::Dims &  a,
const nvinfer1::Dims &  b 
)

◆ safeStr() [1/2]

const char* nvdsinfer::safeStr ( const char *  str)
inline

Definition at line 80 of file nvdsinfer_func_utils.h.

Referenced by nvdsinfer::CustomModelParser::getModelName().

◆ safeStr() [2/2]

const char* nvdsinfer::safeStr ( const std::string &  str)
inline

Definition at line 85 of file nvdsinfer_func_utils.h.

◆ SplitFullDims()

void nvdsinfer::SplitFullDims ( const nvinfer1::Dims &  fullDims,
NvDsInferDims &  dims,
int &  batch 
)

Referenced by convertFullDims().

◆ str2DataType()

nvinfer1::DataType nvdsinfer::str2DataType ( const std::string &  dataType)

◆ str2DeviceType()

nvinfer1::DeviceType nvdsinfer::str2DeviceType ( const std::string &  deviceType)

◆ str2PrecisionType()

nvinfer1::DataType nvdsinfer::str2PrecisionType ( const std::string &  dataType)

◆ str2TensorFormat()

uint32_t nvdsinfer::str2TensorFormat ( const std::string &  fmt)

◆ string_empty()

bool nvdsinfer::string_empty ( const char *  str)
inline

Definition at line 90 of file nvdsinfer_func_utils.h.

Referenced by nvdsinfer::DlLibHandle::symbol().

◆ trt2DsDims()

NvDsInferDims nvdsinfer::trt2DsDims ( const nvinfer1::Dims &  dims)

◆ validateIOTensorNames()

bool nvdsinfer::validateIOTensorNames ( const BuildParams &  params,
const nvinfer1::INetworkDefinition &  network 
)

Variable Documentation

◆ kWorkSpaceSize

const size_t nvdsinfer::kWorkSpaceSize = 450 * 1024 * 1024
static

Definition at line 45 of file nvdsinfer_model_builder.h.