NVIDIA DeepStream SDK API Reference

7.0 Release
_NvDsInferContextInitParams Struct Reference

Detailed Description

Holds the initialization parameters required for the NvDsInferContext interface.

Definition at line 239 of file nvdsinfer_context.h.

Collaboration diagram for _NvDsInferContextInitParams:

Data Fields

unsigned int uniqueID
 Holds a unique identifier for the instance. More...
 
NvDsInferNetworkMode networkMode
 Holds an internal data format specifier used by the inference engine. More...
 
char protoFilePath [_PATH_MAX]
 Holds the pathname of the prototxt file. More...
 
char modelFilePath [_PATH_MAX]
 Holds the pathname of the caffemodel file. More...
 
char uffFilePath [_PATH_MAX]
 Holds the pathname of the UFF model file. More...
 
char onnxFilePath [_PATH_MAX]
 Holds the pathname of the ONNX model file. More...
 
char tltEncodedModelFilePath [_PATH_MAX]
 Holds the pathname of the TLT encoded model file. More...
 
char int8CalibrationFilePath [_PATH_MAX]
 Holds the pathname of the INT8 calibration file. More...
 
union {
   NvDsInferDimsCHW   inputDims
 Holds the input dimensions for the model. More...
 
   NvDsInferDimsCHW   uffDimsCHW
 Holds the input dimensions for the UFF model. More...
 
}
 Deprecated: use inferInputDims instead. More...
NvDsInferTensorOrder uffInputOrder
 Holds the original input order for the UFF model. More...
 
char uffInputBlobName [_MAX_STR_LENGTH]
 Holds the name of the input layer for the UFF model. More...
 
NvDsInferTensorOrder netInputOrder
 Holds the original input order for the network. More...
 
char tltModelKey [_MAX_STR_LENGTH]
 Holds the string key for decoding the TLT encoded model. More...
 
char modelEngineFilePath [_PATH_MAX]
 Holds the pathname of the serialized model engine file. More...
 
unsigned int maxBatchSize
 Holds the maximum number of frames to be inferred together in a batch. More...
 
char labelsFilePath [_PATH_MAX]
 Holds the pathname of the labels file containing strings for the class labels. More...
 
char meanImageFilePath [_PATH_MAX]
 Holds the pathname of the mean image file (PPM format). More...
 
float networkScaleFactor
 Holds the normalization factor with which to scale the input pixels. More...
 
NvDsInferFormat networkInputFormat
 Holds the network input format. More...
 
float offsets [_MAX_CHANNELS]
 Holds the per-channel offsets for mean subtraction. More...
 
unsigned int numOffsets
 
NvDsInferNetworkType networkType
 Holds the network type. More...
 
int useDBScan
 Holds a Boolean; true if DBScan is to be used for object clustering, or false if OpenCV groupRectangles is to be used. Deprecated: use NvDsInferClusterMode instead. More...
 
unsigned int numDetectedClasses
 Holds the number of classes detected by a detector network. More...
 
NvDsInferDetectionParams * perClassDetectionParams
 Holds per-class detection parameters. More...
 
float classifierThreshold
 Holds the minimum confidence threshold for the classifier to consider a label valid. More...
 
float segmentationThreshold
 
char ** outputLayerNames
 Holds a pointer to an array of pointers to output layer names. More...
 
unsigned int numOutputLayers
 Holds the number of output layer names. More...
 
char customLibPath [_PATH_MAX]
 Holds the pathname of the library containing custom methods required to support the network. More...
 
char customBBoxParseFuncName [_MAX_STR_LENGTH]
 Holds the name of the custom bounding box function in the custom library. More...
 
char customClassifierParseFuncName [_MAX_STR_LENGTH]
 Name of the custom classifier attribute parsing function in the custom library. More...
 
int copyInputToHostBuffers
 Holds a Boolean; true if the input layer contents are to be copied to host memory for access by the application. More...
 
unsigned int gpuID
 Holds the ID of the GPU which is to run the inference. More...
 
int useDLA
 Holds a Boolean; true if DLA is to be used. More...
 
int dlaCore
 Holds the ID of the DLA core to use. More...
 
unsigned int outputBufferPoolSize
 Holds the number of sets of output buffers (host and device) to be allocated. More...
 
char customNetworkConfigFilePath [_PATH_MAX]
 Holds the pathname of the configuration file for custom network creation. More...
 
char customEngineCreateFuncName [_MAX_STR_LENGTH]
 Name of the custom engine creation function in the custom library. More...
 
int forceImplicitBatchDimension
 For model parsers that support both implicit batch dimensions and full dimensions, prefer the implicit batch dimension. More...
 
unsigned int workspaceSize
 Maximum workspace size, in megabytes, used as a TensorRT build setting for the CUDA engine. More...
 
NvDsInferDimsCHW inferInputDims
 Inference input dimensions for runtime engine. More...
 
NvDsInferClusterMode clusterMode
 Holds the type of clustering mode. More...
 
char customBBoxInstanceMaskParseFuncName [_MAX_STR_LENGTH]
 Holds the name of the bounding box and instance mask parse function in the custom library. More...
 
char ** outputIOFormats
 Can be used to specify the format and datatype for bound output layers. More...
 
unsigned int numOutputIOFormats
 Holds number of output IO formats specified. More...
 
char ** layerDevicePrecisions
 Can be used to specify the device type and inference precision of layers. More...
 
unsigned int numLayerDevicePrecisions
 Holds number of layer device precisions specified. More...
 
NvDsInferTensorOrder segmentationOutputOrder
 Holds output order for segmentation network. More...
 
int inputFromPreprocessedTensor
 Boolean flag indicating that the caller will supply preprocessed tensors for inferencing. More...
 
int disableOutputHostCopy
 Boolean flag indicating whether post-processing runs on the GPU. If enabled, nvinfer returns a GPU buffer to the post-processing stage, and the user must write CUDA post-processing code. More...
 
int autoIncMem
 Boolean flag indicating whether the buffer pool size is automatically increased when it becomes a bottleneck. More...
 
double maxGPUMemPer
 Maximum GPU memory that can be occupied while expanding the buffer pool. More...
 
int dumpIpTensor
 Boolean flag indicating whether or not to dump raw input tensor data. More...
 
int dumpOpTensor
 Boolean flag indicating whether or not to dump raw output tensor data. More...
 
int overwriteIpTensor
 Boolean flag indicating whether or not to overwrite the inference input buffer with raw input tensor data provided by the user. More...
 
char ipTensorFilePath [_PATH_MAX]
 Path to the raw input tensor data that is going to be used to overwrite the buffer. More...
 
int overwriteOpTensor
 Boolean flag indicating whether or not to overwrite the output buffer with raw output tensor data provided by the user. More...
 
char ** opTensorFilePath
 List of paths to the raw output tensor data that are going to be used to overwrite the different output buffers. More...
 

Field Documentation

◆ autoIncMem

int _NvDsInferContextInitParams::autoIncMem

Boolean flag indicating whether the buffer pool size is automatically increased when it becomes a bottleneck.

Definition at line 425 of file nvdsinfer_context.h.

◆ classifierThreshold

float _NvDsInferContextInitParams::classifierThreshold

Holds the minimum confidence threshold for the classifier to consider a label valid.

Definition at line 329 of file nvdsinfer_context.h.

◆ clusterMode

NvDsInferClusterMode _NvDsInferContextInitParams::clusterMode

Holds the type of clustering mode.

Definition at line 387 of file nvdsinfer_context.h.

◆ copyInputToHostBuffers

int _NvDsInferContextInitParams::copyInputToHostBuffers

Holds a Boolean; true if the input layer contents are to be copied to host memory for access by the application.

Definition at line 351 of file nvdsinfer_context.h.

◆ customBBoxInstanceMaskParseFuncName

char _NvDsInferContextInitParams::customBBoxInstanceMaskParseFuncName[_MAX_STR_LENGTH]

Holds the name of the bounding box and instance mask parse function in the custom library.

Definition at line 391 of file nvdsinfer_context.h.

◆ customBBoxParseFuncName

char _NvDsInferContextInitParams::customBBoxParseFuncName[_MAX_STR_LENGTH]

Holds the name of the custom bounding box function in the custom library.

Definition at line 344 of file nvdsinfer_context.h.

◆ customClassifierParseFuncName

char _NvDsInferContextInitParams::customClassifierParseFuncName[_MAX_STR_LENGTH]

Name of the custom classifier attribute parsing function in the custom library.

Definition at line 347 of file nvdsinfer_context.h.

◆ customEngineCreateFuncName

char _NvDsInferContextInitParams::customEngineCreateFuncName[_MAX_STR_LENGTH]

Name of the custom engine creation function in the custom library.

Definition at line 371 of file nvdsinfer_context.h.

◆ customLibPath

char _NvDsInferContextInitParams::customLibPath[_PATH_MAX]

Holds the pathname of the library containing custom methods required to support the network.

Definition at line 341 of file nvdsinfer_context.h.

◆ customNetworkConfigFilePath

char _NvDsInferContextInitParams::customNetworkConfigFilePath[_PATH_MAX]

Holds the pathname of the configuration file for custom network creation.

This can be used to store custom properties required by the custom network creation function.

Definition at line 368 of file nvdsinfer_context.h.

◆ disableOutputHostCopy

int _NvDsInferContextInitParams::disableOutputHostCopy

Boolean flag indicating whether post-processing runs on the GPU. If enabled, nvinfer returns a GPU buffer to the post-processing stage, and the user must write CUDA post-processing code.

Definition at line 420 of file nvdsinfer_context.h.

◆ dlaCore

int _NvDsInferContextInitParams::dlaCore

Holds the ID of the DLA core to use.

Definition at line 359 of file nvdsinfer_context.h.

◆ dumpIpTensor

int _NvDsInferContextInitParams::dumpIpTensor

Boolean flag indicating whether or not to dump raw input tensor data.

Definition at line 433 of file nvdsinfer_context.h.

◆ dumpOpTensor

int _NvDsInferContextInitParams::dumpOpTensor

Boolean flag indicating whether or not to dump raw output tensor data.

Definition at line 436 of file nvdsinfer_context.h.

◆ forceImplicitBatchDimension

int _NvDsInferContextInitParams::forceImplicitBatchDimension

For model parsers that support both implicit batch dimensions and full dimensions, prefer the implicit batch dimension.

By default, full dims network mode is used.

Definition at line 376 of file nvdsinfer_context.h.

◆ gpuID

unsigned int _NvDsInferContextInitParams::gpuID

Holds the ID of the GPU which is to run the inference.

Definition at line 354 of file nvdsinfer_context.h.

◆ inferInputDims

NvDsInferDimsCHW _NvDsInferContextInitParams::inferInputDims

Inference input dimensions for runtime engine.

Definition at line 384 of file nvdsinfer_context.h.

◆ inputDims

NvDsInferDimsCHW _NvDsInferContextInitParams::inputDims

Holds the input dimensions for the model.

Definition at line 265 of file nvdsinfer_context.h.

◆ inputFromPreprocessedTensor

int _NvDsInferContextInitParams::inputFromPreprocessedTensor

Boolean flag indicating that the caller will supply preprocessed tensors for inferencing.

NvDsInferContext will skip preprocessing initialization steps and will not interpret network input layer dimensions.

Definition at line 414 of file nvdsinfer_context.h.

◆ (anonymous union)

union { ... } _NvDsInferContextInitParams::(anonymous)

Deprecated: use inferInputDims instead.

Definition at line 263 of file nvdsinfer_context.h.

◆ int8CalibrationFilePath

char _NvDsInferContextInitParams::int8CalibrationFilePath[_PATH_MAX]

Holds the pathname of the INT8 calibration file.

Required only when using INT8 mode.

Definition at line 261 of file nvdsinfer_context.h.

◆ ipTensorFilePath

char _NvDsInferContextInitParams::ipTensorFilePath[_PATH_MAX]

Path to the raw input tensor data that is going to be used to overwrite the buffer.

Definition at line 444 of file nvdsinfer_context.h.

◆ labelsFilePath

char _NvDsInferContextInitParams::labelsFilePath[_PATH_MAX]

Holds the pathname of the labels file containing strings for the class labels.

The labels file is optional. The file format is described in the custom models section of the DeepStream SDK documentation.

Definition at line 294 of file nvdsinfer_context.h.

◆ layerDevicePrecisions

char** _NvDsInferContextInitParams::layerDevicePrecisions

Can be used to specify the device type and inference precision of layers.

For each layer specified the format is "<layer-name>:<device-type>:<precision>"

Definition at line 403 of file nvdsinfer_context.h.

◆ maxBatchSize

unsigned int _NvDsInferContextInitParams::maxBatchSize

Holds the maximum number of frames to be inferred together in a batch.

The number of input frames in a batch must be less than or equal to this.

Definition at line 289 of file nvdsinfer_context.h.

◆ maxGPUMemPer

double _NvDsInferContextInitParams::maxGPUMemPer

Maximum GPU memory that can be occupied while expanding the buffer pool.

Definition at line 429 of file nvdsinfer_context.h.

◆ meanImageFilePath

char _NvDsInferContextInitParams::meanImageFilePath[_PATH_MAX]

Holds the pathname of the mean image file (PPM format).

File resolution must be equal to the network input resolution.

Definition at line 298 of file nvdsinfer_context.h.

◆ modelEngineFilePath

char _NvDsInferContextInitParams::modelEngineFilePath[_PATH_MAX]

Holds the pathname of the serialized model engine file.

When using the model engine file, other parameters required for creating the model engine are ignored.

Definition at line 284 of file nvdsinfer_context.h.

◆ modelFilePath

char _NvDsInferContextInitParams::modelFilePath[_PATH_MAX]

Holds the pathname of the caffemodel file.

Definition at line 251 of file nvdsinfer_context.h.

◆ netInputOrder

NvDsInferTensorOrder _NvDsInferContextInitParams::netInputOrder

Holds the original input order for the network.

Definition at line 276 of file nvdsinfer_context.h.

◆ networkInputFormat

NvDsInferFormat _NvDsInferContextInitParams::networkInputFormat

Holds the network input format.

Definition at line 304 of file nvdsinfer_context.h.

◆ networkMode

NvDsInferNetworkMode _NvDsInferContextInitParams::networkMode

Holds an internal data format specifier used by the inference engine.

Definition at line 246 of file nvdsinfer_context.h.

◆ networkScaleFactor

float _NvDsInferContextInitParams::networkScaleFactor

Holds the normalization factor with which to scale the input pixels.

Definition at line 301 of file nvdsinfer_context.h.

◆ networkType

NvDsInferNetworkType _NvDsInferContextInitParams::networkType

Holds the network type.

Definition at line 313 of file nvdsinfer_context.h.

◆ numDetectedClasses

unsigned int _NvDsInferContextInitParams::numDetectedClasses

Holds the number of classes detected by a detector network.

Definition at line 321 of file nvdsinfer_context.h.

◆ numLayerDevicePrecisions

unsigned int _NvDsInferContextInitParams::numLayerDevicePrecisions

Holds number of layer device precisions specified.

Definition at line 405 of file nvdsinfer_context.h.

◆ numOffsets

unsigned int _NvDsInferContextInitParams::numOffsets

Definition at line 310 of file nvdsinfer_context.h.

◆ numOutputIOFormats

unsigned int _NvDsInferContextInitParams::numOutputIOFormats

Holds number of output IO formats specified.

Definition at line 398 of file nvdsinfer_context.h.

◆ numOutputLayers

unsigned int _NvDsInferContextInitParams::numOutputLayers

Holds the number of output layer names.

Definition at line 336 of file nvdsinfer_context.h.

◆ offsets

float _NvDsInferContextInitParams::offsets[_MAX_CHANNELS]

Holds the per-channel offsets for mean subtraction.

This is an alternative to the mean image file. The number of offsets in the array must be equal to the number of input channels.

Definition at line 309 of file nvdsinfer_context.h.

◆ onnxFilePath

char _NvDsInferContextInitParams::onnxFilePath[_PATH_MAX]

Holds the pathname of the ONNX model file.

Definition at line 255 of file nvdsinfer_context.h.

◆ opTensorFilePath

char** _NvDsInferContextInitParams::opTensorFilePath

List of paths to the raw output tensor data that are going to be used to overwrite the different output buffers.

Definition at line 452 of file nvdsinfer_context.h.

◆ outputBufferPoolSize

unsigned int _NvDsInferContextInitParams::outputBufferPoolSize

Holds the number of sets of output buffers (host and device) to be allocated.

Definition at line 363 of file nvdsinfer_context.h.

◆ outputIOFormats

char** _NvDsInferContextInitParams::outputIOFormats

Can be used to specify the format and datatype for bound output layers.

For each layer specified the format is "<layer-name>:<data-type>:<format>"

Definition at line 396 of file nvdsinfer_context.h.

◆ outputLayerNames

char** _NvDsInferContextInitParams::outputLayerNames

Holds a pointer to an array of pointers to output layer names.

Definition at line 334 of file nvdsinfer_context.h.

◆ overwriteIpTensor

int _NvDsInferContextInitParams::overwriteIpTensor

Boolean flag indicating whether or not to overwrite the inference input buffer with raw input tensor data provided by the user.

Definition at line 440 of file nvdsinfer_context.h.

◆ overwriteOpTensor

int _NvDsInferContextInitParams::overwriteOpTensor

Boolean flag indicating whether or not to overwrite the output buffer with raw output tensor data provided by the user.

Definition at line 448 of file nvdsinfer_context.h.

◆ perClassDetectionParams

NvDsInferDetectionParams* _NvDsInferContextInitParams::perClassDetectionParams

Holds per-class detection parameters.

The array's size must be equal to numDetectedClasses.

Definition at line 325 of file nvdsinfer_context.h.

◆ protoFilePath

char _NvDsInferContextInitParams::protoFilePath[_PATH_MAX]

Holds the pathname of the prototxt file.

Definition at line 249 of file nvdsinfer_context.h.

◆ segmentationOutputOrder

NvDsInferTensorOrder _NvDsInferContextInitParams::segmentationOutputOrder

Holds output order for segmentation network.

Definition at line 408 of file nvdsinfer_context.h.

◆ segmentationThreshold

float _NvDsInferContextInitParams::segmentationThreshold

Definition at line 331 of file nvdsinfer_context.h.

◆ tltEncodedModelFilePath

char _NvDsInferContextInitParams::tltEncodedModelFilePath[_PATH_MAX]

Holds the pathname of the TLT encoded model file.

Definition at line 257 of file nvdsinfer_context.h.

◆ tltModelKey

char _NvDsInferContextInitParams::tltModelKey[_MAX_STR_LENGTH]

Holds the string key for decoding the TLT encoded model.

Definition at line 279 of file nvdsinfer_context.h.

◆ uffDimsCHW

NvDsInferDimsCHW _NvDsInferContextInitParams::uffDimsCHW

Holds the input dimensions for the UFF model.

Definition at line 267 of file nvdsinfer_context.h.

◆ uffFilePath

char _NvDsInferContextInitParams::uffFilePath[_PATH_MAX]

Holds the pathname of the UFF model file.

Definition at line 253 of file nvdsinfer_context.h.

◆ uffInputBlobName

char _NvDsInferContextInitParams::uffInputBlobName[_MAX_STR_LENGTH]

Holds the name of the input layer for the UFF model.

Definition at line 273 of file nvdsinfer_context.h.

◆ uffInputOrder

NvDsInferTensorOrder _NvDsInferContextInitParams::uffInputOrder

Holds the original input order for the UFF model.

Definition at line 271 of file nvdsinfer_context.h.

◆ uniqueID

unsigned int _NvDsInferContextInitParams::uniqueID

Holds a unique identifier for the instance.

This can be used to identify the instance that is generating log and error messages.

Definition at line 243 of file nvdsinfer_context.h.

◆ useDBScan

int _NvDsInferContextInitParams::useDBScan

Holds a Boolean; true if DBScan is to be used for object clustering, or false if OpenCV groupRectangles is to be used.

Deprecated: use NvDsInferClusterMode instead.

Definition at line 318 of file nvdsinfer_context.h.

◆ useDLA

int _NvDsInferContextInitParams::useDLA

Holds a Boolean; true if DLA is to be used.

Definition at line 357 of file nvdsinfer_context.h.

◆ workspaceSize

unsigned int _NvDsInferContextInitParams::workspaceSize

Maximum workspace size, in megabytes, used as a TensorRT build setting for the CUDA engine.

Definition at line 381 of file nvdsinfer_context.h.


The documentation for this struct was generated from the following file:

nvdsinfer_context.h