NVIDIA DeepStream SDK API Reference
8.0 Release
#ifndef __NVDSINFER_CONTEXT_H__
#define __NVDSINFER_CONTEXT_H__

#define _PATH_MAX 4096
#define _MAX_CHANNELS 4
#define _MAX_STR_LENGTH 1024
#define NVDSINFER_MAX_BATCH_SIZE 1024
#define NVDSINFER_MIN_OUTPUT_BUFFERPOOL_SIZE 2

#define NvDsInferUffOrder _Pragma \
    ("GCC warning \"'NvDsInferUffOrder' macro is deprecated. Use NvDsInferTensorOrder instead.\"") \

float preClusterThreshold;
float postClusterThreshold;
float nmsIOUThreshold;

unsigned int numInputFrames;
unsigned int inputPitch;
void *returnFuncData;

unsigned int numInputTensors;
void *returnFuncData;

unsigned int mask_width;
unsigned int mask_height;
unsigned int mask_size;
float rotation_angle;

unsigned int numObjects;
unsigned int numAttributes;
unsigned int classes;
float *class_probability_map;

unsigned int numFrames;
void **outputDeviceBuffers;
unsigned int numOutputDeviceBuffers;
unsigned int numHostBuffers;

const char *key,
const char *value);

void *builderConfig);

_DS_DEPRECATED_(
    "NvDsInferContext_GetStatusName is deprecated. Use NvDsInferStatus2Str instead")

struct INvDsInferContext
virtual void fillLayersInfo(std::vector<NvDsInferLayerInfo> &layersInfo) = 0;
virtual const std::vector< std::vector<std::string> >& getLabels() = 0;
virtual void destroy() = 0;
virtual ~INvDsInferContext() {}

void *userCtx = nullptr,

unsigned int id,
unsigned int value);
int inputFromPreprocessedTensor
Boolean flag indicating that the caller will supply preprocessed tensors for inferencing.
NvDsInferStatus NvDsInferContext_Create(NvDsInferContextHandle *handle, NvDsInferContextInitParams *initParams, void *userCtx, NvDsInferContextLoggingFunc logFunc)
Creates a new NvDsInferContext object with specified initialization parameters.
int NvDsInferContext_SetDynamicProperty(NvDsInferContextInitParams *initParams, const char *key, const char *value)
Dynamic property management functions for extensible TensorRT/trtexec flags.
unsigned int workspaceSize
Maximum workspace size (in MB) used as a TensorRT build setting for the CUDA engine.
NvDsInferTensorOrder
Defines UFF input layer orders.
NvDsInferTensorOrder uffInputOrder
Holds the original input order for the UFF model.
@ NvDsInferNetworkType_Classifier
Specifies a classifier.
unsigned int NvDsInferContext_GetNumLayersInfo(NvDsInferContextHandle handle)
Gets the number of the bound layers of the inference engine in an NvDsInferContext instance.
char uffFilePath[_PATH_MAX]
Holds the pathname of the UFF model file.
char customBBoxParseFuncName[_MAX_STR_LENGTH]
Holds the name of the custom bounding box function in the custom library.
@ NvDsInferNetworkMode_INT8
int dumpIpTensor
Boolean flag indicating whether or not to dump raw input tensor data.
NvDsInferFormat
Defines color formats.
NvDsInferTensorOrder netInputOrder
Holds the original input order for the network.
double maxGPUMemPer
Maximum GPU memory that can be occupied while expanding the buffer pool.
char meanImageFilePath[_PATH_MAX]
Holds the pathname of the mean image file (PPM format).
char ** layerDevicePrecisions
Can be used to specify the device type and inference precision of layers.
unsigned int maxBatchSize
Holds the maximum number of frames to be inferred together in a batch.
@ NVDSINFER_CLUSTER_GROUP_RECTANGLES
char ** dynamicPropertyKeys
Dynamic properties for TensorRT/trtexec flags.
@ NvDsInferUffOrder_kNHWC
int copyInputToHostBuffers
Holds a Boolean; true if the input layer contents are to be copied to host memory for access by the application.
@ NvDsInferNetworkMode_FP16
@ NVDSINFER_CLUSTER_DBSCAN
NvDsInferNetworkType networkType
Holds the network type.
NvDsInferDimsCHW uffDimsCHW
Holds the input dimensions for the UFF model.
int overwriteOpTensor
Boolean flag indicating whether or not to overwrite raw output tensor data provided by the user into the buffer.
unsigned int gpuID
Holds the ID of the GPU which is to run the inference.
@ NvDsInferNetworkMode_BEST
char labelsFilePath[_PATH_MAX]
Holds the pathname of the labels file containing strings for the class labels.
Holds information parsed from segmentation network output for one frame.
unsigned int numDetectedClasses
Holds the number of classes detected by a detector network.
const char * NvDsInferContext_GetStatusName(NvDsInferStatus status)
Gets the string name of the status. Deprecated: use NvDsInferStatus2Str instead.
int disableOutputHostCopy
Boolean flag indicating that post-processing is done on the GPU when this flag is enabled, skipping the output copy to host memory.
void NvDsInferContext_ApplyDynamicPropertiesToBuilder(const NvDsInferContextInitParams *initParams, void *builderConfig)
Apply dynamic properties to TensorRT builder configuration.
void NvDsInferContext_ReleaseBatchOutput(NvDsInferContextHandle handle, NvDsInferContextBatchOutput *batchOutput)
Frees the memory associated with the batch output and releases the set of host buffers back to the context.
unsigned int outputBufferPoolSize
Holds the number of sets of output buffers (host and device) to be allocated.
int dlaCore
Holds the ID of the DLA core to use.
unsigned int uniqueID
Holds a unique identifier for the instance.
NvDsInferLogLevel
Enum for the log levels of NvDsInferContext.
char customBBoxInstanceMaskParseFuncName[_MAX_STR_LENGTH]
Holds the name of the bounding box and instance mask parse function in the custom library.
int useDBScan
Holds a Boolean; true if DBScan is to be used for object clustering, or false if OpenCV groupRectangles is to be used. Deprecated: use NvDsInferClusterMode instead.
NvDsInferFormat networkInputFormat
Holds the network input format.
@ NVDSINFER_CLUSTER_DBSCAN_NMS_HYBRID
#define _MAX_STR_LENGTH
Defines the maximum length of string parameters.
Holds information on all attributes classified by a classifier network for one frame.
float segmentationThreshold
@ NvDsInferNetworkType_Detector
Specifies a detector.
char uffInputBlobName[_MAX_STR_LENGTH]
Holds the name of the input layer for the UFF model.
NvDsInferNetworkMode
Defines internal data formats used by the inference engine.
Holds information about one batch to be inferred.
void(* NvDsInferContextReturnInputAsyncFunc)(void *data)
Defines a callback function type for asynchronously returning the input client buffers to the NvDsInferContext.
char customSegmentationParseFuncName[_MAX_STR_LENGTH]
Holds the name of segmentation parse function in the custom library.
@ NvDsInferFormat_RGB
Specifies 24-bit interleaved R-G-B format.
int dumpOpTensor
Boolean flag indicating whether or not to dump raw output tensor data.
NvDsInferClusterMode
Enum for clustering mode for detectors.
char protoFilePath[_PATH_MAX]
Holds the pathname of the prototxt file.
@ NvDsInferTensorOrder_kNC
@ NvDsInferNetworkMode_FP32
char ** outputLayerNames
Holds a pointer to an array of pointers to output layer names.
Holds information on all objects detected by a detector network in one frame.
unsigned int numOutputLayers
Holds the number of output layer names.
char customLibPath[_PATH_MAX]
Holds the pathname of the library containing custom methods required to support the network.
int warmupEngine
Boolean flag indicating whether to run TensorRT engine warmup during initialization.
Holds information about the model network.
float classifierThreshold
Holds the minimum confidence threshold for the classifier to consider a label valid.
Holds information about one layer in the model.
@ NvDsInferTensorOrder_kNHWC
NvDsInferDetectionParams * perClassDetectionParams
Holds per-class detection parameters.
char customEngineCreateFuncName[_MAX_STR_LENGTH]
Name of the custom engine creation function in the custom library.
struct INvDsInferContext * NvDsInferContextHandle
An opaque pointer type to be used as a handle for a context instance.
#define _PATH_MAX
Maximum length of a file path parameter.
float offsets[_MAX_CHANNELS]
Holds the per-channel offsets for mean subtraction.
int forceImplicitBatchDimension
For model parsers supporting both implicit batch dim and full dims, prefer to use implicit batch dim.
@ NvDsInferFormat_GRAY
Specifies 8-bit Luma format.
char tltModelKey[_MAX_STR_LENGTH]
Holds the string key for decoding the TLT encoded model.
unsigned int numOutputIOFormats
Holds number of output IO formats specified.
unsigned int numLayerDevicePrecisions
Holds number of layer device precisions specified.
void NvDsInferContext_ResetInitParams(NvDsInferContextInitParams *initParams)
Resets a context parameter structure to default values.
@ NvDsInferTensorOrder_kNCHW
int autoIncMem
Boolean flag indicating whether the buffer pool size is automatically increased when a bottleneck is encountered.
Holds the dimensions of a three-dimensional layer.
char modelEngineFilePath[_PATH_MAX]
Holds the pathname of the serialized model engine file.
@ NvDsInferFormat_RGBA
Specifies 32-bit interleaved R-G-B-A format.
#define _MAX_CHANNELS
Defines the maximum number of channels supported by the API for image input layers.
int overwriteIpTensor
Boolean flag indicating whether or not to overwrite raw input tensor data provided by the user into the buffer.
char ** dynamicPropertyValues
NvDsInferStatus NvDsInferContext_DequeueOutputBatch(NvDsInferContextHandle handle, NvDsInferContextBatchOutput *batchOutput)
Dequeues output for a batch of frames.
void NvDsInferContext_Destroy(NvDsInferContextHandle handle)
Destroys an NvDsInferContext instance and releases its resources.
char customClassifierParseFuncName[_MAX_STR_LENGTH]
Name of the custom classifier attribute parsing function in the custom library.
char modelFilePath[_PATH_MAX]
Holds the pathname of the caffemodel file.
#define _DS_DEPRECATED_(STR)
NvDsInferStatus NvDsInferContext_QueueInputBatch(NvDsInferContextHandle handle, NvDsInferContextBatchInput *batchInput)
Queues a batch of input frames for preprocessing and inferencing.
NvDsInferTensorOrder segmentationOutputOrder
Holds output order for segmentation network.
@ NvDsInferNetworkType_InstanceSegmentation
Specifies an instance segmentation network.
Holds the initialization parameters required for the NvDsInferContext interface.
@ NvDsInferFormat_Unknown
char ** outputIOFormats
Can be used to specify the format and datatype for bound output layers.
@ NvDsInferUffOrder_kNCHW
void NvDsInferContext_GetNetworkInfo(NvDsInferContextHandle handle, NvDsInferNetworkInfo *networkInfo)
Gets network input information.
unsigned int numDynamicProperties
float networkScaleFactor
Holds the normalization factor with which to scale the input pixels.
char ** opTensorFilePath
List of paths to the raw output tensor data that are going to be used to overwrite the different output buffers.
int useDLA
Holds a Boolean; true if DLA is to be used.
@ NvDsInferFormat_Tensor
NCHW planar.
void(* NvDsInferContextLoggingFunc)(NvDsInferContextHandle handle, unsigned int uniqueID, NvDsInferLogLevel logLevel, const char *logMessage, void *userCtx)
Type declaration for a logging callback.
@ NvDsInferNetworkType_Segmentation
Specifies a segmentation network.
void NvDsInferContext_FillLayersInfo(NvDsInferContextHandle handle, NvDsInferLayerInfo *layersInfo)
Fills an input vector with information about all of the bound layers of the inference engine in an NvDsInferContext instance.
char int8CalibrationFilePath[_PATH_MAX]
Holds the pathname of the INT8 calibration file.
char onnxFilePath[_PATH_MAX]
Holds the pathname of the ONNX model file.
Holds detection and bounding box grouping parameters.
int useStronglyTyped
Boolean flag indicating whether to enable strongly typed network mode.
Holds the information inferred by the network on one frame.
NvDsInferNetworkType
Defines network types.
@ NvDsInferFormat_BGR
Specifies 24-bit interleaved B-G-R format.
const char * NvDsInferContext_GetDynamicProperty(const NvDsInferContextInitParams *initParams, const char *key)
Get a dynamic property value.
NvDsInferDimsCHW inputDims
Holds the input dimensions for the model.
Holds the output for all of the frames in a batch (an array of frames), and related buffer information.
NvDsInferDimsCHW inferInputDims
Inference input dimensions for runtime engine.
@ NvDsInferNetworkType_Other
Specifies other.
Holds information about one detected object.
Holds information about one classified attribute.
char ipTensorFilePath[_PATH_MAX]
Path to the raw input tensor data that is going to be used to overwrite the buffer.
char tltEncodedModelFilePath[_PATH_MAX]
Holds the pathname of the TLT encoded model file.
char customNetworkConfigFilePath[_PATH_MAX]
Holds the pathname of the configuration file for custom network creation.
NvDsInferNetworkMode networkMode
Holds an internal data format specifier used by the inference engine.
const char * NvDsInferContext_GetLabel(NvDsInferContextHandle handle, unsigned int id, unsigned int value)
Gets the string label associated with the class ID for detectors, or with the attribute ID and attribute value for classifiers.
@ NvDsInferFormat_BGRx
Specifies 32-bit interleaved B-G-R-x format.
int NvDsInferContext_HasDynamicProperty(const NvDsInferContextInitParams *initParams, const char *key)
Check if a dynamic property exists.
NvDsInferClusterMode clusterMode
Holds the type of clustering mode.
void NvDsInferContext_ClearDynamicProperties(NvDsInferContextInitParams *initParams)
Clear all dynamic properties.
NvDsInferStatus
Enum for the status codes returned by NvDsInferContext.