group gstreamer_nvinfer_context

Defines the DeepStream inference interface API.

In C++, defines the NvDsInferContext class.

The DeepStream inference API “NvDsInfer” provides methods to initialize and deinitialize the inference engine, pre-process the input frames as required by the network, and parse the output from the raw tensor buffers.

Both C and C++ interfaces are available, with the C interface being a simple wrapper over the C++ interface.

You can create an opaque handle to an instance of the context required by the API by calling the factory function createNvDsInferContext() or NvDsInferContext_Create(). Both functions accept an instance of NvDsInferContextInitParams to initialize the context. Both let you specify a logging callback to get detailed information about failures and warnings.

Initialization parameters allow you to configure the network data type, network type (Detector, Classifier, or Other), preprocessing parameters (mean subtraction and normalization), model-related parameters like Caffe/Uff/Onnx model file paths, output layer names, etc.
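As a minimal sketch of this flow in the C interface (the model file names are hypothetical, error handling is abbreviated, and field names follow nvdsinfer_context.h and may vary across DeepStream versions):

```c
#include <nvdsinfer_context.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    NvDsInferContextInitParams initParams;
    NvDsInferContextHandle ctx = NULL;

    /* Always reset the structure to default values before filling it in. */
    NvDsInferContext_ResetInitParams(&initParams);

    initParams.uniqueID = 1;
    initParams.networkMode = NvDsInferNetworkMode_FP16;
    initParams.networkType = NvDsInferNetworkType_Detector;
    initParams.maxBatchSize = 4;
    /* Hypothetical Caffe model paths, for illustration only. */
    strncpy(initParams.protoFilePath, "model.prototxt", _PATH_MAX - 1);
    strncpy(initParams.modelFilePath, "model.caffemodel", _PATH_MAX - 1);

    NvDsInferStatus status =
        NvDsInferContext_Create(&ctx, &initParams, NULL /* userCtx */,
                                NULL /* logFunc */);
    if (status != NVDSINFER_SUCCESS) {
        fprintf(stderr, "context creation failed: %d\n", status);
        return 1;
    }

    /* ... queue input batches and dequeue output batches here ... */

    NvDsInferContext_Destroy(ctx);
    return 0;
}
```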

Batches of frames can be queued for inferencing using NvDsInferContext::queueInputBatch() or NvDsInferContext_QueueInputBatch(). The input frame memory must be accessible to the GPU device configured during initialization. You can provide an asynchronous callback function to return the input buffers to the caller as soon as the input is consumed.
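A sketch of queueing one batch through the C interface (it assumes `ctx` is a valid handle and that `gpuFrames` and `pitch` come from the caller's capture pipeline; field names follow nvdsinfer_context.h):

```c
#include <nvdsinfer_context.h>
#include <string.h>

/* Queue a batch of 4 RGBA frames already resident in GPU-accessible memory. */
NvDsInferStatus
queue_batch(NvDsInferContextHandle ctx, void **gpuFrames, unsigned int pitch)
{
    NvDsInferContextBatchInput batchInput;
    memset(&batchInput, 0, sizeof(batchInput));

    batchInput.inputFormat = NvDsInferFormat_RGBA;
    batchInput.inputFrames = gpuFrames;   /* array of frame pointers */
    batchInput.numInputFrames = 4;
    batchInput.inputPitch = pitch;        /* row pitch of each frame, in bytes */
    batchInput.returnInputFunc = NULL;    /* or an async return callback */
    batchInput.returnFuncData = NULL;

    return NvDsInferContext_QueueInputBatch(ctx, &batchInput);
}
```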

Inference output can be dequeued using NvDsInferContext::dequeueOutputBatch() or NvDsInferContext_DequeueOutputBatch(). The order of dequeued outputs corresponds to the input queueing order. In case of failure, the output of the batch is lost. The dequeued output must be released back to the context using NvDsInferContext::releaseBatchOutput() or NvDsInferContext_ReleaseBatchOutput() to free the associated memory and return the output layer buffers for reuse by the context.
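The dequeue/release pairing might look like the following sketch for a detector network (struct and field names follow nvdsinfer_context.h; the release call is made only after a successful dequeue, since on failure the batch output is lost):

```c
#include <nvdsinfer_context.h>
#include <stdio.h>

/* Dequeue one batch of detector output, print per-frame object counts,
 * then return the buffers to the context. */
void
drain_one_batch(NvDsInferContextHandle ctx)
{
    NvDsInferContextBatchOutput batchOutput;

    if (NvDsInferContext_DequeueOutputBatch(ctx, &batchOutput) != NVDSINFER_SUCCESS)
        return;

    for (unsigned int i = 0; i < batchOutput.numFrames; i++) {
        NvDsInferFrameOutput *frame = &batchOutput.frames[i];
        if (frame->outputType == NvDsInferNetworkType_Detector)
            printf("frame %u: %u objects\n", i,
                   frame->detectionOutput.numObjects);
    }

    /* Required: release the host buffers for reuse by the context. */
    NvDsInferContext_ReleaseBatchOutput(ctx, &batchOutput);
}
```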

Detectors output an array of detected objects for each frame in the batch. Classifiers classify entire frames and output an array of attributes for each frame. Segmentation classifies each pixel in the frame. A special network type (Other) has been provided whose output layers are not parsed. The caller can parse the device and host output layer buffers. You can also use this network type with the Gst-infer plugin to flow the output buffers as metadata.

Other methods and functions retrieve the parsed labels from a labels file and the properties of all layers bound by the inference engine.

You can extend the Gst-nvinfer API using the custom method implementations. Refer to the Custom Method Implementations section for more details.

NvDsInferContext API common types and functions.

This section describes the common types and functions for both the C and C++ interfaces for the NvDsInferContext class.

enum NvDsInferNetworkMode

Defines internal data formats used by the inference engine.


enumerator NvDsInferNetworkMode_FP32
enumerator NvDsInferNetworkMode_INT8
enumerator NvDsInferNetworkMode_FP16
enum NvDsInferNetworkType

Defines network types.


enumerator NvDsInferNetworkType_Detector

Specifies a detector.

Detectors find objects in an input frame, along with their bounding-box coordinates and classes.

enumerator NvDsInferNetworkType_Classifier

Specifies a classifier.

Classifiers classify an entire frame into one of several classes.

enumerator NvDsInferNetworkType_Segmentation

Specifies a segmentation network.

A segmentation network classifies each pixel into one of several classes.

enumerator NvDsInferNetworkType_InstanceSegmentation

Specifies an instance segmentation network.

An instance segmentation network detects objects, with a bounding box and mask for each object, and their classes, in an input frame.

enumerator NvDsInferNetworkType_Other

Specifies other.

Output layers of an “other” network are not parsed by NvDsInferContext. This is useful for networks that produce custom output. Output can be parsed by the NvDsInferContext client or can be combined with the Gst-nvinfer feature to flow output tensors as metadata.

enum NvDsInferFormat

Defines color formats.


enumerator NvDsInferFormat_RGB

Specifies 24-bit interleaved R-G-B format.

enumerator NvDsInferFormat_BGR

Specifies 24-bit interleaved B-G-R format.

enumerator NvDsInferFormat_GRAY

Specifies 8-bit Luma format.

enumerator NvDsInferFormat_RGBA

Specifies 32-bit interleaved R-G-B-A format.

enumerator NvDsInferFormat_BGRx

Specifies 32-bit interleaved B-G-R-x format.

enumerator NvDsInferFormat_Tensor

Specifies an NCHW planar tensor format.

enumerator NvDsInferFormat_Unknown
enum NvDsInferTensorOrder

Defines UFF input layer orders.


enumerator NvDsInferTensorOrder_kNCHW
enumerator NvDsInferTensorOrder_kNHWC
enumerator NvDsInferTensorOrder_kNC
enumerator NvDsInferUffOrder_kNCHW
enumerator NvDsInferUffOrder_kNHWC
enumerator NvDsInferUffOrder_kNC

The NvDsInferUffOrder_* names are legacy aliases for the corresponding NvDsInferTensorOrder_* values and are deprecated.
enum NvDsInferClusterMode

Defines clustering modes for detectors.


typedef struct _NvDsInferContextInitParams NvDsInferContextInitParams

Holds the initialization parameters required for the NvDsInferContext interface.

typedef void (*NvDsInferContextReturnInputAsyncFunc)(void *data)

Defines a callback function type for asynchronously returning the input client buffers to the NvDsInferContext client.


typedef struct INvDsInferContext *NvDsInferContextHandle

An opaque pointer type to be used as a handle for a context instance.

typedef void (*NvDsInferContextLoggingFunc)(NvDsInferContextHandle handle, unsigned int uniqueID, NvDsInferLogLevel logLevel, const char *logMessage, void *userCtx)

Type declaration for a logging callback.

The callback logs NvDsInferContext messages.

  • [in] handle: The handle of the NvDsInferContext instance that generated the log.

  • [in] uniqueID: Unique ID of the NvDsInferContext instance that generated the log.

  • [in] logLevel: Level of the log.

  • [in] logMessage: A pointer to the log message string.

  • [in] userCtx: An opaque pointer to the user context, supplied when creating the NvDsInferContext instance.
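To make the contract concrete, here is a standalone sketch of a conforming callback. The handle and log-level types are minimal local stand-ins mirroring the typedefs in this section, so the snippet compiles without the SDK headers; real code should include nvdsinfer_context.h and drop the stand-ins.

```c
#include <stdio.h>

/* Minimal stand-ins so this sketch compiles without the SDK headers;
 * real code includes <nvdsinfer_context.h> instead. */
typedef struct INvDsInferContext *NvDsInferContextHandle;
typedef enum {
    NVDSINFER_LOG_ERROR,
    NVDSINFER_LOG_WARNING,
    NVDSINFER_LOG_INFO,
    NVDSINFER_LOG_DEBUG
} NvDsInferLogLevel;

typedef void (*NvDsInferContextLoggingFunc)(NvDsInferContextHandle handle,
        unsigned int uniqueID, NvDsInferLogLevel logLevel,
        const char *logMessage, void *userCtx);

/* Example callback: prefixes each message with the instance's unique ID
 * and counts received messages through userCtx. */
static void
logCallback(NvDsInferContextHandle handle, unsigned int uniqueID,
            NvDsInferLogLevel logLevel, const char *logMessage, void *userCtx)
{
    (void) handle;
    fprintf(stderr, "[nvdsinfer:%u] level=%d: %s\n",
            uniqueID, (int) logLevel, logMessage);
    if (userCtx)
        ++*(unsigned int *) userCtx;
}
```

The callback would be passed as the logFunc argument of NvDsInferContext_Create() (or createNvDsInferContext()), with userCtx forwarded unchanged to every invocation.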

void NvDsInferContext_ResetInitParams(NvDsInferContextInitParams *initParams)

Resets a context parameter structure to default values.

  • [in] initParams: A pointer to a context parameter structure.

const char *NvDsInferContext_GetStatusName(NvDsInferStatus status)

Gets the string name of the status. Deprecated; use NvDsInferStatus2Str() instead.


A pointer to a string containing the status’s name, or NULL if the status is unrecognized. Memory is owned by the function; the caller may not free it.

  • [in] status: An inference status code.


_PATH_MAX

Defines the maximum length of a file path parameter.

_MAX_CHANNELS

Defines the maximum number of channels supported by the API for image input layers.

_MAX_STR_LENGTH

Defines the maximum length of string parameters.

NVDSINFER_MAX_BATCH_SIZE

Defines the maximum batch size supported by nvdsinfer.

NVDSINFER_MIN_OUTPUT_BUFFERPOOL_SIZE

Defines the minimum number of sets of output buffers that must be allocated.


NvDsInferContext API C-interface

This section describes the C interface for the NvDsInferContext class.

NvDsInferStatus NvDsInferContext_Create(NvDsInferContextHandle *handle, NvDsInferContextInitParams *initParams, void *userCtx, NvDsInferContextLoggingFunc logFunc)

Creates a new NvDsInferContext object with specified initialization parameters.


NVDSINFER_SUCCESS if creation was successful, or an error status otherwise.

  • [out] handle: A pointer to an NvDsInferContext handle.

  • [in] initParams: A pointer to a parameter structure to be used to initialize the context.

  • [in] userCtx: An opaque pointer to a user context, to be supplied with callbacks generated by the NvDsInferContext instance.

  • [in] logFunc: A log callback for the instance.

void NvDsInferContext_Destroy(NvDsInferContextHandle handle)

Destroys an NvDsInferContext instance and releases its resources.

  • [in] handle: The handle to the NvDsInferContext instance to be destroyed.

NvDsInferStatus NvDsInferContext_QueueInputBatch(NvDsInferContextHandle handle, NvDsInferContextBatchInput *batchInput)

Queues a batch of input frames for preprocessing and inferencing.


See NvDsInferContext::queueInputBatch() for details.


NVDSINFER_SUCCESS if preprocessing and queueing were successful, or an error status otherwise.

  • [in] handle: A handle to an NvDsInferContext instance.

  • [in] batchInput: A pointer to a batch input structure.

NvDsInferStatus NvDsInferContext_DequeueOutputBatch(NvDsInferContextHandle handle, NvDsInferContextBatchOutput *batchOutput)

Dequeues output for a batch of frames.


See NvDsInferContext::dequeueOutputBatch() for details.


NVDSINFER_SUCCESS if dequeueing was successful, or an error status otherwise.

  • [in] handle: A handle to an NvDsInferContext instance.

  • [inout] batchOutput: A pointer to the batch output structure to which output is to be appended.

void NvDsInferContext_ReleaseBatchOutput(NvDsInferContextHandle handle, NvDsInferContextBatchOutput *batchOutput)

Frees the memory associated with the batch output and releases the set of host buffers back to the context for reuse.

  • [in] handle: A handle to an NvDsInferContext instance.

  • [in] batchOutput: A pointer to the batch output structure returned by NvDsInferContext_DequeueOutputBatch().

void NvDsInferContext_GetNetworkInfo(NvDsInferContextHandle handle, NvDsInferNetworkInfo *networkInfo)

Gets network input information.

  • [in] handle: A handle to an NvDsInferContext instance.

  • [inout] networkInfo: A pointer to an NvDsInferNetworkInfo structure.

unsigned int NvDsInferContext_GetNumLayersInfo(NvDsInferContextHandle handle)

Gets the number of bound layers of the inference engine in an NvDsInferContext instance.


The number of bound layers of the inference engine.

  • [in] handle: A handle to an NvDsInferContext instance.

void NvDsInferContext_FillLayersInfo(NvDsInferContextHandle handle, NvDsInferLayerInfo *layersInfo)

Fills an input array with information about all of the bound layers of the inference engine in an NvDsInferContext instance.

The size of the array must be at least the value returned by NvDsInferContext_GetNumLayersInfo().

  • [in] handle: A handle to an NvDsInferContext instance.

  • [inout] layersInfo: A pointer to an array of NvDsInferLayerInfo structures to be filled by the function.
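A sketch combining the two calls above (it assumes `ctx` is a valid handle; the NvDsInferLayerInfo fields used here, layerName and isInput, are declared in nvdsinfer.h):

```c
#include <nvdsinfer_context.h>
#include <stdio.h>
#include <stdlib.h>

/* List every layer bound by the inference engine. */
void
print_layers(NvDsInferContextHandle ctx)
{
    unsigned int numLayers = NvDsInferContext_GetNumLayersInfo(ctx);
    NvDsInferLayerInfo *layers = calloc(numLayers, sizeof(*layers));
    if (!layers)
        return;

    NvDsInferContext_FillLayersInfo(ctx, layers);
    for (unsigned int i = 0; i < numLayers; i++)
        printf("layer %u: %s (%s)\n", i, layers[i].layerName,
               layers[i].isInput ? "input" : "output");

    free(layers);
}
```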

const char *NvDsInferContext_GetLabel(NvDsInferContextHandle handle, unsigned int id, unsigned int value)

Gets the string label associated with the class ID for detectors and the attribute ID and attribute value for classifiers.

The string is owned by the context; the caller may not modify or free it.


A pointer to a string label. The memory is owned by the context.

  • [in] handle: A handle to an NvDsInferContext instance.

  • [in] id: Class ID for detectors, or attribute ID for classifiers.

  • [in] value: Attribute value for classifiers; set to 0 for detectors.
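A small sketch of a lookup for a detector (it assumes `ctx` is a valid handle; for a classifier, pass the attribute ID as `id` and the attribute value as `value` instead):

```c
#include <nvdsinfer_context.h>
#include <stdio.h>

/* Print a detector's label for a class ID. The `value` argument must be 0
 * for detectors. The returned string is owned by the context and must not
 * be modified or freed. */
void
print_class_label(NvDsInferContextHandle ctx, unsigned int classId)
{
    const char *label = NvDsInferContext_GetLabel(ctx, classId, 0);
    if (label)
        printf("class %u: %s\n", classId, label);
}
```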

struct NvDsInferDetectionParams
#include <nvdsinfer_context.h>

Holds detection and bounding box grouping parameters.

struct _NvDsInferContextInitParams
#include <nvdsinfer_context.h>

Holds the initialization parameters required for the NvDsInferContext interface.

struct NvDsInferContextBatchInput
#include <nvdsinfer_context.h>

Holds information about one batch to be inferred.

struct NvDsInferObject
#include <nvdsinfer_context.h>

Holds information about one detected object.

struct NvDsInferDetectionOutput
#include <nvdsinfer_context.h>

Holds information on all objects detected by a detector network in one frame.

struct NvDsInferClassificationOutput
#include <nvdsinfer_context.h>

Holds information on all attributes classified by a classifier network for one frame.

struct NvDsInferSegmentationOutput
#include <nvdsinfer_context.h>

Holds information parsed from segmentation network output for one frame.

struct NvDsInferFrameOutput
#include <nvdsinfer_context.h>

Holds the information inferred by the network on one frame.

struct NvDsInferContextBatchOutput
#include <nvdsinfer_context.h>

Holds the output for all of the frames in a batch (an array of frames) and related buffer information.