NVIDIA DeepStream SDK API Reference

6.1 Release

nvdsinfer_custom_impl.h File Reference

Detailed Description

Defines the specification for custom method implementations for custom models.

Description: This file defines the API that implements custom methods required by the GStreamer Gst-nvinfer plugin to infer using custom models.

All custom functionality must be implemented in an independent shared library. The library is dynamically loaded (using dlopen()) by the plugin. It implements custom methods which are called as required. The custom library can be specified in the Gst-nvinfer configuration file by the custom-lib-name property.

Custom Detector Output Parsing Function

This section describes the custom bounding box parsing function for custom detector models.

The custom parsing function must be of type NvDsInferParseCustomFunc. It can be specified in the Gst-nvinfer configuration file through the properties parse-bbox-func-name (name of the parsing function) and custom-lib-name; parse-func must be set to 0.
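As an illustrative sketch, the corresponding entries in a Gst-nvinfer configuration file might look as follows. The library path and function name are hypothetical placeholders, and the exact configuration-file keys (e.g. custom-lib-path as the file form of the custom-lib-name property) should be verified against the Gst-nvinfer documentation for your DeepStream version:

```ini
[property]
# Hypothetical shared library built from the custom implementation
custom-lib-path=/path/to/libnvds_custom_parser.so
# Exported symbol name of the custom bounding box parsing function
parse-bbox-func-name=NvDsInferParseCustomMyNet
# Disable the built-in parsing function
parse-func=0
```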

The Gst-nvinfer plugin loads the library and looks for the custom parsing function symbol. The function is called after each inference call is executed.

You can call the macro CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE() after defining the function to validate the function definition.
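To make the shape of such a function concrete, the following is a minimal sketch, not the SDK's implementation. It assumes a hypothetical output layout of six floats per detection ([left, top, width, height, confidence, classId]) and uses stand-in struct definitions that carry only the fields used here; in a real library these types come from nvdsinfer.h, and CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomMyNet) would validate the signature:

```cpp
#include <vector>

// Stand-in definitions with only the fields this sketch uses; the real
// structures are declared in nvdsinfer.h and carry more members.
struct NvDsInferLayerInfo {
    void *buffer;              // raw output tensor data (host-accessible)
    unsigned int numElements;  // total number of elements in the buffer
};
struct NvDsInferNetworkInfo { unsigned int width, height, channels; };
struct NvDsInferParseDetectionParams {
    std::vector<float> perClassPreclusterThreshold;
};
struct NvDsInferObjectDetectionInfo {
    unsigned int classId;
    float left, top, width, height;
    float detectionConfidence;
};

// Hypothetical output layout: each detection is six floats,
// [left, top, width, height, confidence, classId].
extern "C" bool NvDsInferParseCustomMyNet(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const & /*networkInfo*/,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    if (outputLayersInfo.empty())
        return false;
    const float *data = static_cast<const float *>(outputLayersInfo[0].buffer);
    unsigned int numDets = outputLayersInfo[0].numElements / 6;
    for (unsigned int i = 0; i < numDets; ++i) {
        const float *d = data + i * 6;
        unsigned int classId = static_cast<unsigned int>(d[5]);
        float conf = d[4];
        // Skip detections below the per-class pre-cluster threshold.
        if (classId < detectionParams.perClassPreclusterThreshold.size() &&
            conf < detectionParams.perClassPreclusterThreshold[classId])
            continue;
        objectList.push_back({classId, d[0], d[1], d[2], d[3], conf});
    }
    return true;
}
```

The exported name, here NvDsInferParseCustomMyNet, is what would be passed to Gst-nvinfer as parse-bbox-func-name.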

TensorRT Plugin Factory interface for DeepStream

For Caffe models, the library must implement NvDsInferPluginFactoryCaffeGet(). During model parsing, "nvinfer" looks for that function's symbol in the custom library. If the symbol is found, the plugin calls the function to get a pointer to the PluginFactory instance required for parsing.

If the IPluginFactory is needed during deserialization of CUDA engines, the library must implement NvDsInferPluginFactoryRuntimeGet().

Each Get function has a corresponding Destroy function which is called, if defined, when the returned PluginFactory is to be destroyed.

A library that implements this interface must use the same function names as the header file. Gst-nvinfer dynamically loads the library and looks for the same symbol names.

See the FasterRCNN sample provided with the SDK for a sample implementation of the interface.

Input layer initialization

By default, Gst-nvinfer works with networks having only one input layer for video frames. If a network has more than one input layer, the custom library can implement the NvDsInferInitializeInputLayers interface for initializing the other input layers. Gst-nvinfer assumes that the other input layers have static input information, and hence this method is called only once before the first inference.

See the FasterRCNN sample provided with the SDK for a sample implementation of the interface.
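As a rough sketch of this interface (again with stand-in structs in place of the real nvdsinfer.h definitions, and assuming the layer buffers are host-accessible as in the SDK's FasterRCNN sample), an implementation might fill a hypothetical "im_info" input with [height, width, scale] for each batch element:

```cpp
#include <string>
#include <vector>

// Stand-in definitions (the real ones are declared in nvdsinfer.h).
struct NvDsInferLayerInfo {
    const char *layerName;
    void *buffer;  // pre-allocated for maxBatchSize elements
};
struct NvDsInferNetworkInfo { unsigned int width, height, channels; };

// Fills a hypothetical "im_info" input layer with [height, width, scale]
// for every batch element, as a FasterRCNN-style network expects.
// Called only once, before the first inference.
extern "C" bool NvDsInferInitializeInputLayers(
    std::vector<NvDsInferLayerInfo> const &inputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    unsigned int maxBatchSize)
{
    for (const auto &layer : inputLayersInfo) {
        if (std::string(layer.layerName) != "im_info")
            continue;
        float *data = static_cast<float *>(layer.buffer);
        for (unsigned int b = 0; b < maxBatchSize; ++b) {
            data[b * 3 + 0] = static_cast<float>(networkInfo.height);
            data[b * 3 + 1] = static_cast<float>(networkInfo.width);
            data[b * 3 + 2] = 1.0f;  // image scale factor
        }
        return true;
    }
    return false;  // expected input layer not found
}
```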

Interface for building Custom Networks

The "nvinfer" plugin supports two interfaces to create and build custom networks not directly supported by nvinfer.

  • IModelParser / NvDsInferCreateModelParser interface
  • NvDsInferEngineCreateCustomFunc interface

With the IModelParser / NvDsInferCreateModelParser interface, the custom library must derive from and implement IModelParser, an interface for parsing custom networks and building the TensorRT network (nvinfer1::INetworkDefinition). The "nvinfer" plugin then uses this TensorRT network to build the inference engine. The plugin looks for the symbol "NvDsInferCreateModelParser" in the library and calls the function to get an instance of the model parser implementation from the library.

Alternatively, you can use the custom engine creation function to build networks that are not natively supported by nvinfer. The function must be of the type NvDsInferEngineCreateCustomFunc. You can specify it in the nvinfer element configuration file using the property engine-create-func-name (name of the engine creation function) in addition to custom-lib-name.

The nvinfer plugin loads the custom library dynamically and looks for the engine creation symbol. The function is called only once during initialization of the nvinfer plugin. The function must build and return the CudaEngine interface using the supplied nvinfer1::IBuilder instance. The builder instance is already configured with properties like MaxBatchSize, MaxWorkspaceSize, INT8/FP16 precision parameters, etc. The builder instance is managed by nvinfer, and the function may not destroy it.

You can call the macro CHECK_CUSTOM_ENGINE_CREATE_FUNC_PROTOTYPE() after the function definition to validate the function definition.
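A hypothetical configuration fragment for this path might look as follows. The library path and function name are placeholders; the configuration-file key for the custom library is typically custom-lib-path, corresponding to the custom-lib-name property described above:

```ini
[property]
# Hypothetical library exporting an NvDsInferEngineCreateCustomFunc-typed function
custom-lib-path=/path/to/libnvds_custom_engine.so
# Exported symbol name of the engine creation function
engine-create-func-name=CreateCustomEngine
```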

Refer to the Yolo sample provided with the SDK for a sample implementation of both interfaces.

Definition in file nvdsinfer_custom_impl.h.


Data Structures

struct  NvDsInferParseDetectionParams
 Holds the detection parameters required for parsing objects. More...
 
union  NvDsInferPluginFactoryCaffe
 Holds a pointer to a heap-allocated Plugin Factory object required during Caffe model parsing. More...
 

Macros

#define CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(customParseFunc)
 Validates a custom parser function definition. More...
 
#define CHECK_CUSTOM_INSTANCE_MASK_PARSE_FUNC_PROTOTYPE(customParseFunc)
 Validates a custom parser function definition. More...
 
#define CHECK_CUSTOM_CLASSIFIER_PARSE_FUNC_PROTOTYPE(customParseFunc)
 Validates the classifier custom parser function definition. More...
 
#define CHECK_CUSTOM_ENGINE_CREATE_FUNC_PROTOTYPE(customEngineCreateFunc)
 A macro that validates a custom engine creator function definition. More...
 

Typedefs

typedef bool(* NvDsInferParseCustomFunc )(std::vector< NvDsInferLayerInfo > const &outputLayersInfo, NvDsInferNetworkInfo const &networkInfo, NvDsInferParseDetectionParams const &detectionParams, std::vector< NvDsInferObjectDetectionInfo > &objectList)
 Type definition for the custom bounding box parsing function. More...
 
typedef bool(* NvDsInferInstanceMaskParseCustomFunc )(std::vector< NvDsInferLayerInfo > const &outputLayersInfo, NvDsInferNetworkInfo const &networkInfo, NvDsInferParseDetectionParams const &detectionParams, std::vector< NvDsInferInstanceMaskInfo > &objectList)
 Type definition for the custom bounding box and instance mask parsing function. More...
 
typedef bool(* NvDsInferClassiferParseCustomFunc )(std::vector< NvDsInferLayerInfo > const &outputLayersInfo, NvDsInferNetworkInfo const &networkInfo, float classifierThreshold, std::vector< NvDsInferAttribute > &attrList, std::string &descString)
 Type definition for the custom classifier output parsing function. More...
 
typedef struct _NvDsInferContextInitParams NvDsInferContextInitParams
 
typedef bool(* NvDsInferEngineCreateCustomFunc )(nvinfer1::IBuilder *const builder, nvinfer1::IBuilderConfig *const builderConfig, const NvDsInferContextInitParams *const initParams, nvinfer1::DataType dataType, nvinfer1::ICudaEngine *&cudaEngine)
 Type definition for functions that build and return a CudaEngine for custom models. More...
 

Enumerations

enum  NvDsInferPluginFactoryType { PLUGIN_FACTORY_V2 = 2 }
 Specifies the type of the Plugin Factory. More...
 

Functions

bool NvDsInferPluginFactoryCaffeGet (NvDsInferPluginFactoryCaffe &pluginFactory, NvDsInferPluginFactoryType &type)
 Gets a new instance of a Plugin Factory interface to be used during parsing of Caffe models. More...
 
void NvDsInferPluginFactoryCaffeDestroy (NvDsInferPluginFactoryCaffe &pluginFactory)
 Destroys a Plugin Factory instance created by NvDsInferPluginFactoryCaffeGet(). More...
 
bool NvDsInferPluginFactoryRuntimeGet (nvinfer1::IPluginFactory *&pluginFactory)
 Returns a new instance of a Plugin Factory interface to be used during deserialization of CUDA engines. More...
 
void NvDsInferPluginFactoryRuntimeDestroy (nvinfer1::IPluginFactory *pluginFactory)
 Destroys a Plugin Factory instance created by NvDsInferPluginFactoryRuntimeGet(). More...
 
bool NvDsInferInitializeInputLayers (std::vector< NvDsInferLayerInfo > const &inputLayersInfo, NvDsInferNetworkInfo const &networkInfo, unsigned int maxBatchSize)
 Initializes the input layers for inference. More...
 
bool NvDsInferCudaEngineGet (nvinfer1::IBuilder *builder, NvDsInferContextInitParams *initParams, nvinfer1::DataType dataType, nvinfer1::ICudaEngine *&cudaEngine) __attribute__((deprecated("Use 'engine-create-func-name' config parameter instead")))
 The NvDsInferCudaEngineGet interface has been deprecated and has been replaced by NvDsInferEngineCreateCustomFunc function. More...
 
IModelParser * NvDsInferCreateModelParser (const NvDsInferContextInitParams *initParams)
 Create a customized neural network parser for user-defined models. More...
 

Macro Definition Documentation

#define CHECK_CUSTOM_CLASSIFIER_PARSE_FUNC_PROTOTYPE (   customParseFunc)
Value:
static void checkFunc_ ## customParseFunc (NvDsInferClassiferParseCustomFunc func = customParseFunc) \
{ checkFunc_ ## customParseFunc (); }; \
extern "C" bool customParseFunc (std::vector<NvDsInferLayerInfo> const &outputLayersInfo, \
NvDsInferNetworkInfo const &networkInfo, \
float classifierThreshold, \
std::vector<NvDsInferAttribute> &attrList, \
std::string &descString);

Validates the classifier custom parser function definition.

Must be called after defining the function.

Definition at line 292 of file nvdsinfer_custom_impl.h.

#define CHECK_CUSTOM_ENGINE_CREATE_FUNC_PROTOTYPE (   customEngineCreateFunc)
Value:
static void checkFunc_ ## customEngineCreateFunc (NvDsInferEngineCreateCustomFunc = customEngineCreateFunc) \
{ checkFunc_ ## customEngineCreateFunc(); }; \
extern "C" bool customEngineCreateFunc ( \
nvinfer1::IBuilder * const builder, \
nvinfer1::IBuilderConfig * const builderConfig, \
const NvDsInferContextInitParams * const initParams, \
nvinfer1::DataType dataType, \
nvinfer1::ICudaEngine *& cudaEngine);

A macro that validates a custom engine creator function definition.

Call this macro after the function is defined.

Definition at line 344 of file nvdsinfer_custom_impl.h.

#define CHECK_CUSTOM_INSTANCE_MASK_PARSE_FUNC_PROTOTYPE (   customParseFunc)
Value:
static void checkFunc_ ## customParseFunc (NvDsInferInstanceMaskParseCustomFunc func = customParseFunc) \
{ checkFunc_ ## customParseFunc (); }; \
extern "C" bool customParseFunc (std::vector<NvDsInferLayerInfo> const &outputLayersInfo, \
NvDsInferNetworkInfo const &networkInfo, \
NvDsInferParseDetectionParams const &detectionParams, \
std::vector<NvDsInferInstanceMaskInfo> &objectList);

Validates a custom parser function definition.

Must be called after defining the function.

Definition at line 260 of file nvdsinfer_custom_impl.h.

#define CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE (   customParseFunc)
Value:
static void checkFunc_ ## customParseFunc (NvDsInferParseCustomFunc func = customParseFunc) \
{ checkFunc_ ## customParseFunc (); }; \
extern "C" bool customParseFunc (std::vector<NvDsInferLayerInfo> const &outputLayersInfo, \
NvDsInferNetworkInfo const &networkInfo, \
NvDsInferParseDetectionParams const &detectionParams, \
std::vector<NvDsInferObjectDetectionInfo> &objectList);

Validates a custom parser function definition.

Must be called after defining the function.

Definition at line 231 of file nvdsinfer_custom_impl.h.

Typedef Documentation

typedef bool(* NvDsInferClassiferParseCustomFunc)(std::vector< NvDsInferLayerInfo > const &outputLayersInfo, NvDsInferNetworkInfo const &networkInfo, float classifierThreshold, std::vector< NvDsInferAttribute > &attrList, std::string &descString)

Type definition for the custom classifier output parsing function.

Parameters
[in]  outputLayersInfo  A vector containing information on the output layers of the model.
[in]  networkInfo  Network information.
[in]  classifierThreshold  Classification confidence threshold.
[out]  attrList  A reference to a vector in which the function is to add the parsed attributes.
[out]  descString  A reference to a string object in which the function may place a description string.

Definition at line 281 of file nvdsinfer_custom_impl.h.

typedef bool(* NvDsInferEngineCreateCustomFunc)(nvinfer1::IBuilder *const builder, nvinfer1::IBuilderConfig *const builderConfig, const NvDsInferContextInitParams *const initParams, nvinfer1::DataType dataType, nvinfer1::ICudaEngine *&cudaEngine)

Type definition for functions that build and return a CudaEngine for custom models.

Deprecated:
The NvDsInferCudaEngineGet interface is replaced by NvDsInferEngineCreateCustomFunc().

The implementation of this interface must build the nvinfer1::ICudaEngine instance using the supplied nvinfer1::IBuilder instance builder. The builder instance is managed by the caller; the implementation must not destroy it.

Properties such as MaxBatchSize, MaxWorkspaceSize, INT8/FP16 precision parameters, and DLA parameters (if applicable) are set on the builder and builderConfig before they are passed to this interface. The corresponding getter functions of the nvinfer1::IBuilder and nvinfer1::IBuilderConfig interfaces can be used to read the property values.

The implementation must make sure not to reduce the MaxBatchSize of the returned CudaEngine.

Parameters
[in]  builder  An nvinfer1::IBuilder instance.
[in]  builderConfig  An nvinfer1::IBuilderConfig instance.
[in]  initParams  A pointer to the structure to be used for initializing the NvDsInferContext instance.
[in]  dataType  Data precision.
[out]  cudaEngine  A pointer to a location where the function is to store a reference to the nvinfer1::ICudaEngine instance it has built.
Returns
True if the engine build was successful, or false otherwise.

Definition at line 334 of file nvdsinfer_custom_impl.h.

typedef bool(* NvDsInferInstanceMaskParseCustomFunc)(std::vector< NvDsInferLayerInfo > const &outputLayersInfo, NvDsInferNetworkInfo const &networkInfo, NvDsInferParseDetectionParams const &detectionParams, std::vector< NvDsInferInstanceMaskInfo > &objectList)

Type definition for the custom bounding box and instance mask parsing function.

Parameters
[in]  outputLayersInfo  A vector containing information on the output layers of the model.
[in]  networkInfo  Network information.
[in]  detectionParams  Detection parameters required for parsing objects.
[out]  objectList  A reference to a vector in which the function is to add parsed objects and instance masks.

Definition at line 250 of file nvdsinfer_custom_impl.h.

typedef bool(* NvDsInferParseCustomFunc)(std::vector< NvDsInferLayerInfo > const &outputLayersInfo, NvDsInferNetworkInfo const &networkInfo, NvDsInferParseDetectionParams const &detectionParams, std::vector< NvDsInferObjectDetectionInfo > &objectList)

Type definition for the custom bounding box parsing function.

Parameters
[in]  outputLayersInfo  A vector containing information on the output layers of the model.
[in]  networkInfo  Network information.
[in]  detectionParams  Detection parameters required for parsing objects.
[out]  objectList  A reference to a vector in which the function is to add parsed objects.

Definition at line 221 of file nvdsinfer_custom_impl.h.

Enumeration Type Documentation

Specifies the type of the Plugin Factory.

Enumerator
PLUGIN_FACTORY_V2 

Specifies nvcaffeparser1::IPluginFactoryV2.

Used only for Caffe models.

Definition at line 357 of file nvdsinfer_custom_impl.h.

Function Documentation

IModelParser* NvDsInferCreateModelParser ( const NvDsInferContextInitParams *  initParams)

Create a customized neural network parser for user-defined models.

The user needs to implement a new IModelParser class, with initParams referring to the model path and/or customNetworkConfigFilePath.

Parameters
[in]  initParams  Initialization parameters containing the model paths or configuration files.
Returns
An instance of the IModelParser implementation.
bool NvDsInferCudaEngineGet ( nvinfer1::IBuilder *  builder,
NvDsInferContextInitParams *  initParams,
nvinfer1::DataType  dataType,
nvinfer1::ICudaEngine *&  cudaEngine 
)

The NvDsInferCudaEngineGet interface has been deprecated and has been replaced by NvDsInferEngineCreateCustomFunc function.

bool NvDsInferInitializeInputLayers ( std::vector< NvDsInferLayerInfo > const &  inputLayersInfo,
NvDsInferNetworkInfo const &  networkInfo,
unsigned int  maxBatchSize 
)

Initializes the input layers for inference.

This function is called only once, before the first inference call.

Parameters
[in]  inputLayersInfo  A reference to a vector containing information on the input layers of the model. It does not contain the NvDsInferLayerInfo structure for the video frame input layer.
[in]  networkInfo  A reference to a network information structure.
[in]  maxBatchSize  The maximum batch size for inference. The input layer buffers are allocated for this batch size.
Returns
True if input layers are initialized successfully, or false otherwise.
void NvDsInferPluginFactoryCaffeDestroy ( NvDsInferPluginFactoryCaffe &  pluginFactory)

Destroys a Plugin Factory instance created by NvDsInferPluginFactoryCaffeGet().

Parameters
[in]  pluginFactory  A reference to the union that contains a pointer to the Plugin Factory instance returned by NvDsInferPluginFactoryCaffeGet().
bool NvDsInferPluginFactoryCaffeGet ( NvDsInferPluginFactoryCaffe &  pluginFactory,
NvDsInferPluginFactoryType &  type 
)

Gets a new instance of a Plugin Factory interface to be used during parsing of Caffe models.

The function must set the correct type and the correct field in the pluginFactory union, based on the type of the Plugin Factory (i.e., one of pluginFactory, pluginFactoryExt, or pluginFactoryV2).

Parameters
[out]  pluginFactory  A reference to the union that contains a pointer to the Plugin Factory object.
[out]  type  Specifies the type of pluginFactory, i.e., which member of the pluginFactory union is valid.
Returns
True if the Plugin Factory was created successfully, or false otherwise.
void NvDsInferPluginFactoryRuntimeDestroy ( nvinfer1::IPluginFactory *  pluginFactory)

Destroys a Plugin Factory instance created by NvDsInferPluginFactoryRuntimeGet().

Parameters
[in]  pluginFactory  A pointer to the Plugin Factory instance returned by NvDsInferPluginFactoryRuntimeGet().
bool NvDsInferPluginFactoryRuntimeGet ( nvinfer1::IPluginFactory *&  pluginFactory)

Returns a new instance of a Plugin Factory interface to be used during deserialization of CUDA engines.

Parameters
[out]  pluginFactory  A reference to nvinfer1::IPluginFactory* in which the function is to place a pointer to the instance.
Returns
True if the Plugin Factory was created successfully, or false otherwise.