NVIDIA DeepStream SDK API Reference, 6.0 Release
Defines specification for Custom Method Implementations for custom models
Description: This file defines the API that implements custom methods required by the GStreamer Gst-nvinfer plugin to infer using custom models.
All custom functionality must be implemented in an independent shared library. The library is dynamically loaded (using dlopen()) by the plugin. It implements custom methods which are called as required. The custom library can be specified in the Gst-nvinfer configuration file by the custom-lib-name property.
This section describes the custom bounding box parsing function for custom detector models.
The custom parsing function should be of the type NvDsInferParseCustomFunc. The custom parsing function can be specified in the Gst-nvinfer configuration file by the properties parse-bbox-func-name (name of the parsing function) and custom-lib-name. The parse-func property must be set to 0.
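For illustration, a minimal Gst-nvinfer configuration fragment wiring up a custom parser might look like the following; the library path and function name here are hypothetical:

```ini
[property]
# Hypothetical shared library and parser function, for illustration only
custom-lib-name=/opt/nvidia/deepstream/lib/libmycustomparser.so
parse-bbox-func-name=NvDsInferParseCustomModel
parse-func=0
```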
The Gst-nvinfer plugin loads the library and looks for the custom parsing function symbol. The function is called after each inference call is executed.
You can call the macro CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE() after defining the function to validate the function definition.
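As a concrete illustration, the sketch below shows the shape of such a parsing function. The struct definitions are simplified stand-ins for the real DeepStream types declared in nvdsinfer.h (the actual structs carry more fields), and the single-output-tensor layout is invented purely for this example:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Simplified stand-ins for the real DeepStream types declared in
// nvdsinfer.h; the actual structs carry more fields.
struct NvDsInferNetworkInfo { unsigned int width, height, channels; };
struct NvDsInferLayerInfo {
    std::string layerName;
    const float *buffer;
    unsigned int numElements;
};
struct NvDsInferParseDetectionParams {
    unsigned int numClassesConfigured;
    std::vector<float> perClassPreclusterThreshold;
};
struct NvDsInferObjectDetectionInfo {
    unsigned int classId;
    float left, top, width, height, detectionConfidence;
};

// Toy parser assuming one output layer laid out as
// [classId, confidence, left, top, width, height] per detection.
extern "C" bool NvDsInferParseCustomToy(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const & /*networkInfo*/,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    if (outputLayersInfo.empty())
        return false;
    const NvDsInferLayerInfo &layer = outputLayersInfo[0];
    const unsigned int fields = 6;
    for (unsigned int i = 0; i + fields <= layer.numElements; i += fields) {
        const float *d = layer.buffer + i;
        unsigned int classId = static_cast<unsigned int>(d[0]);
        if (classId >= detectionParams.numClassesConfigured)
            continue;   // class id outside the configured range
        if (d[1] < detectionParams.perClassPreclusterThreshold[classId])
            continue;   // below the per-class confidence threshold
        objectList.push_back({classId, d[2], d[3], d[4], d[5], d[1]});
    }
    return true;
}
```

In a real library the function would be followed by CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomToy) to validate its prototype at compile time.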
For Caffe models, the library must implement NvDsInferPluginFactoryCaffeGet(). During model parsing, "nvinfer" looks for that function's symbol in the custom library. If the symbol is found, the plugin calls the function to get a pointer to the PluginFactory instance required for parsing.
If the IPluginFactory is needed during deserialization of CUDA engines, the library must implement NvDsInferPluginFactoryRuntimeGet().
Each Get function has a corresponding Destroy function which is called, if defined, when the returned PluginFactory is to be destroyed.
A library that implements this interface must use the same function names as the header file. Gst-nvinfer dynamically loads the library and looks for the same symbol names.
See the FasterRCNN sample provided with the SDK for a sample implementation of the interface.
By default, Gst-nvinfer works with networks having only one input layer for video frames. If a network has more than one input layer, the custom library can implement the NvDsInferInitializeInputLayers interface for initializing the other input layers. Gst-nvinfer assumes that the other input layers have static input information, and hence this method is called only once before the first inference.
See the FasterRCNN sample provided with the SDK for a sample implementation of the interface.
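A self-contained sketch of this interface is shown below. The structs are simplified stand-ins for the real DeepStream types (in the real API each layer's buffer is a GPU buffer), and the "im_info" layer layout follows the Faster R-CNN convention only as an assumed example:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Simplified stand-ins for the DeepStream types from nvdsinfer.h; in the
// real API each layer's buffer is a device (GPU) buffer.
struct NvDsInferNetworkInfo { unsigned int width, height, channels; };
struct NvDsInferLayerInfo {
    std::string layerName;
    float *buffer;
    unsigned int numElements;
};

// Fills a hypothetical "im_info" input layer (Faster R-CNN style) with a
// [height, width, scale] triple for every batch entry. Called once,
// before the first inference.
extern "C" bool NvDsInferInitializeInputLayers(
    std::vector<NvDsInferLayerInfo> const &inputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    unsigned int maxBatchSize)
{
    for (const NvDsInferLayerInfo &layer : inputLayersInfo) {
        if (layer.layerName != "im_info")
            continue;
        for (unsigned int b = 0; b < maxBatchSize; ++b) {
            layer.buffer[b * 3 + 0] = static_cast<float>(networkInfo.height);
            layer.buffer[b * 3 + 1] = static_cast<float>(networkInfo.width);
            layer.buffer[b * 3 + 2] = 1.0f;   // image scale
        }
        return true;
    }
    return false;   // expected input layer not found
}
```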
The "nvinfer" plugin supports two interfaces for creating and building custom networks not directly supported by nvinfer.
With the IModelParser / NvDsInferCreateModelParser interface, the custom library must derive from and implement IModelParser, an interface for parsing custom networks and building the TensorRT network (nvinfer1::INetworkDefinition). The "nvinfer" plugin then uses this TensorRT network to build the inference engine. The plugin looks for the symbol "NvDsInferCreateModelParser" in the library and calls the function to get an instance of the model parser implementation from the library.
Alternatively, you can use the custom engine creation function to build networks that are not natively supported by nvinfer. The function must be of the type NvDsInferEngineCreateCustomFunc. You can specify it in the nvinfer element configuration file using the property engine-create-func-name (name of the engine creation function) in addition to custom-lib-name.
The nvinfer plugin loads the custom library dynamically and looks for the engine creation symbol. The function is called only once, during initialization of the nvinfer plugin. The function must build and return the CudaEngine interface using the supplied nvinfer1::IBuilder instance. The builder instance is already configured with properties like MaxBatchSize, MaxWorkspaceSize, and INT8/FP16 precision parameters. The builder instance is managed by nvinfer; the function must not destroy it.
You can call the macro CHECK_CUSTOM_ENGINE_CREATE_FUNC_PROTOTYPE() after the function definition to validate the function definition.
Refer to the Yolo sample provided with the SDK for a sample implementation of both interfaces.
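To make the engine-creation flow concrete, here is a compilable sketch with minimal stand-ins for the TensorRT and DeepStream types; the real interfaces live in NvInfer.h and are far richer, and the buildEngine() call below is invented purely so the sketch runs:

```cpp
#include <cassert>

// Minimal stand-ins for the TensorRT and DeepStream types involved; the
// real interfaces live in NvInfer.h and carry far richer APIs.
// buildEngine() here is invented purely to make the sketch runnable.
namespace nvinfer1 {
enum class DataType { kFLOAT, kHALF, kINT8 };
struct ICudaEngine { int maxBatchSize; };
struct IBuilderConfig { };
struct IBuilder {
    int maxBatchSize = 1;
    int getMaxBatchSize() const { return maxBatchSize; }
    ICudaEngine *buildEngine() { return new ICudaEngine{maxBatchSize}; }
};
}
struct NvDsInferContextInitParams { };

// Sketch of an NvDsInferEngineCreateCustomFunc implementation: build an
// engine from the preconfigured builder and hand it back via cudaEngine.
// The builder is owned by nvinfer, so it is never destroyed here.
extern "C" bool NvDsInferEngineCreateCustom(
    nvinfer1::IBuilder *const builder,
    nvinfer1::IBuilderConfig *const builderConfig,
    const NvDsInferContextInitParams *const initParams,
    nvinfer1::DataType dataType,
    nvinfer1::ICudaEngine *&cudaEngine)
{
    (void)builderConfig; (void)initParams; (void)dataType;
    // A real implementation would populate an INetworkDefinition here
    // (parse weights, add layers) before building the engine.
    cudaEngine = builder->buildEngine();
    // The returned engine must not reduce the builder's MaxBatchSize.
    return cudaEngine != nullptr &&
           cudaEngine->maxBatchSize >= builder->getMaxBatchSize();
}
```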
Definition in file nvdsinfer_custom_impl.h.
Data Structures

struct NvDsInferParseDetectionParams
    Holds the detection parameters required for parsing objects.
union NvDsInferPluginFactoryCaffe
    Holds a pointer to a heap-allocated Plugin Factory object required during Caffe model parsing.
Macros

#define CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(customParseFunc)
    Validates a custom parser function definition.
#define CHECK_CUSTOM_INSTANCE_MASK_PARSE_FUNC_PROTOTYPE(customParseFunc)
    Validates a custom parser function definition.
#define CHECK_CUSTOM_CLASSIFIER_PARSE_FUNC_PROTOTYPE(customParseFunc)
    Validates the classifier custom parser function definition.
#define CHECK_CUSTOM_ENGINE_CREATE_FUNC_PROTOTYPE(customEngineCreateFunc)
    Validates a custom engine creator function definition.
Typedefs

typedef bool (*NvDsInferParseCustomFunc)(std::vector<NvDsInferLayerInfo> const &outputLayersInfo, NvDsInferNetworkInfo const &networkInfo, NvDsInferParseDetectionParams const &detectionParams, std::vector<NvDsInferObjectDetectionInfo> &objectList)
    Type definition for the custom bounding box parsing function.
typedef bool (*NvDsInferInstanceMaskParseCustomFunc)(std::vector<NvDsInferLayerInfo> const &outputLayersInfo, NvDsInferNetworkInfo const &networkInfo, NvDsInferParseDetectionParams const &detectionParams, std::vector<NvDsInferInstanceMaskInfo> &objectList)
    Type definition for the custom bounding box and instance mask parsing function.
typedef bool (*NvDsInferClassiferParseCustomFunc)(std::vector<NvDsInferLayerInfo> const &outputLayersInfo, NvDsInferNetworkInfo const &networkInfo, float classifierThreshold, std::vector<NvDsInferAttribute> &attrList, std::string &descString)
    Type definition for the custom classifier output parsing function.
typedef struct _NvDsInferContextInitParams NvDsInferContextInitParams
typedef bool (*NvDsInferEngineCreateCustomFunc)(nvinfer1::IBuilder *const builder, nvinfer1::IBuilderConfig *const builderConfig, const NvDsInferContextInitParams *const initParams, nvinfer1::DataType dataType, nvinfer1::ICudaEngine *&cudaEngine)
    Type definition for functions that build and return a CudaEngine for custom models.
Enumerations

enum NvDsInferPluginFactoryType { PLUGIN_FACTORY_V2 = 2 }
    Specifies the type of the Plugin Factory.
Functions

bool NvDsInferPluginFactoryCaffeGet(NvDsInferPluginFactoryCaffe &pluginFactory, NvDsInferPluginFactoryType &type)
    Gets a new instance of a Plugin Factory interface to be used during parsing of Caffe models.
void NvDsInferPluginFactoryCaffeDestroy(NvDsInferPluginFactoryCaffe &pluginFactory)
    Destroys a Plugin Factory instance created by NvDsInferPluginFactoryCaffeGet().
bool NvDsInferPluginFactoryRuntimeGet(nvinfer1::IPluginFactory *&pluginFactory)
    Returns a new instance of a Plugin Factory interface to be used during deserialization of CUDA engines.
void NvDsInferPluginFactoryRuntimeDestroy(nvinfer1::IPluginFactory *pluginFactory)
    Destroys a Plugin Factory instance created by NvDsInferPluginFactoryRuntimeGet().
bool NvDsInferInitializeInputLayers(std::vector<NvDsInferLayerInfo> const &inputLayersInfo, NvDsInferNetworkInfo const &networkInfo, unsigned int maxBatchSize)
    Initializes the input layers for inference.
bool NvDsInferCudaEngineGet(nvinfer1::IBuilder *builder, NvDsInferContextInitParams *initParams, nvinfer1::DataType dataType, nvinfer1::ICudaEngine *&cudaEngine) __attribute__((deprecated("Use 'engine-create-func-name' config parameter instead")))
    The NvDsInferCudaEngineGet interface is deprecated; replaced by the NvDsInferEngineCreateCustomFunc function.
IModelParser *NvDsInferCreateModelParser(const NvDsInferContextInitParams *initParams)
    Create a customized neural network parser for user-defined models.
#define CHECK_CUSTOM_CLASSIFIER_PARSE_FUNC_PROTOTYPE(customParseFunc)
Validates the classifier custom parser function definition.
Must be called after defining the function.
Definition at line 292 of file nvdsinfer_custom_impl.h.
#define CHECK_CUSTOM_ENGINE_CREATE_FUNC_PROTOTYPE(customEngineCreateFunc)
A macro that validates a custom engine creator function definition.
Call this macro after the function is defined.
Definition at line 344 of file nvdsinfer_custom_impl.h.
#define CHECK_CUSTOM_INSTANCE_MASK_PARSE_FUNC_PROTOTYPE(customParseFunc)
Validates a custom parser function definition.
Must be called after defining the function.
Definition at line 260 of file nvdsinfer_custom_impl.h.
#define CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(customParseFunc)
Validates a custom parser function definition.
Must be called after defining the function.
Definition at line 231 of file nvdsinfer_custom_impl.h.
typedef bool (*NvDsInferClassiferParseCustomFunc)(std::vector<NvDsInferLayerInfo> const &outputLayersInfo, NvDsInferNetworkInfo const &networkInfo, float classifierThreshold, std::vector<NvDsInferAttribute> &attrList, std::string &descString)

Type definition for the custom classifier output parsing function.

Parameters:
    [in] outputLayersInfo: A vector containing information on the output layers of the model.
    [in] networkInfo: Network information.
    [in] classifierThreshold: Classification confidence threshold.
    [out] attrList: A reference to a vector in which the function is to add the parsed attributes.
    [out] descString: A reference to a string object in which the function may place a description string.
Definition at line 281 of file nvdsinfer_custom_impl.h.
typedef struct _NvDsInferContextInitParams NvDsInferContextInitParams
Definition at line 301 of file nvdsinfer_custom_impl.h.
typedef bool (*NvDsInferEngineCreateCustomFunc)(nvinfer1::IBuilder *const builder, nvinfer1::IBuilderConfig *const builderConfig, const NvDsInferContextInitParams *const initParams, nvinfer1::DataType dataType, nvinfer1::ICudaEngine *&cudaEngine)
Type definition for functions that build and return a CudaEngine for custom models.
The implementation of this interface must build the nvinfer1::ICudaEngine instance using nvinfer1::IBuilder instance builder. The builder instance is managed by the caller; the implementation must not destroy it.
Properties like MaxBatchSize, MaxWorkspaceSize, INT8/FP16 precision parameters, and DLA parameters (if applicable) are set on the builder and builderConfig before they are passed to the interface. The corresponding Get functions of the nvinfer1::IBuilder and nvinfer1::IBuilderConfig interfaces can be used to get the property values.
The implementation must make sure not to reduce the MaxBatchSize of the returned CudaEngine.
Parameters:
    [in] builder: An nvinfer1::IBuilder instance.
    [in] builderConfig: An nvinfer1::IBuilderConfig instance.
    [in] initParams: A pointer to the structure to be used for initializing the NvDsInferContext instance.
    [in] dataType: Data precision.
    [out] cudaEngine: A pointer to a location where the function is to store a reference to the nvinfer1::ICudaEngine instance it has built.
Definition at line 334 of file nvdsinfer_custom_impl.h.
typedef bool (*NvDsInferInstanceMaskParseCustomFunc)(std::vector<NvDsInferLayerInfo> const &outputLayersInfo, NvDsInferNetworkInfo const &networkInfo, NvDsInferParseDetectionParams const &detectionParams, std::vector<NvDsInferInstanceMaskInfo> &objectList)
Type definition for the custom bounding box and instance mask parsing function.
Parameters:
    [in] outputLayersInfo: A vector containing information on the output layers of the model.
    [in] networkInfo: Network information.
    [in] detectionParams: Detection parameters required for parsing objects.
    [out] objectList: A reference to a vector in which the function is to add parsed objects and instance masks.
Definition at line 250 of file nvdsinfer_custom_impl.h.
typedef bool (*NvDsInferParseCustomFunc)(std::vector<NvDsInferLayerInfo> const &outputLayersInfo, NvDsInferNetworkInfo const &networkInfo, NvDsInferParseDetectionParams const &detectionParams, std::vector<NvDsInferObjectDetectionInfo> &objectList)
Type definition for the custom bounding box parsing function.
Parameters:
    [in] outputLayersInfo: A vector containing information on the output layers of the model.
    [in] networkInfo: Network information.
    [in] detectionParams: Detection parameters required for parsing objects.
    [out] objectList: A reference to a vector in which the function is to add parsed objects.
Definition at line 221 of file nvdsinfer_custom_impl.h.
Specifies the type of the Plugin Factory.
Enumerator:
    PLUGIN_FACTORY_V2: Specifies nvcaffeparser1::IPluginFactoryV2. Used only for Caffe models.
Definition at line 357 of file nvdsinfer_custom_impl.h.
IModelParser *NvDsInferCreateModelParser(const NvDsInferContextInitParams *initParams)
Create a customized neural network parser for user-defined models.
Users need to implement a new IModelParser class, with initParams referring to any model path and/or customNetworkConfigFilePath.

Parameters:
    [in] initParams: Initialization parameters containing model paths or config files.
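A sketch of a library-side implementation is shown below. INetworkDefinition, NvDsInferContextInitParams, and the IModelParser interface itself are simplified stand-ins for the real declarations; in the real API, parseModel() receives an nvinfer1::INetworkDefinition to populate with TensorRT layers:

```cpp
#include <cassert>
#include <string>

// Simplified stand-ins: the real IModelParser is declared in
// nvdsinfer_custom_impl.h, and parseModel() receives an
// nvinfer1::INetworkDefinition to populate with TensorRT layers.
struct INetworkDefinition { int numLayers = 0; };
struct NvDsInferContextInitParams { std::string customNetworkConfigFilePath; };

class IModelParser {
public:
    virtual ~IModelParser() {}
    virtual bool parseModel(INetworkDefinition &network) = 0;
    virtual const char *getModelName() const = 0;
};

// A toy parser that "builds" a one-layer network.
class ToyModelParser : public IModelParser {
public:
    // A real parser would keep initParams and read the model files it names.
    explicit ToyModelParser(const NvDsInferContextInitParams *) {}
    bool parseModel(INetworkDefinition &network) override {
        network.numLayers = 1;   // a real parser adds TensorRT layers here
        return true;
    }
    const char *getModelName() const override { return "toy-model"; }
};

// Symbol that the "nvinfer" plugin looks up in the custom library.
extern "C" IModelParser *NvDsInferCreateModelParser(
    const NvDsInferContextInitParams *initParams)
{
    return new ToyModelParser(initParams);
}
```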
bool NvDsInferCudaEngineGet(nvinfer1::IBuilder *builder, NvDsInferContextInitParams *initParams, nvinfer1::DataType dataType, nvinfer1::ICudaEngine *&cudaEngine) __attribute__((deprecated("Use 'engine-create-func-name' config parameter instead")))
The NvDsInferCudaEngineGet interface is deprecated and has been replaced by the NvDsInferEngineCreateCustomFunc function.
bool NvDsInferInitializeInputLayers(std::vector<NvDsInferLayerInfo> const &inputLayersInfo, NvDsInferNetworkInfo const &networkInfo, unsigned int maxBatchSize)
Initializes the input layers for inference.
This function is called only once, before the first inference call.
Parameters:
    [in] inputLayersInfo: A reference to a vector containing information on the input layers of the model. This does not contain the NvDsInferLayerInfo structure for the video frame input layer.
    [in] networkInfo: A reference to a network information structure.
    [in] maxBatchSize: The maximum batch size for inference. The input layer buffers are allocated for this batch size.
void NvDsInferPluginFactoryCaffeDestroy(NvDsInferPluginFactoryCaffe &pluginFactory)
Destroys a Plugin Factory instance created by NvDsInferPluginFactoryCaffeGet().
Parameters:
    [in] pluginFactory: A reference to the union that contains a pointer to the Plugin Factory instance returned by NvDsInferPluginFactoryCaffeGet().
bool NvDsInferPluginFactoryCaffeGet(NvDsInferPluginFactoryCaffe &pluginFactory, NvDsInferPluginFactoryType &type)
Gets a new instance of a Plugin Factory interface to be used during parsing of Caffe models.
The function must set the correct type and the correct field in the pluginFactory union, based on the type of the Plugin Factory (i.e., one of pluginFactory, pluginFactoryExt, or pluginFactoryV2).
Parameters:
    [out] pluginFactory: A reference to the union that contains a pointer to the Plugin Factory object.
    [out] type: Specifies the type of pluginFactory, i.e., which member of the pluginFactory union is valid.
void NvDsInferPluginFactoryRuntimeDestroy(nvinfer1::IPluginFactory *pluginFactory)
Destroys a Plugin Factory instance created by NvDsInferPluginFactoryRuntimeGet().
Parameters:
    [in] pluginFactory: A pointer to the Plugin Factory instance returned by NvDsInferPluginFactoryRuntimeGet().
bool NvDsInferPluginFactoryRuntimeGet(nvinfer1::IPluginFactory *&pluginFactory)
Returns a new instance of a Plugin Factory interface to be used during deserialization of CUDA engines.
Parameters:
    [out] pluginFactory: A reference to an nvinfer1::IPluginFactory* in which the function is to place a pointer to the instance.