NVIDIA DeepStream SDK API Reference

4.0.1 Release

nvdsinfer_custom_impl.h File Reference

Detailed Description

NVIDIA DeepStream Specification for Custom Method Implementations for custom models

Description: This file specifies the APIs and function definitions needed to implement the custom methods required by the DeepStream nvinfer GStreamer plugin to run inference with custom models.

All custom functionality must be implemented in an independent shared library. This library is dynamically loaded (via dlopen()) by "nvinfer", and the implemented custom methods are called as required. The custom library can be specified in the nvinfer element configuration file through the custom-lib-name property.

Custom Detector Output Parsing Function

This section describes the custom bounding box parsing function for custom detector models.

The custom parsing function must be of the type NvDsInferParseCustomFunc. It is specified in the nvinfer element configuration file by setting the parse-bbox-func-name property (the name of the parsing function) in addition to custom-lib-name, and by setting parse-func to 0 (custom).

The nvinfer plugin loads the library and looks for the custom parsing function symbol. The function is called after each inference call is executed.

The CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE() macro can be placed after the function definition to validate that the definition matches the NvDsInferParseCustomFunc prototype.
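
The following is a minimal sketch of such a parsing function. The function name, the assumed output tensor layout (a fixed number of candidate detections of seven floats each: batch index, class ID, confidence, and normalized x1/y1/x2/y2 coordinates), and the fixed detection count are illustrative assumptions; a real model requires its own decoding logic.

    #include <vector>
    #include "nvdsinfer_custom_impl.h"

    extern "C" bool NvDsInferParseCustomSample(
        std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
        NvDsInferNetworkInfo const &networkInfo,
        NvDsInferParseDetectionParams const &detectionParams,
        std::vector<NvDsInferObjectDetectionInfo> &objectList)
    {
      if (outputLayersInfo.empty() || outputLayersInfo[0].buffer == nullptr)
        return false;

      /* Assumed layout: 100 candidate detections, 7 floats each. */
      const float *out = static_cast<const float *>(outputLayersInfo[0].buffer);
      const int kNumDetections = 100;
      const int kFieldsPerDetection = 7;

      for (int i = 0; i < kNumDetections; ++i) {
        const float *det = out + i * kFieldsPerDetection;
        unsigned int classId = static_cast<unsigned int>(det[1]);
        float confidence = det[2];

        /* Drop classes outside the configured range and low-confidence boxes. */
        if (classId >= detectionParams.numClassesConfigured ||
            confidence < detectionParams.perClassThreshold[classId])
          continue;

        NvDsInferObjectDetectionInfo object;
        object.classId = classId;
        object.detectionConfidence = confidence;
        /* Scale normalized coordinates to the network resolution. */
        object.left = det[3] * networkInfo.width;
        object.top = det[4] * networkInfo.height;
        object.width = (det[5] - det[3]) * networkInfo.width;
        object.height = (det[6] - det[4]) * networkInfo.height;
        objectList.push_back(object);
      }
      return true;
    }

    /* Validates that the definition above matches the
     * NvDsInferParseCustomFunc prototype. */
    CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomSample);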

TensorRT Plugin Factory interface for DeepStream

Based on the type of the model (Caffe or UFF), the library must implement one of the two functions, NvDsInferPluginFactoryCaffeGet or NvDsInferPluginFactoryUffGet. During model parsing, the DeepStream "nvinfer" plugin looks for one of these two symbols in the custom library, based on the model framework. If the symbol is found, the plugin calls the "Get" function to obtain a pointer to the PluginFactory instance required for parsing.

If the IPluginFactory is needed during deserialization of CUDA engines, the library must implement NvDsInferPluginFactoryRuntimeGet.

Each of the three Get functions has a corresponding Destroy function which, if defined, is called when the returned PluginFactory needs to be destroyed.

Libraries implementing this interface must use the same function names as in the header file. The "nvinfer" plugin will dynamically load the library and look for the same symbol names.

Refer to the FasterRCNN sample provided with the SDK for a sample implementation of this interface.
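
The following is a minimal sketch of the Caffe plugin factory entry points, assuming the NvDsInferPluginFactoryCaffe union exposes a pluginFactoryV2 member for the PLUGIN_FACTORY_V2 type. The SamplePluginFactory class is a hypothetical placeholder; a real implementation (such as the one in the FasterRCNN sample) creates actual IPluginV2 objects for the network's custom layers.

    #include "NvCaffeParser.h"
    #include "nvdsinfer_custom_impl.h"

    /* Hypothetical placeholder factory for the network's custom layers. */
    class SamplePluginFactory : public nvcaffeparser1::IPluginFactoryV2
    {
    public:
      bool isPluginV2(const char *layerName) override
      {
        return false; /* Placeholder: report which layers need custom plugins. */
      }

      nvinfer1::IPluginV2 *createPlugin(const char *layerName,
          const nvinfer1::Weights *weights, int nbWeights,
          const char *libNamespace) override
      {
        return nullptr; /* Placeholder: create the plugin for layerName. */
      }
    };

    extern "C" bool NvDsInferPluginFactoryCaffeGet(
        NvDsInferPluginFactoryCaffe &pluginFactory, NvDsInferPluginFactoryType &type)
    {
      /* Heap-allocate the factory; "nvinfer" holds the pointer until it calls
       * NvDsInferPluginFactoryCaffeDestroy. */
      type = PLUGIN_FACTORY_V2;
      pluginFactory.pluginFactoryV2 = new SamplePluginFactory;
      return true;
    }

    extern "C" void NvDsInferPluginFactoryCaffeDestroy(
        NvDsInferPluginFactoryCaffe &pluginFactory)
    {
      delete static_cast<SamplePluginFactory *>(pluginFactory.pluginFactoryV2);
      pluginFactory.pluginFactoryV2 = nullptr;
    }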

Input Layers Initialization

By default "nvinfer" works with networks having only one input layer for video frames. If the network has more than one input layer, the custom library can implement NvDsInferInitializeInputLayers interface for initializing the other input layers. "nvinfer" assumes that the other input layers have static input information and hence this method is called only once before the first inference.

Refer to the FasterRCNN sample provided with the SDK for a sample implementation of this interface.
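
The following is a minimal sketch of this interface. The extra input layer name "im_info" and its three-values-per-batch-entry layout follow the FasterRCNN-style convention and are assumptions here, as is the assumption that the buffer pointers refer to writable memory managed by "nvinfer".

    #include <cstring>
    #include <vector>
    #include "nvdsinfer_custom_impl.h"

    extern "C" bool NvDsInferInitializeInputLayers(
        std::vector<NvDsInferLayerInfo> const &inputLayersInfo,
        NvDsInferNetworkInfo const &networkInfo,
        unsigned int maxBatchSize)
    {
      for (const NvDsInferLayerInfo &layer : inputLayersInfo) {
        if (std::strcmp(layer.layerName, "im_info") != 0)
          continue;

        /* Fill (height, width, scale) once per batch entry. This is called
         * only once, before the first inference, since the contents are
         * static. */
        float *imInfo = static_cast<float *>(layer.buffer);
        for (unsigned int b = 0; b < maxBatchSize; ++b) {
          imInfo[b * 3 + 0] = networkInfo.height;
          imInfo[b * 3 + 1] = networkInfo.width;
          imInfo[b * 3 + 2] = 1.0f;
        }
        return true;
      }
      return false; /* Expected extra input layer not found. */
    }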

Interface for building Custom Networks

The NvDsInferCudaEngineGet interface can be used to create and build custom networks not directly supported by nvinfer.

The "nvinfer" plugin will dynamically load the custom library and look for the "NvDsInferCudaEngineGet" symbol.

The interface implementation shall build and return the CudaEngine interface using the supplied nvinfer1::IBuilder instance. The builder instance is already configured with properties such as MaxBatchSize, MaxWorkspaceSize, and INT8/FP16 precision parameters. The builder instance is managed by "nvinfer" and must not be destroyed by the implementation.

The path to the configuration file for the nvinfer instance is supplied to the interface. Custom properties required by the model can be added to the configuration file and parsed in the interface implementation.
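
The following is a minimal sketch of this interface. The network construction step is left as a placeholder; a real implementation would populate the network definition from its own model format (optionally reading custom properties from the configuration file referenced through initParams) before building the engine.

    #include "NvInfer.h"
    #include "nvdsinfer_custom_impl.h"

    extern "C" bool NvDsInferCudaEngineGet(nvinfer1::IBuilder *builder,
        NvDsInferContextInitParams *initParams, nvinfer1::DataType dataType,
        nvinfer1::ICudaEngine *&cudaEngine)
    {
      /* The builder is owned and pre-configured (max batch size, workspace
       * size, precision) by "nvinfer"; it must not be destroyed here. */
      nvinfer1::INetworkDefinition *network = builder->createNetwork();
      if (network == nullptr)
        return false;

      /* Placeholder: populate 'network' with the model's inputs and layers
       * here, for example by parsing a custom model format guided by
       * initParams and dataType. */

      cudaEngine = builder->buildCudaEngine(*network);
      network->destroy();
      return cudaEngine != nullptr;
    }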

Definition in file nvdsinfer_custom_impl.h.


Data Structures

struct  NvDsInferParseDetectionParams
 Holds the detection parameters required for parsing objects. More...
 
union  NvDsInferPluginFactoryCaffe
 Holds the pointer to a heap-allocated Plugin Factory object required during Caffe model parsing. More...
 
union  NvDsInferPluginFactoryUff
 Holds the pointer to a heap-allocated Plugin Factory object required during UFF model parsing. More...
 

Macros

#define CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(customParseFunc)
 Macro to validate the custom parser function definition. More...
 
#define CHECK_CUSTOM_CLASSIFIER_PARSE_FUNC_PROTOTYPE(customParseFunc)
 Macro to validate the classifier custom parser function definition. More...
 

Typedefs

typedef bool(* NvDsInferParseCustomFunc )(std::vector< NvDsInferLayerInfo > const &outputLayersInfo, NvDsInferNetworkInfo const &networkInfo, NvDsInferParseDetectionParams const &detectionParams, std::vector< NvDsInferObjectDetectionInfo > &objectList)
 Function definition for the custom bounding box parsing function. More...
 
typedef bool(* NvDsInferClassiferParseCustomFunc )(std::vector< NvDsInferLayerInfo > const &outputLayersInfo, NvDsInferNetworkInfo const &networkInfo, float classifierThreshold, std::vector< NvDsInferAttribute > &attrList, std::string &descString)
 Function definition for the custom classifier output parsing function. More...
 
typedef struct _NvDsInferContextInitParams NvDsInferContextInitParams
 

Enumerations

enum  NvDsInferPluginFactoryType {
  PLUGIN_FACTORY,
  PLUGIN_FACTORY_EXT,
  PLUGIN_FACTORY_V2
}
 Specifies the type of the Plugin Factory. More...
 

Functions

bool NvDsInferPluginFactoryCaffeGet (NvDsInferPluginFactoryCaffe &pluginFactory, NvDsInferPluginFactoryType &type)
 Returns an instance of a newly allocated Plugin Factory interface to be used during parsing of Caffe models. More...
 
void NvDsInferPluginFactoryCaffeDestroy (NvDsInferPluginFactoryCaffe &pluginFactory)
 Destroy the Plugin Factory instance returned in NvDsInferPluginFactoryCaffeGet. More...
 
bool NvDsInferPluginFactoryUffGet (NvDsInferPluginFactoryUff &pluginFactory, NvDsInferPluginFactoryType &type)
 Returns an instance of a newly allocated Plugin Factory interface to be used during parsing of UFF models. More...
 
void NvDsInferPluginFactoryUffDestroy (NvDsInferPluginFactoryUff &pluginFactory)
 Destroy the Plugin Factory instance returned in NvDsInferPluginFactoryUffGet. More...
 
bool NvDsInferPluginFactoryRuntimeGet (nvinfer1::IPluginFactory *&pluginFactory)
 Returns an instance of a newly allocated Plugin Factory interface to be used during deserialization of CUDA engines. More...
 
void NvDsInferPluginFactoryRuntimeDestroy (nvinfer1::IPluginFactory *pluginFactory)
 Destroy the PluginFactory instance returned in NvDsInferPluginFactoryRuntimeGet. More...
 
bool NvDsInferInitializeInputLayers (std::vector< NvDsInferLayerInfo > const &inputLayersInfo, NvDsInferNetworkInfo const &networkInfo, unsigned int maxBatchSize)
 Initialize the input layers for inference. More...
 
bool NvDsInferCudaEngineGet (nvinfer1::IBuilder *builder, NvDsInferContextInitParams *initParams, nvinfer1::DataType dataType, nvinfer1::ICudaEngine *&cudaEngine)
 Build and return CudaEngine for custom models. More...