Application Customization

The NVIDIA® DeepStream SDK on NVIDIA® Tesla® or NVIDIA® Jetson platforms can be customized to support custom neural networks for object detection and classification.
You can create your own model and use it with DeepStream by specifying the applicable configuration parameters in the [property] group of the nvinfer configuration file (for example, config_infer_primary.txt).
The configuration parameters that you must specify include the following (see the example after the list):
model-file (Caffe models)
proto-file (Caffe models)
uff-file (UFF models)
onnx-file (ONNX models)
model-engine-file, if already generated
int8-calib-file for INT8 mode
mean-file, if required
offsets, if required
maintain-aspect-ratio, if required
parse-bbox-func-name (detectors only)
parse-classifier-func-name (classifiers only)
custom-lib-path
output-blob-names (Caffe and UFF models)
network-type
model-color-format
process-mode
engine-create-func-name
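For illustration only, the [property] group below sketches how some of these keys might be set for a hypothetical Caffe-based primary detector with a custom parser; all file paths, the function name, and the library name are placeholders, and the exact set of keys depends on your model.

    [property]
    # Placeholder model files for a hypothetical Caffe detector
    model-file=/path/to/model.caffemodel
    proto-file=/path/to/model.prototxt
    model-engine-file=/path/to/model.caffemodel_b1_fp32.engine
    output-blob-names=output_bbox;output_cov
    # 0=RGB, 1=BGR, 2=GRAY
    model-color-format=0
    # 0=detector, 1=classifier
    network-type=0
    # 1=primary (full frames), 2=secondary (objects from the primary detector)
    process-mode=1
    # Custom bounding box parser implemented in the shared library below
    parse-bbox-func-name=NvDsInferParseCustomMyDetector
    custom-lib-path=/path/to/libnvdsinfer_custom_impl.so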

Custom Model Implementation Interface

nvinfer supports interfaces for these purposes:
Custom bounding box parsing for custom neural network detectors and classifiers
IPlugin implementation for layers not natively supported by NVIDIA® TensorRT™
Initializing non-image input layers in cases where the network has more than one input layer
Creating a CUDA engine using TensorRT Layer APIs instead of model parsing APIs
IModelParser interface to parse the model and fill the layers in an INetworkDefinition
All the interface implementations for the models must go into a single independent shared library. nvinfer dynamically loads the library with dlopen(), looks for implemented interfaces with dlsym(), and calls the interfaces as required.
For more information about the interface, refer to the header file nvdsinfer_custom_impl.h.

Custom Output Parsing

For detectors, you must write a library that can parse the bounding box coordinates and the object class from the output layers. For classifiers, the library must parse the object attributes from the output layers. You can find example code and a Makefile in sources/libs/nvdsinfer_customparser.
The path of the generated library and the name of the parsing function must be specified through the configuration parameters mentioned in the Custom Model section (custom-lib-path and parse-bbox-func-name or parse-classifier-func-name). The README file in sources/libs/nvdsinfer_customparser shows how to use this custom parser.
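As a rough sketch of what such a library might contain, the hypothetical detector parser below follows the NvDsInferParseCustomFunc prototype declared in nvdsinfer_custom_impl.h; the function name, the assumed output-tensor layout, and the fixed detection count are placeholders that must be adapted to your network.

    /* Hypothetical bounding box parser sketch. The decoding logic depends
     * entirely on the output tensor layout of your detector. */
    #include <vector>

    #include "nvdsinfer_custom_impl.h"

    extern "C" bool NvDsInferParseCustomMyDetector (
        std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
        NvDsInferNetworkInfo const &networkInfo,
        NvDsInferParseDetectionParams const &detectionParams,
        std::vector<NvDsInferObjectDetectionInfo> &objectList)
    {
      (void) networkInfo;      /* network input dimensions, if needed */
      (void) detectionParams;  /* per-class thresholds from the config file */

      /* Assumed layout: one output layer holding numDetections entries of
       * [classId, confidence, left, top, width, height] floats. */
      if (outputLayersInfo.empty () || outputLayersInfo[0].buffer == nullptr)
        return false;

      const float *data =
          static_cast<const float *> (outputLayersInfo[0].buffer);
      const unsigned int numDetections = 100; /* placeholder; derive from the layer dims */

      for (unsigned int i = 0; i < numDetections; i++) {
        const float *det = data + i * 6;
        NvDsInferObjectDetectionInfo obj = {};
        obj.classId = static_cast<unsigned int> (det[0]);
        obj.detectionConfidence = det[1];
        obj.left = det[2];
        obj.top = det[3];
        obj.width = det[4];
        obj.height = det[5];
        objectList.push_back (obj);
      }
      return true;
    }

    /* Compile-time check that the function matches the expected prototype. */
    CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE (NvDsInferParseCustomMyDetector);

With this sketch, parse-bbox-func-name would be set to NvDsInferParseCustomMyDetector and custom-lib-path would point to the compiled shared library.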

IPlugin Implementation

DeepStream can run networks that contain layers TensorRT does not support natively, provided those layers are implemented through the TensorRT IPlugin interface. The objectDetector_SSD, objectDetector_FasterRCNN, and objectDetector_YoloV3 sample applications show examples of IPlugin implementations.

Input Layer Initialization

DeepStream supports initializing non-image input layers for networks having more than one input layer. The layers are initialized only once before the first inference call. The objectDetector_FasterRCNN sample application shows an example of an implementation.
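A minimal sketch of this hook, modeled loosely on the Faster R-CNN case, is shown below; the layer name "im_info", the per-batch triple written to it, and the assumption that the buffer is writable from the host are specific to that style of network and should be verified against nvdsinfer_custom_impl.h and the sample.

    /* Sketch of the input-layer initialization hook that nvinfer looks up in
     * the custom library. It is called once before the first inference. */
    #include <cstring>
    #include <vector>

    #include "nvdsinfer_custom_impl.h"

    extern "C" bool NvDsInferInitializeInputLayers (
        std::vector<NvDsInferLayerInfo> const &inputLayersInfo,
        NvDsInferNetworkInfo const &networkInfo,
        unsigned int maxBatchSize)
    {
      /* Look for a hypothetical non-image input layer named "im_info"
       * (Faster R-CNN style) and fill one triple per batch element. */
      for (const auto &layer : inputLayersInfo) {
        if (layer.layerName && !strcmp (layer.layerName, "im_info")) {
          float *buf = static_cast<float *> (layer.buffer);
          for (unsigned int b = 0; b < maxBatchSize; b++) {
            buf[b * 3 + 0] = static_cast<float> (networkInfo.height);
            buf[b * 3 + 1] = static_cast<float> (networkInfo.width);
            buf[b * 3 + 2] = 1.0f; /* assumed image scale factor */
          }
          return true;
        }
      }
      return false; /* the expected input layer was not found */
    }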

CUDA Engine Creation for Custom Models

DeepStream supports creating TensorRT CUDA engines for models that are not in Caffe, UFF, or ONNX format, or that must be built with the TensorRT Layer APIs. The objectDetector_YoloV3 sample application shows an example of the implementation. When a single custom library serves multiple nvinfer plugin instances in a pipeline, each instance can have its own engine-creation implementation, specified per instance with the engine-create-func-name configuration parameter. A typical example is a back-to-back detector pipeline that uses different types of YOLO models.
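A skeleton for such a function might look like the sketch below. The function name BuildMyCustomEngine is a placeholder (it is the value you would set for engine-create-func-name), and the exact prototype, including whether an IBuilderConfig argument is passed, should be taken from the NvDsInferEngineCreateCustomFunc typedef in nvdsinfer_custom_impl.h for your DeepStream release.

    /* Sketch of a custom engine-creation function; assumes a DeepStream 5.x-era
     * NvDsInferEngineCreateCustomFunc prototype from nvdsinfer_custom_impl.h. */
    #include "nvdsinfer_custom_impl.h"

    extern "C" bool BuildMyCustomEngine (
        nvinfer1::IBuilder * const builder,
        nvinfer1::IBuilderConfig * const builderConfig,
        const NvDsInferContextInitParams * const initParams,
        nvinfer1::DataType dataType,
        nvinfer1::ICudaEngine *& cudaEngine)
    {
      (void) initParams; /* carries model paths, precision, etc. from the config file */
      (void) dataType;   /* requested inference precision */

      /* Build the network with the TensorRT layer APIs, for example:
       *   nvinfer1::INetworkDefinition *network = builder->createNetworkV2 (0);
       *   ... add inputs, convolutions, activations, and mark outputs ...
       *   cudaEngine = builder->buildEngineWithConfig (*network, *builderConfig);
       * and return true once the engine has been created. */
      (void) builder;
      (void) builderConfig;
      cudaEngine = nullptr; /* placeholder: no network is built in this sketch */
      return false;
    }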

IModelParser Interface for Custom Model Parsing

This is an alternative to the “CUDA Engine Creation” interface: the custom library parses the model itself and fills a TensorRT network (INetworkDefinition). The objectDetector_YoloV3 sample application shows an example of the implementation.
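The sketch below shows the general shape of such an implementation, assuming the nvdsinfer::IModelParser interface and the NvDsInferCreateModelParser factory function declared in nvdsinfer_custom_impl.h; the class name, the reported model name, and the empty parsing logic are placeholders, and the exact virtual methods should be checked against the header for your DeepStream version.

    /* Sketch of a custom model parser; the actual parsing logic is omitted. */
    #include "nvdsinfer_custom_impl.h"

    class MyModelParser : public nvdsinfer::IModelParser
    {
    public:
      explicit MyModelParser (const NvDsInferContextInitParams *initParams)
          : m_InitParams (initParams) {}
      ~MyModelParser () override = default;

      const char *getModelName () const override { return "MyModel"; }

      bool hasFullDimsSupported () const override { return false; }

      NvDsInferStatus parseModel (nvinfer1::INetworkDefinition &network) override
      {
        /* Read the custom model files referenced by m_InitParams and populate
         * the network with TensorRT layers here. */
        (void) network;
        return NVDSINFER_SUCCESS; /* placeholder */
      }

    private:
      const NvDsInferContextInitParams *m_InitParams;
    };

    /* Factory function that nvinfer looks up in the custom library. */
    extern "C" nvdsinfer::IModelParser *
    NvDsInferCreateModelParser (const NvDsInferContextInitParams *initParams)
    {
      return new MyModelParser (initParams);
    }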