# Copyright (c) 2019-2020 NVIDIA CORPORATION. All rights reserved.

@page dnn_mainsection DNN

@note SW Release Applicability: This module is available in both **NVIDIA DriveWorks** and **NVIDIA DRIVE Software** releases.

The DNN module implements functionality to run inference with deep neural networks that were generated with the NVIDIA® TensorRT™ optimization tool.
### Initialization with TensorRT

There are two ways of initializing the DNN module with TensorRT:

- Use the following function to provide the path to a serialized TensorRT model file generated with the TensorRT_optimization tool:
  ```
  dwStatus dwDNN_initializeTensorRTFromFile(
      dwDNNHandle_t *network,
      const char *modelFilename,
      const dwDNNPluginConfiguration *pluginConfiguration,
      dwContextHandle_t context);
  ```
- Use the following function to provide a pointer to the memory block where the serialized TensorRT model is stored:
  ```
  dwStatus dwDNN_initializeTensorRTFromMemory(
      dwDNNHandle_t *network,
      const char *modelContent,
      uint32_t modelContentSize,
      const dwDNNPluginConfiguration *pluginConfiguration,
      dwContextHandle_t context);
  ```
TensorRT networks may contain custom layers. In DriveWorks, these custom layers require a certain set of functions to be defined in order to be loaded and executed.

The definition of these functions must be provided in the form of a shared library. For more information on the functions to be implemented, please see `dw/dnn/plugin/DNNPlugin.h`. For an example of plugins, please see `sample_dnn_plugin`.
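Putting the file-based initialization path together, a minimal sketch might look as follows. The model path is a placeholder, error handling is abbreviated, and passing `nullptr` for the plugin configuration assumes the network contains no custom layers:

```cpp
#include <dw/core/Context.h>
#include <dw/dnn/DNN.h>

// Initialize a DNN from a serialized TensorRT model file.
// `context` is assumed to be an already-initialized DriveWorks context.
dwDNNHandle_t loadNetwork(dwContextHandle_t context)
{
    dwDNNHandle_t network = DW_NULL_HANDLE;
    // No custom layers in this model, so no plugin configuration is needed.
    dwStatus status = dwDNN_initializeTensorRTFromFile(&network,
                                                       "data/myDetector.dnn", // placeholder path
                                                       nullptr,
                                                       context);
    if (status != DW_SUCCESS)
        return DW_NULL_HANDLE;
    return network;
}
```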
### Inference

The dwDNN module offers two functions for running inference.

DNN models usually have one input and one output. For these kinds of models, the following function can be used for simplicity:
```
dwStatus dwDNN_inferSIO(float32_t *d_output, float32_t *d_input, dwDNNHandle_t network);
```
This function expects a pointer to linear device memory where the output of inference is to be stored, a pointer to linear device memory where the input to the DNN is stored, and the corresponding dwDNN handle which contains the network to run. Please note that the output must be pre-allocated with the correct dimensions based on the neural network model.

The input to the DNN is expected to have NxCxHxW layout, where N stands for batches, C for channels, H for height and W for width.
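As a sketch, assuming a single-input/single-output network whose element counts are already known from the network description, inference could look like this (buffer sizes and error checks are illustrative):

```cpp
#include <cuda_runtime.h>
#include <dw/dnn/DNN.h>

// Run single-input/single-output inference. Element counts are assumed to be
// known from the network description (N * C * H * W each).
dwStatus runInference(dwDNNHandle_t network, size_t inputElems, size_t outputElems)
{
    float32_t* d_input  = nullptr;
    float32_t* d_output = nullptr;
    // Both buffers must be linear device memory; the output must be
    // pre-allocated with the correct dimensions.
    cudaMalloc(&d_input, inputElems * sizeof(float32_t));
    cudaMalloc(&d_output, outputElems * sizeof(float32_t));

    // ... fill d_input with NxCxHxW-ordered data ...

    dwStatus status = dwDNN_inferSIO(d_output, d_input, network);

    cudaFree(d_input);
    cudaFree(d_output);
    return status;
}
```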
Moreover, the dwDNN module provides a more generic function with which it is possible to run networks with multiple inputs and/or multiple outputs:
```
dwStatus dwDNN_infer(float32_t **d_output, float32_t **d_input, dwDNNHandle_t network);
```
This function expects an array of pointers to linear device memory blocks where the outputs of inference are stored, an array of pointers where the inputs of inference are stored, and the corresponding dwDNN handle which contains the network to run.

In order to be sure that the inputs and outputs are given in the correct order, it is recommended to place the input and output data in their corresponding arrays at the indices based on the names of the blobs as defined in the network description. The following functions return these indices:
```
dwStatus dwDNN_getInputIndex(uint32_t *blobIndex,
                             const char *blobName,
                             dwDNNHandle_t network);
dwStatus dwDNN_getOutputIndex(uint32_t *blobIndex,
                              const char *blobName,
                              dwDNNHandle_t network);
```
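For example, a two-input network could order its input array by blob name as sketched below. The blob names are hypothetical and depend on the network description, and error checks are omitted:

```cpp
#include <dw/dnn/DNN.h>

// Place device buffers at the indices the network expects, looked up by
// blob name. "input_left" / "input_right" are hypothetical blob names.
void orderInputs(float32_t* d_left, float32_t* d_right,
                 float32_t* d_inputs[2], dwDNNHandle_t network)
{
    uint32_t leftIdx  = 0;
    uint32_t rightIdx = 0;
    dwDNN_getInputIndex(&leftIdx, "input_left", network);
    dwDNN_getInputIndex(&rightIdx, "input_right", network);
    d_inputs[leftIdx]  = d_left;
    d_inputs[rightIdx] = d_right;
}
```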
Furthermore, the following functions return the number of required inputs and outputs:
```
dwStatus dwDNN_getInputBlobCount(uint32_t *count, dwDNNHandle_t network);
dwStatus dwDNN_getOutputBlobCount(uint32_t *count, dwDNNHandle_t network);
```
In addition, dimensions of inputs and outputs are available via:
```
dwStatus dwDNN_getInputSize(dwBlobSize *blobSize,
                            uint32_t blobIndex,
                            dwDNNHandle_t network);
dwStatus dwDNN_getOutputSize(dwBlobSize *blobSize,
                             uint32_t blobIndex,
                             dwDNNHandle_t network);
```
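These queries can be combined to size the device buffers before inference. A sketch, assuming `dwBlobSize` exposes batchsize, channels, height and width fields, with error checks omitted:

```cpp
#include <cuda_runtime.h>
#include <dw/dnn/DNN.h>

// Allocate a linear device buffer large enough for a given output blob.
float32_t* allocateOutput(uint32_t blobIndex, dwDNNHandle_t network)
{
    dwBlobSize blobSize{};
    dwDNN_getOutputSize(&blobSize, blobIndex, network);

    size_t elems = static_cast<size_t>(blobSize.batchsize) * blobSize.channels *
                   blobSize.height * blobSize.width;

    float32_t* d_output = nullptr;
    cudaMalloc(&d_output, elems * sizeof(float32_t));
    return d_output;
}
```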
Inference is performed in parallel with the host, making it possible to do useful work while the DNN results are being calculated. The caller must wait for the inference to finish before reading the results.

By default, the inference job is launched on the default CUDA stream. The simplest way to wait for the inference to finish is thus to call `cudaDeviceSynchronize()`, which waits for all pending CUDA computations to finish. For more fine-grained control, the user can create a `cudaStream_t` using the CUDA Runtime API and pass it to the DNN with:
```
dwStatus dwDNN_setCUDAStream(cudaStream_t stream, dwDNNHandle_t network);
```
After the CUDA stream is assigned to the DNN, all following infer() operations are performed on the given CUDA stream. The user can then use CUDA Runtime API methods such as
```
cudaError_t cudaStreamSynchronize ( cudaStream_t stream );
```

and

```
cudaError_t cudaStreamWaitEvent ( cudaStream_t stream, cudaEvent_t event, unsigned int flags );
```
to wait for the inference results. For more information about CUDA streams, refer to the CUDA Runtime documentation.
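A sketch of stream-based synchronization, with error checks omitted:

```cpp
#include <cuda_runtime.h>
#include <dw/dnn/DNN.h>

// Run inference on a dedicated CUDA stream and wait only for that stream,
// leaving work on other streams unaffected.
void inferOnStream(float32_t** d_outputs, float32_t** d_inputs, dwDNNHandle_t network)
{
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // All subsequent infer() calls are enqueued on this stream.
    dwDNN_setCUDAStream(stream, network);
    dwDNN_infer(d_outputs, d_inputs, network);

    // Block the host until this stream's work is done, then clean up.
    cudaStreamSynchronize(stream);
    cudaStreamDestroy(stream);
}
```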
### DNN Metadata

Each DNN usually requires a specific pre-processing configuration, and it might therefore be necessary to include this information together with the DNN.

DNN Metadata contains pre-processing information relevant to the loaded network. This is not a requirement, but it can be provided by the user together with the network by placing a certain JSON file in the same folder as the network, with an additional ".json" extension.

For example, if the network is at path "/home/dwUser/dwApp/data/myDetector.dnn", the DNN module will look for "/home/dwUser/dwApp/data/myDetector.dnn.json" to load DNN Metadata from.

The JSON file must have the following format:
```
{
    "dataConditionerParams" : {
        "meanValue" : [0.0, 0.0, 0.0],
        "splitPlanes" : true,
        "pixelScaleCoefficient": 1.0,
        "ignoreAspectRatio" : false,
        "doPerPlaneMeanNormalization" : false
    },
    "tonemapType" : "none",
    "__comment": "tonemapType can be one of {none, agtm}"
}
```
If the JSON file in question is not present in the same folder as the network, DNN Metadata is filled with default values. The default parameters look like this:
```
{
    "dataConditionerParams" : {
        "meanValue" : [0.0, 0.0, 0.0],
        "splitPlanes" : true,
        "pixelScaleCoefficient": 1.0,
        "ignoreAspectRatio" : false,
        "doPerPlaneMeanNormalization" : false
    },
    "tonemapType" : "none",
    "__comment": "tonemapType can be one of {none, agtm}"
}
```
Note that whether DNN Metadata is used is a decision made at the application level. The metadata can be acquired by calling:

```
dwStatus dwDNN_getMetaData(dwDNNMetaData *metaData, dwDNNHandle_t network);
```
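For instance, an application can read the pre-processing parameters back after loading. A sketch, assuming `dwDNNMetaData` exposes a `dataConditionerParams` member mirroring the JSON above:

```cpp
#include <dw/dnn/DNN.h>

// Query the metadata that was loaded from the optional ".json" file
// (or filled with defaults if the file was absent).
void queryMetaData(dwDNNHandle_t network)
{
    dwDNNMetaData metaData{};
    dwDNN_getMetaData(&metaData, network);

    // The application decides whether to honor these pre-processing settings,
    // e.g. when configuring its input pre-processing stage.
    float32_t scale = metaData.dataConditionerParams.pixelScaleCoefficient;
    (void)scale;
}
```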
## Relevant Tutorials