DriveWorks SDK Reference
3.5.78 Release
For Test and Development only

perception/path/camera/docs/usecase1_pilotnetdetector.md
# Copyright (c) 2019-2020 NVIDIA CORPORATION. All rights reserved.

@page pathperception_usecase1_pilotnetdetector PilotNet Workflow

@note SW Release Applicability: This tutorial is applicable to modules in **NVIDIA DRIVE Software** releases.

#### Initialization
The PilotNetDetector parameters are initialized based on the dwPilotNetHandle_t using the following function call:
```{.cpp}
dwStatus dwPilotNetDetector_initParams(dwPilotNetDetectorParams* params,
                                       dwPilotNetHandle_t pilotnetDNN);
```

This call fills params.numSensors and params.sensors[x].camera (where x ranges from 0 to params.numSensors - 1) of the ::dwPilotNetDetectorParams struct. The rest must be filled by the application. The sensor parameters that need to be populated in ::dwPilotNetDetectorParams include the sensor type, the input CUDA stream, the camera frame width and height, the number of camera frames per second, the camera model, and the camera extrinsics.
```{.cpp}
dwRigHandle_t rigHandle;
dwImageProperties cameraImageProperties;
dwCameraProperties cameraProperties;
for (uint32_t i = 0; i < params.numSensors; i++)
{
    // Look up the sensor id of the camera in the rig configuration.
    uint32_t sensorId;
    dwRig_findSensorByName(&sensorId, "camera::front::center::60fov", rigHandle);
    params.sensors[i].sensorId = sensorId;

    // CUDA stream on which this sensor's input is processed.
    params.sensors[i].stream = cudaStream;

    // Camera frame width and height.
    dwRect inputDimensions{};
    inputDimensions.width  = cameraImageProperties.width;
    inputDimensions.height = cameraImageProperties.height;
    params.sensors[i].inDimensions = inputDimensions;

    // Camera frames per second.
    params.sensors[i].fps = cameraProperties.framerate;

    // Camera model (intrinsics) from the rig configuration.
    dwCameraModelHandle_t cameraModel;
    dwCameraModel_initialize(&cameraModel, sensorId, rigHandle);
    params.sensors[i].cameraModel = cameraModel;

    // Camera extrinsics: the sensor-to-rig transformation. The pointed-to
    // transformation must remain valid until the detector is initialized.
    dwTransformation3f transformation;
    dwRig_getSensorToRigTransformation(&transformation, sensorId, rigHandle);
    params.sensors[i].cameraExtrinsics = &transformation;
}
```

After populating the information for each sensor in the params.sensors array, the @ref pathperception_mainsection_pilotnetdetector module is initialized with the following function call:
```{.cpp}
dwStatus dwPilotNetDetector_initialize(dwPilotNetDetectorHandle_t* pilotnet,
                                       const dwPilotNetDetectorParams* params,
                                       dwPilotNetHandle_t pilotnetDNN,
                                       dwContextHandle_t ctx);
```
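Putting these initialization calls in order, a minimal sketch might look as follows. This is illustrative only; error handling is omitted, and `context` and `pilotnetDNN` are assumed to have been created beforehand:

```{.cpp}
// Sketch of the initialization order (error checking omitted).
dwPilotNetDetectorParams params{};
dwPilotNetDetector_initParams(&params, pilotnetDNN); // fills numSensors and sensors[x].camera
// ... populate the remaining per-sensor fields as described above ...
dwPilotNetDetectorHandle_t pilotnet = DW_NULL_HANDLE;
dwPilotNetDetector_initialize(&pilotnet, &params, pilotnetDNN, context);
```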

Finally, a struct must be allocated and bound to the @ref pathperception_mainsection_pilotnetdetector module to store the module's output, using the following function call:

```{.cpp}
dwStatus dwPilotNetDetector_bindOutput(dwPilotNetDetectorOutput* output,
                                       dwPilotNetDetectorHandle_t pilotnet);
```

To get all the supported driving modes of the current model, use:

```{.cpp}
dwStatus dwPilotNetDetector_getAvailableDrivingModes(bool* modes,
                                                     dwPilotNetDetectorHandle_t pilotnet);
```
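For example, the returned flags can be checked per driving mode. In the sketch below, the `DW_PILOTNET_MAX_DRIVING_MODES` count constant and the caller-allocated sizing of `modes` are assumptions; check the header for the actual names and sizing:

```{.cpp}
// Sketch: DW_PILOTNET_MAX_DRIVING_MODES is an assumed constant, not a confirmed API name.
bool modes[DW_PILOTNET_MAX_DRIVING_MODES] = {};
if (dwPilotNetDetector_getAvailableDrivingModes(modes, pilotnet) == DW_SUCCESS &&
    modes[DW_PILOTNET_LANE_STABLE])
{
    // The loaded model supports the lane-stable driving mode.
}
```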

#### Process

After you successfully initialize the @ref pathperception_mainsection_pilotnetdetector module, you must prepare two inputs before running inference:

- ::dwPilotNetDetectorState, which describes the status signals passed to the network. The status signals represent any non-image inputs such as lane-change length, current speed, etc. The dwPilotNetDetector_setState() function is used to pass this information to the module.
- A RAW camera frame that has been passed through the Software ISP pipeline to produce a quarter-resolution (½ height x ½ width) de-bayered image, or an H.264-encoded YUV420 image. This image is passed using the dwPilotNetDetector_setCameraFrame() function. Note that this function must be called for every sensorId in ::dwPilotNetDetectorParams.
```{.cpp}
for (uint32_t i = 0; i < params.numSensors; i++)
{
    // dwImageCudaFrame is the processed camera frame for this sensor.
    dwPilotNetDetector_setCameraFrame(dwImageCudaFrame, params.sensors[i].sensorId, pilotnetHandle);
}
```
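The state can be set in a similar fashion. The exact members of ::dwPilotNetDetectorState and the signature of dwPilotNetDetector_setState() are not shown on this page, so the following is only an illustrative sketch; consult the header for the real definition:

```{.cpp}
// Illustrative sketch -- the call signature is an assumption, not the confirmed API.
dwPilotNetDetectorState state{};
// Fill the non-image status signals (e.g. current speed, lane-change length)
// according to the dwPilotNetDetectorState definition in the header.
dwPilotNetDetector_setState(&state, pilotnetHandle);
```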

Once all the inputs are set, call this function to perform pre-processing:

```{.cpp}
dwStatus dwPilotNetDetector_processFrames(dwPilotNetDetectorHandle_t pilotnet);
```

If an insufficient number of images is passed using dwPilotNetDetector_setCameraFrame() before the call to dwPilotNetDetector_processFrames(), the function returns an error.

The region-of-interest (ROI) extraction from the initial frame is performed internally by the @ref pathperception_mainsection_pilotnetdetector module as part of the dwPilotNetDetector_processFrames() function. The dimensions and position of the ROI can be queried from the module using the dwPilotNetDetector_getROIParams() function. The ROI is in turn down-sampled to a patch of fixed dimensions that is finally fed into the network. The patch for each input frame can be extracted from the module using the dwPilotNetDetector_getPatchU8() function.

Then call this function to run the inference:

```{.cpp}
dwStatus dwPilotNetDetector_infer(dwPilotNetDetectorHandle_t pilotnet);
```

Finally, perform post-processing on the inference output with:

```{.cpp}
dwStatus dwPilotNetDetector_processOutput(dwPilotNetDetectorHandle_t pilotnet);
```
This is a blocking call; when it returns, the output is ready in the memory bound by the dwPilotNetDetector_bindOutput() function call.

To retrieve the predicted trajectory in the image coordinates of a particular camera (denoted by sensorId) for rendering purposes, call the following function:

```{.cpp}
dwStatus dwPilotNetDetector_convertPointsRigToSensor(dwVector2f* points,
                                                     uint16_t numPoints,
                                                     dwVector3f* trajPoints,
                                                     uint16_t numTrajPoints,
                                                     uint32_t sensorId,
                                                     dwPilotNetDetectorHandle_t pilotnet);
```

where trajPoints points to the trajectory of the desired driving mode index in the ::dwPilotNetDetectorOutput struct, e.g. output.trajectory[DW_PILOTNET_LANE_STABLE].
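
For example, to project the lane-stable trajectory into the image of the first configured camera for rendering, a sketch might look as follows. The point capacity `64`, the point count, and the exact type of output.trajectory[...] are assumptions; check ::dwPilotNetDetectorOutput for the real layout:

```{.cpp}
// Sketch: project the lane-stable trajectory into sensor image space.
// Point counts and member types below are illustrative assumptions.
dwVector2f imagePoints[64];
uint16_t numTrajPoints = 64; // number of valid points in the trajectory
dwPilotNetDetector_convertPointsRigToSensor(imagePoints, numTrajPoints,
                                            output.trajectory[DW_PILOTNET_LANE_STABLE],
                                            numTrajPoints,
                                            params.sensors[0].sensorId,
                                            pilotnet);
```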

A visualization mask containing the overlay image of the network activations for the most recent inference can be obtained with the following function call:

```{.cpp}
dwStatus dwPilotNetDetector_getVisMaskU8(const dwImageCUDA** dwImg,
                                         uint32_t* numROI,
                                         uint32_t sensorId,
                                         dwPilotNetDetectorHandle_t pilotnet);
```

#### Reset and Release

To update or reset the extrinsic calibration of a particular sensor, use the following function call:

```{.cpp}
dwStatus dwPilotNetDetector_setSensorExtrinsics(const dwTransformation3f* tx,
                                                const uint32_t sensorId,
                                                dwPilotNetDetectorHandle_t pilotnet);
```

If you need to reset the @ref pathperception_mainsection_pilotnetdetector module, e.g. to initialize it again with the same PilotNet model, use the following function call:

```{.cpp}
dwStatus dwPilotNetDetector_reset(dwPilotNetDetectorHandle_t pilotnet);
```

Finally, the @ref pathperception_mainsection_pilotnetdetector module is released with the following function call:

```{.cpp}
dwStatus dwPilotNetDetector_release(dwPilotNetDetectorHandle_t* pilotnet);
```
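
Taken together, the per-frame calls above form a simple runtime loop. The following sketch summarizes the order of operations; error handling and state population are elided, `frames[i]` is a hypothetical per-sensor frame array, and the dwPilotNetDetector_setState() signature is an assumption (see the preceding sections for each call):

```{.cpp}
// Per-frame processing order (sketch; error handling omitted):
// 1. Pass non-image status signals.
// 2. Pass one processed frame per sensor.
// 3. Pre-process, infer, post-process.
dwPilotNetDetector_setState(&state, pilotnet); // signature assumed
for (uint32_t i = 0; i < params.numSensors; i++)
    dwPilotNetDetector_setCameraFrame(frames[i], params.sensors[i].sensorId, pilotnet);
dwPilotNetDetector_processFrames(pilotnet);
dwPilotNetDetector_infer(pilotnet);
dwPilotNetDetector_processOutput(pilotnet); // results land in the bound output struct
```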

#### Sample Application

The PilotNet sample application @ref dwx_pilotnet_sample shows how to use the API calls mentioned above. It is designed to be a starting point for integrating the NVIDIA @ref pathperception_mainsection_pilotnetdetector module into more complex systems.