DriveWorks SDK Reference
4.0.0 Release
For Test and Development only

samples/dnn/sample_object_detector_tracker/README.md
# Copyright (c) 2019-2020 NVIDIA CORPORATION. All rights reserved.

@page dwx_object_detector_tracker_sample Basic Object Detector and Tracker Sample
@tableofcontents

@section dwx_object_detector_tracker_description Description

The Basic Object Detector and Tracker sample demonstrates how the @ref dnn_group module can be used for
object detection, together with the 2D object tracking capabilities of the @ref boxtracker_group module.

The sample streams an H.264 or RAW video and runs DNN inference on each frame to
detect objects using an NVIDIA<sup>&reg;</sup> TensorRT<sup>&trade;</sup> model.

The interpretation of a network's output depends on the network design. In this sample,
two output blobs (with `coverage` and `bboxes` as blob names) are interpreted as coverage values and bounding boxes.
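
The exact decoding depends on the network, but a common scheme for such coverage/bbox outputs can be sketched as follows. This is a hypothetical Python illustration, not the sample's actual code: the grid size, stride, threshold, and all names are assumptions.

```python
# Hypothetical post-processing sketch: the network emits a coarse grid of
# "coverage" confidences and, for each grid cell, a bounding box. Cells whose
# coverage exceeds a threshold yield candidate detections in image coordinates.

GRID_W, GRID_H, STRIDE = 4, 3, 16   # tiny illustrative grid; 16 px per cell
COVERAGE_THRESHOLD = 0.5

def decode(coverage, bboxes):
    """coverage: [GRID_H][GRID_W] floats in [0, 1];
    bboxes: [GRID_H][GRID_W] of (x1, y1, x2, y2) offsets from the cell origin."""
    detections = []
    for gy in range(GRID_H):
        for gx in range(GRID_W):
            score = coverage[gy][gx]
            if score < COVERAGE_THRESHOLD:
                continue
            ox, oy = gx * STRIDE, gy * STRIDE   # cell origin in pixels
            x1, y1, x2, y2 = bboxes[gy][gx]
            detections.append((ox + x1, oy + y1, ox + x2, oy + y2, score))
    return detections
```

In practice, overlapping candidates from neighboring cells would additionally be merged (for example by clustering or non-maximum suppression) before being handed to the tracker.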

For each frame, the sample detects object locations and tracks the objects across video frames. Currently, the
object tracker relies on image feature detection and tracking, using the motion of the features to predict
each object's location.

@section dwx_object_detector_tracker_sample_running Running the Sample

The Basic Object Detector and Tracker sample, `sample_object_detector_tracker`, accepts the following optional parameters. If none are specified, it performs detections on a supplied pre-recorded video.

    ./sample_object_detector_tracker --input-type=[video|camera]
                                     --video=[path/to/video]
                                     --camera-type=[camera]
                                     --camera-group=[a|b|c|d]
                                     --camera-index=[0|1|2|3]
                                     --slave=[0|1]
                                     --tensorRT_model=[path/to/TensorRT/model]

Where:

    --input-type=[video|camera]
        Defines whether the input comes from a live camera or from a recorded video.
        Live camera is supported only on NVIDIA DRIVE platforms.
        Default value: video

    --video=[path/to/video]
        Specifies the absolute or relative path of a RAW or H.264 recording.
        Only applicable if --input-type=video.
        Default value: path/to/data/samples/sfm/triangulation/video_0.h264

    --camera-type=[camera]
        Specifies a supported AR0231 `RCCB` sensor.
        Only applicable if --input-type=camera.
        Default value: ar0231-rccb-bae-sf3324

    --camera-group=[a|b|c|d]
        Specifies the group to which the camera is connected.
        Only applicable if --input-type=camera.
        Default value: a

    --camera-index=[0|1|2|3]
        Specifies the camera index on the given port.
        Default value: 0

    --slave=[0|1]
        Setting this parameter to 1 when running the sample on Xavier B allows the sample to access
        a camera that is being used on Xavier A. Only applicable if --input-type=camera.
        Default value: 0

    --tensorRT_model=[path/to/TensorRT/model]
        Specifies the path to the NVIDIA<sup>&reg;</sup> TensorRT<sup>&trade;</sup> model file.
        The loaded network is expected to have a coverage output blob named "coverage" and a bounding box output blob named "bboxes".
        Default value: path/to/data/samples/detector/<gpu-architecture>/tensorRT_model.bin, where <gpu-architecture> can be `pascal`, `volta-discrete`, `volta-integrated`, or `turing`.

@note This sample loads its DataConditioner parameters from a DNN metadata JSON file.
To provide the DNN metadata to the DNN module, place the JSON file in the same
directory as the model file. An example of such a metadata file is:

    data/samples/detector/pascal/tensorRT_model.bin.json

@subsection dwx_object_detector_tracker_sample_examples Examples

#### Default usage

    ./sample_object_detector_tracker

The video file must be an H.264 or RAW stream. Video containers such as MP4, AVI, and MKV are not supported.

#### To run the sample on a video on NVIDIA DRIVE or Linux platforms with a custom TensorRT network

    ./sample_object_detector_tracker --input-type=video --video=<video file.h264/raw> --tensorRT_model=<TensorRT model file>

#### To run the sample on a camera on NVIDIA DRIVE platforms with a custom TensorRT network

    ./sample_object_detector_tracker --input-type=camera --camera-type=<rccb_camera_type> --camera-group=<camera group> --camera-index=<camera idx on camera group> --tensorRT_model=<TensorRT model file>

where `<rccb_camera_type>` is a supported `RCCB` sensor.
See @ref supported_sensors for the list of supported cameras for each platform.

@section dwx_object_detector_tracker_sample_output Output

The sample creates a window, displays the video stream, and overlays the tracked
2D features and the detected/tracked bounding boxes, each labeled with an object ID.

The color coding of the overlay is:

- Red bounding boxes: Indicate successfully tracked bounding boxes.
- Red points: Indicate successfully tracked 2D features.
- Yellow bounding box: Identifies the region that is given as input to the DNN.

![Object tracker on an H.264 stream](sample_object_tracker.png)

@section dwx_object_detector_tracker_sample_more Additional Information

For more information, see:
- @ref dnn_mainsection
- @ref dataconditioner_mainsection
- @ref imageprocessing_features_mainsection