# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.

@page dwx_object_dwdetector Basic Object Detector Sample using dwDetector

@note The `dwDetector` module is a simple, low-resolution, single-class sample that uses the GoogLeNet
architecture to show how to integrate a deep neural network (DNN) into DriveWorks to perform object detection.
This sample is trained on a small amount of object detection data. For a more sophisticated,
higher-resolution, multi-class sample, see the [DriveNet Sample](@ref dwx_object_tracker_drivenet_sample).

This sample demonstrates how to use the `dwDetector` module to detect and
track 2D objects. The sample has two modes. If the video count is set to 1,
it expects a single video stream and, for each frame, provides
two regions of interest, central and zoomed central, to the `dwDetector`
module, which fuses the resulting objects.

If the video count is set to 2, it expects two video streams, and the objects
for each video are treated separately by the `dwDetector` module. In either case,
the `dwDetector` module runs data preparation, inference, and clustering to
produce an object list. It also tracks objects from previous frames into the
current frame. Finally, the detected objects and the objects tracked from
previous frames are merged to produce the final object list for that frame.
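
The per-frame flow described above (detect, then carry tracked objects forward, then merge) can be sketched with plain data types. Everything below — the `Box` struct, the IoU threshold, and the merge rule — is an assumption for illustration only, not the `dwDetector` implementation or API:

```cpp
#include <algorithm>
#include <vector>

// Axis-aligned bounding box; a stand-in for the detector's object type.
struct Box { float x, y, w, h; };

// Intersection-over-union, used here to decide when a tracked box and a
// freshly detected box refer to the same object.
static float iou(const Box& a, const Box& b) {
    float x1 = std::max(a.x, b.x), y1 = std::max(a.y, b.y);
    float x2 = std::min(a.x + a.w, b.x + b.w);
    float y2 = std::min(a.y + a.h, b.y + b.h);
    float inter = std::max(0.f, x2 - x1) * std::max(0.f, y2 - y1);
    float uni = a.w * a.h + b.w * b.h - inter;
    return uni > 0.f ? inter / uni : 0.f;
}

// One possible merge rule: keep every fresh detection, and keep a tracked
// box only if no detection already covers it.
std::vector<Box> mergeObjects(const std::vector<Box>& detected,
                              const std::vector<Box>& tracked,
                              float iouThreshold = 0.5f) {
    std::vector<Box> merged = detected;
    for (const Box& t : tracked) {
        bool matched = false;
        for (const Box& d : detected) {
            if (iou(t, d) >= iouThreshold) { matched = true; break; }
        }
        if (!matched) merged.push_back(t);  // object survived from a previous frame
    }
    return merged;
}
```

A tracked object that overlaps a new detection is dropped in favor of the detection; a tracked object with no overlapping detection is kept, which is how objects persist across frames where the detector misses them.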

#### Running the Sample

```
./sample_object_dwdetector
```

The input video must be an H.264 stream.

The sample usage with a TensorRT network is:

```
./sample_object_dwdetector --video1=video_file.h264 --tensorRT_model=TensorRT_model_file --tracker=config_file.txt
```

Note that this sample loads its DataConditioner parameters from DNN metadata. This metadata
can be provided to the DNN module by placing a JSON file with the same name as the model file,
plus a `.json` extension, in the same directory; i.e., `TensorRT_model_file.json`.
See `data/samples/detector/tensorRT_model.bin.json` for an example of a DNN metadata file.
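
The metadata file is a small JSON document describing how input images should be prepared before inference. The fragment below is a hypothetical illustration only — the field names and values are assumptions, so consult the shipped `tensorRT_model.bin.json` for the authoritative schema:

```json
{
    "dataConditionerParams": {
        "meanValue": [127.5, 127.5, 127.5],
        "standardDev": [1.0, 1.0, 1.0]
    }
}
```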

If a single video is desired, the sample usage is:

```
./sample_object_dwdetector --video-count=1 --video1=video_file.h264
```

If two videos are desired, the sample usage is:

```
./sample_object_dwdetector --video-count=2 --video1=video_file1.h264 --video2=video_file2.h264
```

The tracker configuration file includes the following parameters:

- `maxFeatureCount`: Maximum number of features to track. Set this value between 1000 and 8000.
- `iterationLK`: Number of optimization iterations used to locate features. By default, it is set to 40.
- `windowSizeLK`: Search window size for Lucas-Kanade feature tracking. By default, it is set to 14.
- `detectInterval`: The frequency of calling the detection routine, in frames. Set this value between 1 and n.
- `maxFeatureCountPerObject`: Maximum number of features to track for each object. By default, it is set to 500.
- `maxObjectImageScale`: Maximum image scale of the object to track. Set this value between 0 and 1. Multiplying
this parameter by the image size gives the maximum object size, in pixels.
- `minObjectImageScale`: Minimum image scale of the object to track. Set this value between 0 and 1. Multiplying
this parameter by the image size gives the minimum object size, in pixels.
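
The sample does not show the configuration file syntax, so the sketch below is an assumption: a simple `key=value` layout, using the default values listed above where they are stated and arbitrary illustrative values elsewhere:

```
maxFeatureCount=4000
iterationLK=40
windowSizeLK=14
detectInterval=1
maxFeatureCountPerObject=500
maxObjectImageScale=0.8
minObjectImageScale=0.005
```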

The sample creates a window, displays the video streams, and overlays the object list
and the detected/tracked bounding boxes.