# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.

@page dwx_parknet_sample Parking Space Detection Sample

@section dwx_parknet_description Description
The NVIDIA<sup>®</sup> parking space detection sample app is an example of using a convolutional
deep neural network, called ParkNet, for visual perception of parking spaces.
Specifically, the ParkNet DNN detects and precisely localizes available parking spaces
in image space and converts their 2D image-space coordinates
into 3D world coordinates of the input camera rig.
The ParkNet app demonstrates perception by overlaying the lines of the general-shape quadrilateral
that constitutes an available parking space on top of the input video stream.
It also identifies and displays the entry line to the parking space location.
Such an entry line is one of the four lines of the detected quadrilateral.
The conversion of the 2D coordinates of the quadrilateral's corners into 3D coordinates
in the camera-rig space is performed implicitly within the app.
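The 2D-to-3D conversion described above can be illustrated with a minimal pinhole-camera sketch: intersect the viewing ray of each quadrilateral corner with the ground plane. All values below (intrinsics, pose, the straight-down-looking camera) are illustrative assumptions, not the sample's actual implementation or data.

```python
import numpy as np

# Illustrative camera model -- not values from the sample.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])   # pinhole intrinsics
R = np.diag([1.0, -1.0, -1.0])            # rig-to-camera rotation: camera looks straight down
C = np.array([0.0, 0.0, 1.5])             # camera center, 1.5 m above the ground (rig frame)

def pixel_to_ground(u, v):
    """Intersect the viewing ray of pixel (u, v) with the ground plane z = 0."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction, camera frame
    ray_rig = R.T @ ray_cam                             # rotate ray into rig frame
    s = -C[2] / ray_rig[2]                              # scale that reaches z = 0
    return C + s * ray_rig                              # 3D point in rig coordinates

# A corner 100 px right of the principal point lands 0.15 m from the camera axis.
corner_3d = pixel_to_ground(740.0, 360.0)
```

Applying this to all four corners of a detected quadrilateral yields its footprint in rig coordinates; the real app additionally accounts for lens distortion and the actual camera pose.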
@warning The ParkNet app does not demonstrate any control or path-planning functionality.
For a demonstration of control and path-planning functionality,
see the `Parking` sample app.
The ParkNet sample app has several input parameters that define its behavior.
The following section explains them.
@section dwx_parknet_sample_running Running the Sample
The ParkNet sample, `sample_parking_perception`, accepts the following optional parameters.
If none are specified, it detects parking spaces on a supplied pre-recorded video.
    ./sample_parking_perception --input-type=[video|camera]
                                --camera-type=[camera]
                                --camera-group=[a|b|c|d]
                                --camera-index=[0|1|2|3]
                                --video=[path/to/video]
                                --stopFrame=[frame number]
                                --ltm=[0|1]

Where:
    --input-type=[video|camera]
        Defines whether the input is a live camera or a recorded video.
        A live camera is supported only on NVIDIA DRIVE(tm) platforms.
        It is not supported on Linux (x86 architecture) host systems.
    --camera-type=[camera]
        Specifies a supported AR0231 `RCCB` sensor.
        Only applicable if --input-type=camera.
        Default value: ar0231-rccb-bae-sf3324
    --camera-group=[a|b|c|d]
        Specifies the group to which the camera is connected.
        Only applicable if --input-type=camera.
    --camera-index=[0|1|2|3]
        Indicates the camera index on the given port.
    --video=[path/to/video]
        Defines the path to the video for the app to process. If this option is not set,
        the default sample video is used.
        Default value: data/samples/parking/sample.h264
        Defines the ParkNet model to use for detecting parking spaces. There is currently
        only a single model available: DEFAULT.
        Default value: DEFAULT
    --stopFrame=[frame number]
        Defines the end position of the video segment for the app. Only frames up to this
        frame number are processed. The special value `0` processes all frames of
        the sequence, looping indefinitely.
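One possible reading of the `--stopFrame` semantics above can be sketched as a small frame-index generator. This is an illustrative interpretation only, not the sample's actual code.

```python
from itertools import count, islice

def frame_indices(sequence_length, stop_frame):
    """Yield frame indices for the described behavior: frames are drawn from
    the looped sequence; a nonzero stop_frame caps how many are processed,
    while 0 means no cap (unlimited looping). Sketch only."""
    looped = (n % sequence_length for n in count())
    return islice(looped, stop_frame) if stop_frame > 0 else looped

# A 4-frame sequence with --stopFrame=6 yields frames 0 1 2 3 0 1.
capped = list(frame_indices(4, 6))
```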
    --ltm=[0|1]
        Defines the local tone mapping setting. The value `1` enables local tone mapping;
        `0` disables it.
@subsection dwx_parknet_sample_examples Examples

### To run the sample on the default video in a loop

    ./sample_parking_perception

### To run the sample on a user-provided video (H264 or RAW) in a loop

    ./sample_parking_perception --video=<video file.raw>

### To run the sample on the first 3000 frames of a user-provided video

    ./sample_parking_perception --video=<video file.raw> --stopFrame=3000

### To run the sample on live video from a camera

    ./sample_parking_perception --input-type=camera --camera-type=<rccb camera type> --camera-group=<camera group> --camera-index=<camera idx on camera group>

where `<rccb camera type>` is a supported `RCCB` sensor.
See @ref supported_sensors for the list of supported cameras for each platform.

### To apply local tone mapping when processing video

    ./sample_parking_perception --ltm=1
@section dwx_parknet_sample_output Output

In the app's output, the green overlay lines represent detected parking spaces.
The red line within a parking space's boundary represents the entry line to that space.
It corresponds to the line between two of the four corners representing the parking space.
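The overlay convention described above (four green boundary lines, one of which is drawn red as the entry line) can be sketched as follows. The dictionary layout and field names are hypothetical illustrations, not the sample's actual data structures.

```python
# Hypothetical representation of one detection: four image-space corners in
# order, plus which pair of corners the entry line joins.
space = {
    "corners": [(100, 400), (220, 400), (240, 300), (90, 300)],
    "entry_edge": (0, 1),            # entry line joins corners 0 and 1
}

def overlay_segments(space):
    """Return the four boundary segments with the color convention described
    above: green for parking-space lines, red for the entry line."""
    corners = space["corners"]
    entry = set(space["entry_edge"])
    segments = []
    for i in range(4):
        j = (i + 1) % 4              # wrap around to close the quadrilateral
        color = "red" if {i, j} == entry else "green"
        segments.append((corners[i], corners[j], color))
    return segments
```

Each returned segment can then be rasterized onto the video frame; exactly one of the four segments carries the red entry-line color.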
The following image shows typical output of the ParkNet sample app on the default video sequence.
@section dwx_parknet_sample_limitations Limitations

@warning The ParkNet DNN currently has limitations that can affect its performance:

- ParkNet is trained for parking situations occurring in the United States. Most of the training
data came from the US state of California; performance is therefore optimized for the parking-space
layouts typical of California.
- The training data was dominated by parking lots and parking garages. ParkNet therefore performs
best on parking lots where most of the parking spaces are available.
- The training data was dominated by daytime, clear-weather footage. As a result, the ParkNet DNN
does not perform well in dark or rainy conditions.