DriveWorks SDK Reference
3.0.4260 Release
For Test and Development only

Freespace Detection Sample (OpenRoadNet)
Note
SW Release Applicability: This sample is available in NVIDIA DRIVE Software releases.

Description

Drivable collision-free space, i.e., the space the vehicle can reach immediately without collision, provides critical information for navigation in autonomous driving. This free-space sample demonstrates NVIDIA's end-to-end technique for detecting collision-free space in road scenes. The problem is modeled with a deep neural network (OpenRoadNet): the input is a three-channel RCB image and the output is a boundary that runs across the image from left to right, separating obstacles from open road space. In addition, each pixel on the boundary is associated with one of four semantic labels (a minimal sketch of this output representation follows the list below):

  • vehicle
  • pedestrian
  • curb
  • other
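Conceptually, the per-frame output is one boundary point per image column, each carrying a semantic label. The following sketch shows one plausible representation of that output; the type and field names are illustrative assumptions, not the actual DriveWorks API:

#include <cstdint>
#include <vector>

// Illustrative only, not the DriveWorks API: one boundary sample per image
// column, as produced by a freespace network such as OpenRoadNet.
enum class BoundaryLabel : uint8_t { Vehicle, Pedestrian, Curb, Other };

struct BoundaryPoint {
    float row;            // vertical image coordinate of the boundary (pixels)
    BoundaryLabel label;  // obstacle type at this boundary pixel
};

// For a W-pixel-wide input image, the network yields W boundary points,
// spanning the frame from the leftmost to the rightmost column.
using FreespaceBoundary = std::vector<BoundaryPoint>;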

The network used by this Freespace Perception sample has been trained on RCB images with moderate augmentation.

This sample consumes an H.264 or RAW video and computes the free space boundary on each frame. The sample can also consume video from cameras.

Sensor Details

The image datasets used to train OpenRoadNet were captured using a View Sekonix camera module (SF3324, SF3325) with the AR0231 RCCB sensor. The camera is mounted high, at the rear-view mirror position. Demo videos are captured at 2.3 MP.

To achieve the best free-space detection performance, NVIDIA recommends adopting a similar camera setup and aligning the video center vertically with the horizon before recording new videos.

Running the Sample

The freespace detection sample, sample_freespace_detection, accepts the following optional parameters. If none are specified, the sample performs detection on the supplied pre-recorded video.

./sample_freespace_detection --input-type=[video|camera]
                             --rig=[path/to/rig/file]
                             --video=[path/to/video]
                             --camera-type=[camera]
                             --camera-group=[a|b|c|d]
                             --camera-index=[0|1|2|3]
                             --slave=[0|1]
                             --maxDistance=[fp_number]

Where:

--input-type=[video|camera]
        Defines if the input is from live camera or from a recorded video.
        Live camera is only supported on the NVIDIA DRIVE platform.
        Default value: video

--rig=[path/to/rig/file]
        Points to the rig file containing camera properties.
        Default value: path/to/data/samples/freespace/rig.json.

--video=[path/to/video]
        Is the absolute or relative path to a RAW or H.264 recording.
        Only applicable if --input-type=video.
        Default value: path/to/data/samples/freespace/video_freespace.h264.

--camera-type=[camera]
        Is the camera type; must be a supported RCCB sensor.
        Only applicable if --input-type=camera.
        Default value: ar0231-rccb-bae-sf3324

--camera-group=[a|b|c|d]
        Is the group to which the camera is connected.
        Only applicable if --input-type=camera.
        Default value: a

--camera-index=[0|1|2|3]
        Indicates the camera index on the given port.
        Default value: 0

--slave=[0|1]
        Setting this parameter to 1 when running the sample on Xavier B allows it to access a camera
        that is being used on Xavier A. Only applicable if --input-type=camera.
        Default value: 0

--maxDistance=[fp_number]
        Defines the maximum distance, in meters, at which the free space boundary can be distinguished.
        Default value: 50.0

Examples

To run the sample on the Linux host (x86)

./sample_freespace_detection --video=<video file.h264> --rig=<calibration file.json>

or

./sample_freespace_detection --video=<video file.raw> --rig=<calibration file.json>

To run the sample on NVIDIA DRIVE platforms with cameras

./sample_freespace_detection --input-type=camera --camera-type=<camera_type> --camera-group=<camera_group> --rig=<calibration file.json>

where <camera_type> is a supported RCCB sensor. See List of cameras supported out of the box for the list of supported cameras for each platform.

Note
The free-space detection sample directly resizes video frames to the network input resolution. Therefore, for best results, use videos with an aspect ratio similar to the demo video's, or set a Region of Interest (ROI) to perform inference on a sub-window of the full frame (see the sketch below).
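One simple way to choose such a sub-window is to crop the largest centered region whose aspect ratio matches the network input, so the subsequent resize does not distort the image. A minimal sketch of that arithmetic follows; the helper name is an illustrative assumption, not part of the DriveWorks API:

#include <cstdint>

struct Roi { int32_t x, y, width, height; };

// Illustrative helper, not the DriveWorks API: compute the largest centered
// ROI of a frame that matches the network input aspect ratio, so that the
// subsequent resize preserves the image's proportions.
Roi centeredRoiForAspect(int32_t frameW, int32_t frameH,
                         int32_t netW, int32_t netH)
{
    const float netAspect = static_cast<float>(netW) / static_cast<float>(netH);
    int32_t roiW = frameW;
    int32_t roiH = static_cast<int32_t>(static_cast<float>(frameW) / netAspect);
    if (roiH > frameH) { // frame is too short: constrain by height instead
        roiH = frameH;
        roiW = static_cast<int32_t>(static_cast<float>(frameH) * netAspect);
    }
    return { (frameW - roiW) / 2, (frameH - roiH) / 2, roiW, roiH };
}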

Output

The free-space detection sample:

  • Creates a window.
  • Displays a video.
  • Overlays polylines for the detected free-space boundary points.
  • Computes boundary points in the car coordinate system, if a valid camera calibration file is provided (see the sketch after this list).
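Conceptually, converting a boundary pixel to the car coordinate system amounts to casting a ray through the pixel with the calibrated camera model and intersecting it with the road surface. The sketch below shows that geometry under a flat-ground assumption with illustrative names; the sample itself relies on the calibrated camera model from the rig file:

#include <array>
#include <optional>

// Illustrative flat-ground unprojection, not the DriveWorks calibration API.
// Car frame assumed here: x forward, y left, z up, with the road at z = 0.
// 'camPosInCar' is the camera origin in the car frame (z = mounting height),
// 'rayInCar' is the viewing ray of a boundary pixel rotated into that frame.
std::optional<std::array<float, 3>>
boundaryPointOnGround(const std::array<float, 3>& camPosInCar,
                      const std::array<float, 3>& rayInCar,
                      float maxDistance /* e.g. the --maxDistance value */)
{
    if (rayInCar[2] >= 0.0f) // ray points up or level: it never hits the road
        return std::nullopt;
    const float t = camPosInCar[2] / -rayInCar[2]; // scale ray down to z = 0
    const std::array<float, 3> p{camPosInCar[0] + t * rayInCar[0],
                                 camPosInCar[1] + t * rayInCar[1], 0.0f};
    if (p[0] < 0.0f || p[0] > maxDistance) // behind the car or out of range
        return std::nullopt;
    return p;
}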

The colors of the polylines represent the type of obstacle that the boundary interfaces with (a minimal color-mapping sketch follows the list):

  • Red: Vehicle
  • Green: Curb
  • Blue: Pedestrian
  • Yellow: Other
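A minimal sketch of how a renderer could map these labels to colors; the RGBA values and names are illustrative assumptions, not the sample's actual rendering code:

#include <array>
#include <cstdint>

enum class BoundaryLabel : uint8_t { Vehicle, Pedestrian, Curb, Other };

// Illustrative label-to-color mapping matching the legend above (RGBA).
constexpr std::array<float, 4> boundaryColor(BoundaryLabel label)
{
    switch (label) {
    case BoundaryLabel::Vehicle:    return {1.0f, 0.0f, 0.0f, 1.0f}; // red
    case BoundaryLabel::Curb:       return {0.0f, 1.0f, 0.0f, 1.0f}; // green
    case BoundaryLabel::Pedestrian: return {0.0f, 0.0f, 1.0f, 1.0f}; // blue
    case BoundaryLabel::Other:      return {1.0f, 1.0f, 0.0f, 1.0f}; // yellow
    }
    return {1.0f, 1.0f, 1.0f, 1.0f}; // fallback for out-of-range values
}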
Figure: Free-Space Detection Sample (sample_freespace_detection.png)

Additional Information

For more information, see Freespace Perception.