DriveWorks SDK Reference | 0.6.67 Release
Drivable collision-free space, i.e. the space that can be immediately reached without collision, provides critical information for navigation in autonomous driving. This free-space sample demonstrates NVIDIA's end-to-end technique for detecting collision-free space in road scenes. The problem is modeled as a deep neural network (FreeSpaceNet) whose input is a three-channel RCB image and whose output is a boundary running across the image from left to right. The boundary separates obstacles from open road space. In addition, each pixel on the boundary is associated with one of four semantic labels:
This FreeSpaceNet sample has been trained using RCB images with moderate augmentation.
This sample streams an H.264 or RAW video and computes the free-space boundary on each frame. The sample can also be operated with live cameras.
The image datasets used to train FreeSpaceNet were captured using a View Sekonix Camera Module (SS3323) with an AR0231 RCCB sensor. The camera is mounted high, at the rear-view mirror position. Demo videos are captured at 2.3 MP.
To achieve the best free-space detection performance, NVIDIA recommends adopting a similar camera setup and aligning the video center vertically with the horizon before recording new videos.
The sample H264 video and camera calibration files are located at:
sdk/data/samples/freespace/
The latency of the sample FreeSpaceNet model:
The command lines for running the sample on Linux:
./sample_freespace_detection --video=<video file.h264> --rig=<calibration file.xml>
or
./sample_freespace_detection --video=<video file.raw> --rig=<calibration file.xml>
The command line for running the sample on NVIDIA DRIVE PX 2 with cameras:
./sample_freespace_detection --input-type=camera --camera-type=<camera_type> --csi-port=<csi_port> --rig=<calibration file.xml>
The free-space detection sample:
The colors of the polylines represent the type of obstacle the boundary interfaces with: