- Note
- SW Release Applicability: This module is available in NVIDIA DRIVE Software releases.
About This Module
This module provides the APIs to initialize, query, and release the NVIDIA proprietary collision-free road space detection network: OpenRoadNet. DriveWorks comes with two OpenRoadNet models: one trained on front-camera data and one on side-camera data. OpenRoadNet detects the freespace area that the ego vehicle can reach without collision. It also detects the type of object (car, person, etc.) on the freespace boundary.
Inputs
Both the OpenRoadNet front and side models consume FP16 planar frames with a resolution of 480x272 pixels from AR0231 cameras (revision >= 4).
- Note
- Resizing the input frame is internally handled as part of the normal workflow.
The OpenRoadNet front model is trained to support the following camera configurations:
- Front camera with a 120° field of view.
- Front camera with a 60° field of view.
- Rear camera with a 60° field of view.
The OpenRoadNet side model is trained to support the following camera configurations:
- Rear right camera with a 120° field of view.
- Rear left camera with a 120° field of view.
Outputs
OpenRoadNet detects the following categories of obstacles on the boundary:
- Vehicles.
- Pedestrians.
- Curbs.
- Undefined boundaries.
- Other objects.
- Note
- All bicycles, cars, trucks, and motorcycles are classified as vehicles.
Free space detector implemented using OpenRoadNet
- Note
- For more information on running inference using OpenRoadNet, please refer to Freespace Perception.
Additional Information
- Warning
- OpenRoadNet DNN limitations:
- It is optimized for daytime, clear-weather data. As a result, its accuracy is limited in dark or rainy conditions.
- It is primarily trained on data collected in the United States, so its accuracy may be reduced in other locales.
- The model is trained to detect freespace boundaries on paved roads and will not perform well on unpaved areas.
Relevant Tutorials
APIs