DriveWorks SDK Reference
3.0.4260 Release
For Test and Development only

MapNet
Note
SW Release Applicability: This module is available in NVIDIA DRIVE Software releases.

About This Module

This module provides the APIs to initialize, query, and release the NVIDIA proprietary landmark detection deep neural network: MapNet.
MapNet provides type information, lane marking detection, and landmark geometry detection.

The data structures include:

  • dwMapNetParams: defines the MapNet model variant with the specific precision and processor optimization for loading.
  • dwMapNetType: defines the type of MapNet model to run: segmentation, regressor, or end-to-end. Segmentation is the default standard model.

Detection Methods

There are three methods of detection: segmentation-based detection, regressor-based detection, and end-to-end detection.

Segmentation-based detection produces pixel-wise classifications of landmark types (solid lane line, dashed lane line, etc.) over the image.
It is currently more stable than regressor-based detection.
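Pixel-wise classification amounts to taking, for each pixel, the class whose score is highest. The sketch below illustrates that per-pixel argmax over class score planes; the tensor layout and class set are assumptions for illustration, not the SDK's internal format.

```c
#include <assert.h>
#include <stddef.h>

/* Per-pixel argmax over numClasses score planes laid out as
 * scores[c * numPixels + p]; writes the winning class index per pixel.
 * Illustrates pixel-wise classification only -- the actual MapNet
 * tensor layout is internal to the SDK. */
void argmaxPerPixel(const float* scores, size_t numClasses, size_t numPixels,
                    int* labels)
{
    for (size_t p = 0; p < numPixels; ++p) {
        int best = 0;
        float bestScore = scores[p]; /* class 0 */
        for (size_t c = 1; c < numClasses; ++c) {
            float s = scores[c * numPixels + p];
            if (s > bestScore) {
                bestScore = s;
                best = (int)c;
            }
        }
        labels[p] = best;
    }
}
```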

Regressor-based detection regresses, for each pixel, the distance to its closest label pixel. It retains richer information on image geometry,
which enables better detection of curved lanes and detection over longer ranges.

End-to-end detection models each lane or landmark as a cubic Bezier curve and regresses its four control points. It is more stable than segmentation-based and regressor-based detection.
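A cubic Bezier curve is fully determined by its four control points; sampling the parameter t over [0, 1] recovers the lane as a polyline. A minimal evaluation sketch (generic Bezier math, not SDK code):

```c
#include <assert.h>
#include <math.h>

typedef struct { float x, y; } Point2f;

/* Evaluate a cubic Bezier curve at parameter t in [0, 1] from its four
 * control points, using the Bernstein form:
 *   B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3
 * The end-to-end head regresses the control points; sampling t densely
 * turns the curve back into a lane polyline. */
Point2f bezierEval(const Point2f cp[4], float t)
{
    float u  = 1.0f - t;
    float b0 = u * u * u;
    float b1 = 3.0f * u * u * t;
    float b2 = 3.0f * u * t * t;
    float b3 = t * t * t;
    Point2f out = {
        b0 * cp[0].x + b1 * cp[1].x + b2 * cp[2].x + b3 * cp[3].x,
        b0 * cp[0].y + b1 * cp[1].y + b2 * cp[2].y + b3 * cp[3].y
    };
    return out;
}
```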

Compared to segmentation-based detection, regressor-based detection detects an entire lane polygon, including the center-line, left edge, and right edge, for richer information.
For the extra detected classes listed below, it also detects points along the road-marking contours and provides a bounding box for each detection.

Inputs

MapNet consumes RCCB frames with a resolution of 960x480 pixels (end-to-end), 960x504 pixels (regressor-based) or 480x240 pixels (segmentation-based) from AR0231 cameras (revision >= 4).
Segmentation-based, regressor-based, and end-to-end methods are trained to support front cameras with 60° and 120° FoV.
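The expected input resolution differs per detection method, as listed above. A small lookup capturing those documented resolutions can be sketched as follows; the enum and function names are illustrative, not SDK identifiers:

```c
#include <assert.h>

/* Expected RCCB input resolution (width x height) for each MapNet
 * detection method, per the module documentation. Names are
 * illustrative stand-ins, not DriveWorks identifiers. */
typedef struct { int width; int height; } Resolution;

typedef enum {
    METHOD_SEGMENTATION,
    METHOD_REGRESSOR,
    METHOD_END_TO_END
} Method;

Resolution mapnetInputResolution(Method m)
{
    switch (m) {
        case METHOD_END_TO_END:   return (Resolution){960, 480};
        case METHOD_REGRESSOR:    return (Resolution){960, 504};
        case METHOD_SEGMENTATION:
        default:                  return (Resolution){480, 240};
    }
}
```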

Outputs

MapNet detects a range of landmark types. The following classes are currently supported by the segmentation, regressor, and end-to-end models:

  • Solid Lane Markings.
  • Dashed Lane Markings.
  • Road Boundary Lane Markings.
  • General Vertical Poles.

The regressor model also supports these extra classes:

  • Crosswalk Markings.
  • Intersection Markings.
  • Crossing Intersection Markings.
  • Road Text and Shape Markings.
  • Roadsign Vertical Pole Markings.

The end-to-end model also supports these extra classes:

  • Intersection Markings.
  • Road Text and Shape Markings.
  • Roadsign Vertical Pole Markings.

MapNet outputs intermediate signals that feed the Landmark Perception pipeline, which returns the following:

  1. dwLaneDetection: struct containing lane detections in the form of image and world space polylines.
  2. dwLandmarkDetection: struct containing landmark detections other than lanes, in the form of image and world space polylines: poles (all methods), crosswalks (regressor only), intersection markings (regressor and end-to-end), and road text and shape markings (regressor and end-to-end). For road text / shape markings and crosswalks, the polylines represent the boundaries of polygons.
Figure: MapNet Outputs (sample_landmark_perception.png)

Colors indicate the following:

  • Orange - Pole Detection
  • Light Blue - Adjacent Left Lane Boundary
  • Red - Current Driving Lane Left Boundary
  • Green - Current Driving Lane Right Boundary
  • Dark Blue - Adjacent Right Lane Boundary

Letters indicate the following:

  • S - Solid Lane Line Type
  • D - Dashed Lane Line Type
  • B - Road Boundary Line Type
  • P - Vertical Pole Line Type

Relevant Tutorials

APIs