The workflow of localization is as follows:
Below is the general structure for usage; for the full implementation, refer to the HD Maps Localization Sample.
Before the Localization module can be used, all prerequisite modules need to be initialized. Prerequisite modules include Maps, Rig Configuration, Landmark Perception, and Egomotion.
The following snippet initializes the Maps.
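Since the snippet is not reproduced here, the following is a minimal C++ sketch of the map initialization step. The `MapHandle` type and `initMaps` function are placeholders, not the actual SDK API; consult the SDK reference for the real names and signatures.

```cpp
#include <stdexcept>
#include <string>

// Placeholder for the real map handle type; the actual SDK uses its own
// opaque handle and initialization call with different names.
struct MapHandle {
    std::string path;
    bool loaded = false;
};

// Load the HD map from a file path. A real call would parse the map
// content; here we only validate the argument and mark the handle loaded.
inline MapHandle initMaps(const std::string& mapFilePath) {
    if (mapFilePath.empty()) {
        throw std::invalid_argument("map file path must not be empty");
    }
    return MapHandle{mapFilePath, true};
}
```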
The snippet below initializes the Rig Configuration module.
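As a stand-in for the missing snippet, here is a hedged sketch of rig-configuration loading. `RigConfiguration`, `initRigConfiguration`, and the sensor-count parameter are illustrative placeholders; the real module parses a rig file describing each sensor's mounting and calibration.

```cpp
#include <stdexcept>
#include <string>

// Placeholder rig-configuration handle; the real module parses a rig file
// describing each sensor's mounting position, orientation, and calibration.
struct RigConfiguration {
    std::string rigFile;
    int sensorCount = 0;
    bool valid = false;
};

// Hypothetical loader: validates its inputs and returns an initialized handle.
inline RigConfiguration initRigConfiguration(const std::string& rigFile,
                                             int expectedSensors) {
    if (rigFile.empty() || expectedSensors <= 0) {
        throw std::invalid_argument("invalid rig file or sensor count");
    }
    return RigConfiguration{rigFile, expectedSensors, true};
}
```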
The next snippet initializes the Landmark Perception module.
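A minimal sketch of this initialization follows, with placeholder types (`LandmarkPerceptionParams`, `initLandmarkPerception`) rather than the real API. It encodes which detector classes localization consumes.

```cpp
// Placeholder configuration for the landmark detector; the real module
// has its own parameter struct and initialization call.
struct LandmarkPerceptionParams {
    bool detectLaneLines = true;      // required by localization
    bool detectRoadBoundaries = true; // required by localization
    bool detectPoles = true;          // optional, improves accuracy
};

struct LandmarkPerception {
    LandmarkPerceptionParams params;
    bool initialized = false;
};

inline LandmarkPerception initLandmarkPerception(
        const LandmarkPerceptionParams& params) {
    // A real initialization would load the detection networks here.
    return LandmarkPerception{params, true};
}
```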
The Landmark Perception module detects lane lines, road boundaries, and roadside poles. Users who wish to include road sign and traffic light detections should also initialize the Object Perception module. It can be initialized in a similar manner to the Landmark Perception module; a detailed description is provided in the Object Detector Workflow.
The snippet below initializes the Egomotion module with example parameters. Both the relative and global (GNSS dependent) variants are required.
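In place of the missing snippet, the sketch below shows the shape of this step with hypothetical parameter structs and initializers; the real modules take detailed sensor and motion-model parameters under their own names.

```cpp
#include <stdexcept>

// Placeholder parameter sets; the real modules take detailed sensor and
// motion-model parameters (IMU noise, wheel odometry, GNSS settings, ...).
struct RelativeEgomotionParams { double imuRateHz = 100.0; };
struct GlobalEgomotionParams  { double gnssRateHz = 10.0; };

struct RelativeEgomotion { bool ready = false; };
struct GlobalEgomotion   { bool ready = false; };

// Both variants are required by localization: the relative module tracks
// frame-to-frame motion, while the global (GNSS-dependent) module tracks
// the WGS-84 position and ENU orientation.
inline RelativeEgomotion initRelativeEgomotion(const RelativeEgomotionParams& p) {
    if (p.imuRateHz <= 0.0) throw std::invalid_argument("bad IMU rate");
    return RelativeEgomotion{true};
}

inline GlobalEgomotion initGlobalEgomotion(const GlobalEgomotionParams& p) {
    if (p.gnssRateHz <= 0.0) throw std::invalid_argument("bad GNSS rate");
    return GlobalEgomotion{true};
}
```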
Once all dependencies are set up, the Localization module can be initialized.
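The dependency relationship can be sketched as follows. All handle types and the `initLocalization` function are placeholders: the point is only that localization starts after every prerequisite module reports a valid state.

```cpp
// Placeholder handles for the already-initialized dependencies.
struct MapHandle          { bool loaded = true; };
struct RigConfiguration   { bool valid = true; };
struct LandmarkPerception { bool initialized = true; };
struct RelativeEgomotion  { bool ready = true; };
struct GlobalEgomotion    { bool ready = true; };

struct Localization { bool initialized = false; };

// Hypothetical initializer: localization only starts once every
// prerequisite module is in a valid state.
inline Localization initLocalization(const MapHandle& map,
                                     const RigConfiguration& rig,
                                     const LandmarkPerception& perception,
                                     const RelativeEgomotion& relEgo,
                                     const GlobalEgomotion& globalEgo) {
    bool depsReady = map.loaded && rig.valid && perception.initialized &&
                     relEgo.ready && globalEgo.ready;
    return Localization{depsReady};
}
```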
Once the Localization module is initialized, the HD map can be changed.
Localization should be called on a per-camera-frame basis. Each call requires visual features, a WGS-84 position, an ENU orientation, and a relative transformation from the previous camera frame timestamp to the current one. Among the visual features, lane lines and road boundaries are required, while roadside poles, traffic signs, and traffic lights are optional inputs that improve localization accuracy. Position and orientation inputs must be measured at, or interpolated to, the exact timestamp of the camera frame; the Egomotion module offers this functionality.
The snippet below shows how to detect lanes, road boundaries, and poles using the Landmark Perception module. Although not shown here, traffic signs and traffic lights can be detected using the Object Perception module.
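A hedged per-frame sketch follows, since the snippet itself is not reproduced here. `CameraFrame`, `Polyline`, and `detectLandmarks` are illustrative placeholders for the real frame, detection, and module types.

```cpp
#include <vector>

// Placeholder types: a camera frame and the detection lists produced per frame.
struct CameraFrame { int id = 0; };
struct Polyline { std::vector<float> points; };

struct LandmarkDetections {
    std::vector<Polyline> laneLines;      // required input to localization
    std::vector<Polyline> roadBoundaries; // required input to localization
    std::vector<Polyline> poles;          // optional, improves accuracy
};

// Hypothetical per-frame detection call; a real module runs its detection
// networks on the frame and fills these lists.
inline LandmarkDetections detectLandmarks(const CameraFrame& /*frame*/) {
    LandmarkDetections det;
    // Stand-in results; a real detector would populate these from the image.
    det.laneLines.push_back(Polyline{{0.0f, 1.0f}});
    det.roadBoundaries.push_back(Polyline{{0.0f, 2.0f}});
    return det;
}
```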
The Egomotion modules track global and relative position and orientation. Every time new GPS, IMU, or CAN data is available, the modules should be updated. The snippet below illustrates the steps needed to update the Egomotion objects with new measurements. Refer to the Relative Egomotion Workflow and Global Egomotion Workflow for more details.
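The update pattern can be sketched as below. The measurement structs and `addImu`/`addCan`/`addGnss` helpers are placeholders; the real API has its own timestamped measurement types and update calls per sensor.

```cpp
#include <cstdint>

// Placeholder measurement and module types; the real API has dedicated
// structs for IMU, GNSS, and CAN data, each carrying a timestamp.
struct ImuSample  { int64_t timestampUs; double gyro[3]; double accel[3]; };
struct GnssSample { int64_t timestampUs; double lat, lon, alt; };
struct CanSample  { int64_t timestampUs; double speedMps; };

struct RelativeEgomotion { int64_t lastUpdateUs = 0; };
struct GlobalEgomotion   { int64_t lastUpdateUs = 0; };

// Feed every new measurement into the modules as soon as it arrives so the
// internal motion estimates stay current (placeholder update logic).
inline void addImu(RelativeEgomotion& ego, const ImuSample& s) {
    ego.lastUpdateUs = s.timestampUs;
}
inline void addCan(RelativeEgomotion& ego, const CanSample& s) {
    ego.lastUpdateUs = s.timestampUs;
}
inline void addGnss(GlobalEgomotion& ego, const GnssSample& s) {
    ego.lastUpdateUs = s.timestampUs;
}
```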
When a new camera frame is captured, the user queries the global Egomotion object for the vehicle's WGS-84 position and ENU orientation. Additionally, the user queries the relative Egomotion object for the relative motion. When querying any of the Egomotion modules, it is important to ensure that the reported estimates are valid. Depending on the internal state of an Egomotion object, estimates may not be available. The Localization module assumes all inputs are valid, and therefore, invalid measurements should not be passed in.
The snippet below highlights the steps necessary for obtaining correct measurements from the Egomotion modules:
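A hedged sketch of that validity-checked query pattern follows. The estimate types and `queryGlobal`/`queryRelative` helpers are placeholders; the key point, taken from the text above, is that localization must only receive valid measurements.

```cpp
#include <cstdint>
#include <optional>

// Placeholder estimate types: WGS-84 position, ENU orientation, and the
// relative transform between the previous and current camera frames.
struct GlobalEstimate    { double lat, lon, alt; double enuQuat[4]; };
struct RelativeTransform { double t[3]; double q[4]; };

struct GlobalEgomotion   { bool hasEstimate = false; GlobalEstimate est{}; };
struct RelativeEgomotion { bool hasEstimate = false; RelativeTransform tf{}; };

// Query helpers that return an empty optional when no valid estimate is
// available at the requested timestamp (placeholder logic); the caller
// skips localization for the frame in that case.
inline std::optional<GlobalEstimate>
queryGlobal(const GlobalEgomotion& ego, int64_t /*frameTimestampUs*/) {
    if (!ego.hasEstimate) return std::nullopt;
    return ego.est;
}

inline std::optional<RelativeTransform>
queryRelative(const RelativeEgomotion& ego, int64_t /*frameTimestampUs*/) {
    if (!ego.hasEstimate) return std::nullopt;
    return ego.tf;
}
```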
Once the input data is ready, the localization algorithm can be invoked. Because a Localization module instance maintains internal state (previous positions and detections), it should be invoked repeatedly over a sequence of data frames.
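The per-sequence call pattern can be sketched as below; `FrameInputs`, `process`, and `runSequence` are illustrative placeholders, not the real API.

```cpp
#include <vector>

// Placeholder inputs and module; the real call takes the detections and
// egomotion measurements described above.
struct FrameInputs { int frameId; bool measurementsValid; };

struct Localization {
    int processedFrames = 0; // internal state accumulates across calls

    // Invoke once per camera frame; skip frames whose inputs are invalid.
    bool process(const FrameInputs& in) {
        if (!in.measurementsValid) return false;
        ++processedFrames;
        return true;
    }
};

// Drive localization over a whole sequence, since the module is stateful.
inline int runSequence(Localization& loc, const std::vector<FrameInputs>& seq) {
    int accepted = 0;
    for (const FrameInputs& in : seq) {
        if (loc.process(in)) ++accepted;
    }
    return accepted;
}
```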
If a new map needs to be set, the setMap function can be invoked. This function assumes all road segment IDs of the new map match those of the old map.
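A sketch of that assumption follows, using placeholder types; the real setMap call operates on the SDK's own handles and does not necessarily validate IDs this way.

```cpp
#include <set>
#include <stdexcept>

// Placeholder map and module types.
struct MapHandle { std::set<int> roadSegmentIds; };

struct Localization {
    MapHandle currentMap;

    // Hypothetical setMap: rejects a map whose road segment IDs differ,
    // mirroring the assumption stated in the text above.
    void setMap(const MapHandle& newMap) {
        if (newMap.roadSegmentIds != currentMap.roadSegmentIds) {
            throw std::invalid_argument(
                "road segment IDs of the new map must match the old map");
        }
        currentMap = newMap;
    }
};
```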
Upon termination, all allocated resources must be freed:
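The teardown can be sketched as below with a generic placeholder handle; the real SDK frees each module through its own release function. Releasing in reverse order of initialization is a safe convention because later modules depend on earlier ones.

```cpp
// Placeholder handle with an explicit release call (stand-in for the
// per-module release functions of the real SDK).
struct Handle { bool released = false; };

inline void release(Handle& h) { h.released = true; }

// Release everything that was initialized: localization first, then its
// dependencies (hypothetical ordering sketch).
inline void releaseAll(Handle& localization, Handle& egomotion,
                       Handle& perception, Handle& rig, Handle& map) {
    release(localization);
    release(egomotion);
    release(perception);
    release(rig);
    release(map);
}
```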
This workflow is demonstrated in the following sample: HD Maps Localization Sample