DriveWorks SDK Reference
3.5.78 Release
For Test and Development only

Camera Localization
SW Release Applicability: This tutorial is applicable to modules in NVIDIA DRIVE Software releases.


The workflow of localization is as follows:

  • Initialize the Localization module and its dependencies, including the Egomotion module.
  • While precise position is needed:
    • Update the Egomotion module as raw GPS/IMU/CAN measurements are available.
    • Obtain and prepare sensor data:
      • Perform camera-based detections (lane detection, pole detection, sign detection, etc.).
      • Obtain an approximate rig position and orientation in absolute WGS84 and ENU coordinates using the Egomotion module.
      • Obtain the relative rig pose transformation between the previous localization timestamp and the current camera timestamp using the Egomotion module.
    • Invoke the localization algorithm and obtain localized results.
  • When done, release the acquired resources.

Below is the general structure for usage; for the full implementation, refer to the HD Maps Localization Sample.


Initialization

Before the Localization module can be used, all prerequisite modules need to be initialized: Maps, Rig Configuration, Landmark Perception, and Egomotion.

The following snippet initializes the Maps module.

dwMaps_initialize(&m_map, mapFilePath, m_context);

The snippet below initializes the Rig Configuration module.

dwRig_initializeFromFile(&m_rigConfig, m_context, rigFile);

The next snippet initializes the Landmark Perception module.

dwMapNetParams mapNetParams{};
dwMapNet_initialize(&m_mapNet, &mapNetParams, m_context);
dwLandmarkDetectorParams landmarkParams{};
dwLandmarkDetector_initializeDefaultParams(&landmarkParams, m_cameraWidth, m_cameraHeight);
dwLandmarkDetector_initializeFromMapNet(&m_landmarkDetector, m_mapNet, landmarkParams, m_cameraWidth, m_cameraHeight, m_context);

The Landmark Perception module detects lane lines, road boundaries, and roadside poles. For users who wish to include road sign and traffic light detections, the Object Perception module should also be initialized. It can be initialized in a similar manner to the Landmark Perception module; a detailed description is provided in Object Detector Workflow.

The snippet below initializes the Egomotion module with example parameters. Both the relative and the global (GNSS-dependent) variants are required.

// relative egomotion parameters
dwEgomotionParameters relativeEgomotionParams = {};
dwEgomotion_initParamsFromRig(&relativeEgomotionParams, m_rigConfig, m_imuSensorName, nullptr, m_canSensorName);
relativeEgomotionParams.motionModel = DW_EGOMOTION_IMU_ODOMETRY;
relativeEgomotionParams.automaticUpdate = true; // update automatically
relativeEgomotionParams.historySize = 1000;
dwEgomotion_initialize(&m_egomotion, &relativeEgomotionParams, m_context);
// global egomotion parameters
dwGlobalEgomotionParameters globalEgomotionParams = {};
dwGlobalEgomotion_initParamsFromRig(&globalEgomotionParams, m_rigConfig, m_gpsSensorName);
dwGlobalEgomotion_initialize(&m_globalEgomotion, &globalEgomotionParams, m_context);

Once all dependencies are set up, the Localization module can be initialized.

dwLocalizationParameters locParams{};
dwLocalization_initParamsFromRig(&locParams, m_rigConfig,
                                 m_numCameras, m_cameraIndices);
dwLocalization_initialize(&m_localizer, m_map, &locParams, m_context);

Once the Localization module is initialized, the HD map can be changed.

dwLocalization_setMap(m_map, m_localizer);

Data Reading and Preparation

Localization should be invoked on a per-camera-frame basis. Each call requires visual features, a WGS84 position, an ENU orientation, and a relative transformation from the previous camera frame timestamp to the current camera frame timestamp. Regarding visual features, lanes and road boundaries are required, while roadside poles, traffic signs, and traffic lights are optional inputs that improve localization accuracy. Position and orientation inputs must be measured at, or interpolated to, the exact timestamp of the camera frame; the Egomotion module offers this functionality.

The snippet below shows how to detect lanes, road boundaries, and poles using the Landmark Perception module. Although not shown here, traffic signs and traffic lights can be detected using the Object Perception module.

dwRoadmarkDetection roadmarks{};
// lanes and poles are declared analogously to roadmarks
dwLandmarkDetector_detectLandmarks(&lanes, &poles, &roadmarks, cameraFrame, m_landmarkDetector);

The Egomotion modules track global and relative position and orientation. Every time new GPS, IMU, or CAN data is available, the modules should be updated. The snippet below illustrates the steps needed to update the Egomotion objects with new measurements. Refer to Relative Egomotion Workflow and Global Egomotion Workflow for more details.

// CAN and IMU data are required for relative egomotion
if (hasCANMessage())
{
    ... // Parse the CAN message and get the vehicle state
    dwEgomotion_addVehicleState(&currVehicleState, m_egomotion);
}
if (hasIMUMeasurement())
    dwEgomotion_addIMUMeasurement(&imuFrame, m_egomotion);

// GPS data and the relative egomotion state are required for global egomotion
if (hasGPSMeasurement())
    dwGlobalEgomotion_addGPSMeasurement(&gpsFrame, m_globalEgomotion);

// Update the global egomotion module with the latest relative egomotion state
dwEgomotionResult state = {};
dwEgomotionUncertainty uncertainty = {};
if (dwEgomotion_getEstimation(&state, m_egomotion) == DW_SUCCESS &&
    dwEgomotion_getUncertainty(&uncertainty, m_egomotion) == DW_SUCCESS)
{
    dwGlobalEgomotion_addRelativeMotion(&state, &uncertainty, m_globalEgomotion);
}

When a new camera frame is captured, the user queries the global Egomotion object for the vehicle's WGS84 position and ENU orientation, and the relative Egomotion object for the relative motion. When querying either Egomotion module, it is important to ensure that the reported estimates are valid: depending on the internal state of an Egomotion object, estimates may not be available. The Localization module assumes all inputs are valid; therefore, invalid measurements must not be passed in.

The snippet below highlights the steps necessary for obtaining correct measurements from the Egomotion modules:

// Obtain the relative transform from egomotion
dwTransformation3f currToPrev = {};
dwStatus relativeEgoStatus = dwEgomotion_computeRelativeTransformation(&currToPrev, nullptr,
                                                                       cameraTimestamp, m_prevCameraTimestamp,
                                                                       m_egomotion);
// Obtain the global orientation and position from egomotion
dwGlobalEgomotionResult globalEgoResult = {};
dwGlobalEgomotionUncertainty globalEgoUncertainty = {};
dwStatus globalEgoStatus = dwGlobalEgomotion_computeEstimate(&globalEgoResult, &globalEgoUncertainty,
                                                             cameraTimestamp, m_globalEgomotion);
bool relativeEgomotionValid = (relativeEgoStatus == DW_SUCCESS);
bool globalEgomotionValid   = (globalEgoStatus == DW_SUCCESS) &&
                              globalEgoResult.validPosition && globalEgoResult.validOrientation;
In cases where the estimates from Egomotion are invalid or unavailable, it is best to clear the internal state of the Localization module by calling dwLocalization_reset() and to resume localization only when valid global and relative estimates are available.


Invoking Localization

Once the input data is ready, the localization algorithm can be invoked. Because a Localization module instance maintains an internal state of previous positions and detections, the module should be invoked repeatedly over a sequence of data frames. The excerpt below shows only the global pose and uncertainty arguments of the localization call; refer to the HD Maps Localization Sample for the complete invocation.

    &globalEgoResult.position, &globalEgoResult.orientation,
    &globalEgoUncertainty.position.covariance, &globalEgoUncertainty.orientation.covariance,

If a new map needs to be set, the setMap function can be invoked. This function assumes that all road segment IDs of the new map match those of the old map.

dwLocalization_setMap(newMap, m_localizer);

The setMap function does not delete the old map. To avoid memory leaks, users should release the old map if they do not plan to use it again.


Release

Upon termination, all allocated resources must be freed, in reverse order of initialization:

dwLocalization_release(m_localizer);
dwLandmarkDetector_release(m_landmarkDetector);
dwMapNet_release(m_mapNet);
dwGlobalEgomotion_release(m_globalEgomotion);
dwEgomotion_release(m_egomotion);
dwRig_release(m_rigConfig);
dwMaps_release(m_map);

This workflow is demonstrated in the following sample: HD Maps Localization Sample.