Localization

The Localization module provides the location of a robot within a given map. Accurate localization is essential for planning and navigation.

Currently, this module only supports 2D lidar-scan localization.

To use the whole localization package, use the subgraph file sdk/extensions/localization/graphs/localization.subgraph.yaml. Simply set the flatscan receiver (rx_flatscan) of the subgraph, and connect your localization result receiver to the localization result transmitter (tx_localization_result).

The Global Localization module bootstraps the localization process without an initial guess pose. It uses a grid search algorithm, which samples the map based on the input grid size and exhaustively searches each grid cell and each angle within it. The GPU is used to parallelize the search over the samples, and ray tracing is used to evaluate each sample.

  • Inputs: Flatscan, 2D Occupancy Map

  • Output: Localization Result
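The grid search described above can be sketched in a few lines. This is an illustrative Python sketch, not the actual GPU implementation: the cell list, angle list, and error function (`score_pose`) are simplified stand-ins for the real map sampling and ray-tracing evaluation.

```python
import itertools
import math

def grid_search_localize(free_cells, angles, score_pose):
    """Exhaustively score every (x, y, yaw) sample and return the best one.

    free_cells: iterable of (x, y) sample points taken from the map at
                sample_distance spacing (hypothetical input).
    angles:     iterable of candidate yaw values in radians.
    score_pose: callable (x, y, yaw) -> error; lower is better. In the real
                module this is the GPU ray-tracing match against the flatscan.
    """
    best_pose, best_error = None, math.inf
    # The real implementation sends these samples to the GPU in batches
    # (batch_size); here we simply loop serially.
    for (x, y), yaw in itertools.product(free_cells, angles):
        error = score_pose(x, y, yaw)
        if error < best_error:
            best_pose, best_error = (x, y, yaw), error
    return best_pose, best_error

# Toy usage: a 3x3 grid of cells, 8 yaw candidates, and an error that is
# just the distance to a hidden ground-truth pose.
hidden = (1.0, 2.0, 0.5)
cells = [(float(x), float(y)) for x in range(3) for y in range(3)]
angles = [i * 2 * math.pi / 8 for i in range(8)]
score = lambda x, y, yaw: math.hypot(x - hidden[0], y - hidden[1]) + abs(yaw - hidden[2])
pose, err = grid_search_localize(cells, angles, score)
```

The search returns the sampled pose closest to the hidden pose; the yaw is only as accurate as the angular sampling, which is why num_beams_gpu (which drives the yaw resolution) matters below.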

To use the Global Localization module, use the subgraph file sdk/extensions/localization/graphs/global_localization.subgraph.yaml.

Simply set the flatscan receiver (rx_flatscan) and reset signal receiver (rx_reset_signal) of the subgraph interface, and connect the localization result transmitter (tx_localization_result) to your own receiver. Whenever the reset signal receiver receives an entity, it resets the grid search localizer to its initial state so it can run global localization again.

The following are important parameters in the module:

  • num_beams_gpu (default: 512): The GPU-accelerated scan-and-match function can only handle a certain number of beams per range scan. The allowed values are {32, 64, 128, 256, 512}. If the number of beams in the range scan does not match this number, a subset of beams is taken. Since yaw is shifted based on the beam direction to reduce the cost of ray tracing, this number also affects the output yaw resolution; a smaller number results in a sparser, less accurate yaw estimate.

  • batch_size (default: 512): The number of concurrent pose samples sent as a batch to the GPU kernel. Choose a value that matches the GPU memory of the robot platform.

  • sample_distance (default: 0.1): The distance between sample points in meters. A smaller number will result in more sample poses being incorporated, leading to higher accuracy and lower performance.

  • patch_size (default: [512, 512]): The size, in occupancy map rows/columns, of each patch used for tracing.

  • max_output_error (default: 0.45): The maximum output error allowed for the best sample. If the output error is larger than this threshold, localization is considered to have failed.
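The roles of num_beams_gpu and max_output_error can be illustrated with a small sketch. These helpers are hypothetical, not SDK code: they show how a scan might be subsampled down to an allowed beam count, and how the best sample's error is compared against the threshold.

```python
ALLOWED_BEAM_COUNTS = (32, 64, 128, 256, 512)

def subsample_beams(ranges, num_beams_gpu=512):
    """Take an evenly spaced subset of beams so the count matches the GPU kernel."""
    if num_beams_gpu not in ALLOWED_BEAM_COUNTS:
        raise ValueError(f"num_beams_gpu must be one of {ALLOWED_BEAM_COUNTS}")
    if len(ranges) <= num_beams_gpu:
        return list(ranges)
    step = len(ranges) / num_beams_gpu
    return [ranges[int(i * step)] for i in range(num_beams_gpu)]

def localization_succeeded(best_error, max_output_error=0.45):
    """Mirror the failure check: reject the result if the best error is too large."""
    return best_error <= max_output_error

scan = list(range(1000))             # a fake 1000-beam scan
subset = subsample_beams(scan, 256)  # reduced to a supported beam count
```

With fewer beams the GPU kernel does less work per sample, but the yaw sampling becomes coarser, matching the tradeoff described above.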

Localization Widget

The Localization Widget is an Isaac Sight module that can be used to reseed (bootstrap) localization.

Global localization can fail to correctly localize the robot when multiple similar environments exist in the map (e.g. office cubicles, meeting rooms). There is currently no automated solution to this issue, so you will need to provide user input as an initial guess to assist localization.

Follow these steps to perform manual global localization with the Localization Widget:

  1. Open Isaac Sight at http://localhost:3000/.

  2. Open the Localization widget.

localization_widget_sidebar.png


  3. Select which localizer to operate.

localization_widget.png


  4. Click the Stop Localizer button to stop the robot from moving.

localization_stop_localization.png


  5. Click the Automatic Reseed button to automatically reseed the localization. If the robot still fails to localize, move it around. Alternatively, you can perform a manual reseed:

    1. Click the Reset Seed Frame button in the Localization widget.

      localization_seed_frame.png


    2. Drag the localization seed frame to the location where the robot actually is.

      1. Drag the center of the icon to translate the frame.

      2. Drag the edge of the icon to rotate the frame.

    3. Click the Manual Reseed button to reseed the localizer.

The Continuous Localization module keeps track of the current robot localization pose against the map. It uses an initial guess pose from the Global Localization module and the relative pose from the odometry module to “continuously” localize the current scan. It uses a particle filter and ray tracing as its core algorithms.

The Continuous Localization module can also detect failure cases by checking whether the standard deviation of the particles is too large; if so, it considers that frame failed. In total, N consecutive failure frames (where N is set by the num_max_failure_frames parameter) are required to consider the final localization failed, at which point the module sends a reset signal to global localization to request a bootstrap again.

To use the continuous localizer, use the subgraph file sdk/extensions/localization/graphs/continuous_localization.subgraph.yaml.

Simply set the flatscan receiver (rx_flatscan), connect rx_global_localization_result to the global localization result transmitter, and connect the reset signal transmitter (tx_reset_signal) to global localization’s reset signal receiver (rx_reset_signal).

  • Inputs: Flatscan, Odometry Relative Transform, 2D Occupancy Map

  • Output: Localization Result

The following are important parameters in the module:

  • absolute_predict_sigma (default: [0.04, 0.04, 0.0872] ([X, Y, Yaw]) ): A fixed sigma added to each update of the particles. Set this to a larger number when odometry is not reliable.

  • initial_sigma (default: [0.3, 0.3, 0.349066] ([X, Y, Yaw]) ): The initial sigma for the particles that are sampled around the global localization result. Set this to a larger number when global localization is not accurate.

  • max_particle_std_dev (default: 0.2): The threshold of the standard deviation for particles. If this threshold is exceeded, this frame is considered a failure frame.

  • num_max_failure_frames (default: 10): The total number of consecutive failure frames required to conclude that localization is totally lost. The module will then request bootstrap from the Global Localization module.
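The failure-detection logic driven by max_particle_std_dev and num_max_failure_frames can be sketched as a simple counter. This is an illustrative Python sketch under assumed semantics (the class name FailureDetector is hypothetical), not the SDK implementation.

```python
class FailureDetector:
    """Counts consecutive bad frames and signals when localization is lost.

    A frame is "bad" when the particle standard deviation exceeds
    max_particle_std_dev; after num_max_failure_frames consecutive bad
    frames, a reset (bootstrap) is requested from global localization.
    """

    def __init__(self, max_particle_std_dev=0.2, num_max_failure_frames=10):
        self.max_particle_std_dev = max_particle_std_dev
        self.num_max_failure_frames = num_max_failure_frames
        self.consecutive_failures = 0

    def update(self, particle_std_dev):
        """Process one frame; returns True when a reset should be requested."""
        if particle_std_dev > self.max_particle_std_dev:
            self.consecutive_failures += 1
        else:
            self.consecutive_failures = 0  # any good frame resets the streak
        if self.consecutive_failures >= self.num_max_failure_frames:
            self.consecutive_failures = 0  # bootstrap requested; start over
            return True
        return False

# Toy usage: with a threshold of 0.2 and a limit of 3 frames, only the
# third consecutive bad frame triggers a reset.
detector = FailureDetector(max_particle_std_dev=0.2, num_max_failure_frames=3)
results = [detector.update(s) for s in [0.3, 0.3, 0.1, 0.3, 0.3, 0.3]]
```

Note how the single good frame (0.1) in the middle resets the streak, so only truly consecutive failures accumulate toward a reset.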

The localization result is the output of both global and continuous localization. It contains the localization pose and localization health. Before using the localization result, the robot first needs to check if the localization health is good.


// The pose of the localization result.
struct LocalizationPose {
  // The actual pose in 3d.
  ::isaac::Pose3d pose;
  // The uid of the lhs pose frame.
  PoseFrameUid lhs_frame_uid;
  // The uid of the rhs pose frame.
  PoseFrameUid rhs_frame_uid;
};

// Health of the localization result
struct LocalizationHealth {
  // Whether the localization is good or not.
  // If localization is not good then one should not use the localization result.
  bool is_localization_good = false;
  // The confidence is a value between 0.0 to 1.0, where 1.0 means most confident
  // 0.0 means least confident.
  double confidence = 0.0;
};

// Stores the message for the localization result
struct LocalizationResultMessageParts {
  // The message entity
  gxf::Entity entity;
  // The pose of the localization result.
  gxf::Handle<LocalizationPose> localization_pose;
  // The health of the localization result.
  gxf::Handle<LocalizationHealth> localization_health;
  // Timestamp for this message.
  gxf::Handle<gxf::Timestamp> timestamp;
};
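Before consuming the pose, a receiver should gate on the health flag, as noted above. A minimal Python analog of that check (the dataclass here is a hypothetical stand-in for the C++ LocalizationHealth struct):

```python
from dataclasses import dataclass

@dataclass
class LocalizationHealth:
    is_localization_good: bool = False
    confidence: float = 0.0  # 0.0 = least confident, 1.0 = most confident

def try_use_result(pose, health: LocalizationHealth):
    """Return the pose only when localization is healthy, otherwise None."""
    if not health.is_localization_good:
        return None  # do not use the pose when localization is bad
    return pose
```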

Localization may perform differently on different maps, so some parameter tuning may be required for each map.

Global Localization: max_output_error

Warning messages like the following may indicate that global localization is not confident that it localized to the right location:


Best error: 0.499184 larger than threshold 0.350000, maybe outside the map?, re-initializing

You can try changing the global localization parameter max_output_error to a larger value (e.g. 0.5 in the above example) to determine whether global localization is indeed localized correctly. Note that increasing this threshold may cause global localization to be overconfident and localize to the wrong location.

To change this parameter in the Navigation Stack app, append the below parameter override in the command line:


-p robot\|navigation_stack\|global_localization.global_localization/grid_search_localizer/max_output_error=0.5


Global Localization: num_beams_gpu

This parameter controls how many beams of lidar scan you can use to compute the error for each sample in a grid search; it also controls the angular resolution of the search grid. The currently allowed values are 32, 64, 128, 256, 512, and the total run time scales linearly with the parameter value.

For example, the default value is currently 512. If you change this parameter to 256, the global localization run time will be half of the original, but the angular accuracy also decreases by half, and global localization may bootstrap to the wrong location.

We recommend experimenting with this parameter based on your use case.

To change this parameter in the Navigation Stack app, append the below parameter override in the command line:


-p robot\|navigation_stack\|global_localization.global_localization/grid_search_localizer/num_beams_gpu=256


Continuous Localization: max_particle_std_dev

This parameter is a threshold on the final standard deviation of the particles after localization. If this threshold is exceeded, the frame is considered a failure frame. After N consecutive failure frames, where N is larger than num_max_failure_frames, the Navigation Stack app concludes that continuous localization has failed. In this case, you will see a warning message:


Detected 10 number of consecutive frames that has bad localization results, resetting

This means continuous localization has detected a failure; however, there can be false positives in failure detection. Depending on the lidar and map quality, the particle standard deviation may be larger, in which case you may want to increase the max_particle_std_dev threshold so it does not produce false-positive failures.

To change this parameter in the Navigation Stack app, append the below parameter override in the command line:


-p robot\|navigation_stack\|localization.continuous_localization/particle_filter_localization/max_particle_std_dev=0.3


Continuous Localization: num_particles

This parameter controls how many particles are used during localization. The larger the number, the more accurate the localization result; however, the run time also increases linearly with this parameter, so choose a value that matches your accuracy and runtime requirements for localization.

To change this parameter in the Navigation Stack app, append the below parameter override in the command line:


-p robot\|navigation_stack\|localization.continuous_localization/particle_filter_localization/num_particles=200


Localization in a Partial Map

You can change the occupancy threshold of localization to improve its performance when an incomplete map is used. Set the threshold below the pixel value used for unknown space so that unknown points are treated as non-occupied.

Note that this threshold is used to compute the binary occupancy map from the original occupancy image. Pixels with a value smaller than or equal to the threshold become occupied; pixels with a value higher than the threshold become unoccupied.

The following example illustrates how this threshold operates:

  • Assume the input occupancy image has the values 0 (occupied), 125 (unknown), and 255 (unoccupied).

  • Thresholding at 130 yields 0 (occupied), 0 (occupied), 255 (unoccupied): unknown becomes occupied.

  • Thresholding at 120 yields 0 (occupied), 255 (unoccupied), 255 (unoccupied): unknown becomes unoccupied.
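The thresholding rule can be written directly. A small Python sketch (plain lists stand in for the real occupancy image type):

```python
def binarize_occupancy(image, threshold):
    """Pixels <= threshold become occupied (0); pixels > threshold become unoccupied (255)."""
    return [0 if pixel <= threshold else 255 for pixel in image]

image = [0, 125, 255]                  # occupied, unknown, unoccupied
high = binarize_occupancy(image, 130)  # unknown -> occupied
low = binarize_occupancy(image, 120)   # unknown -> unoccupied
```

For a partial map, the lower threshold (120) keeps unknown regions out of the occupied set, so scans that hit unmapped areas do not penalize the match.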

The following argument sets the occupancy threshold to 120 in the Navigation Stack app:


-p robot\|navigation_stack\|localization_map_provider/occupancy_2d_map_provider/threshold=120


There are two tools for debugging localization: one is used in live testing; the other is used when replaying POD files.

Live Testing

To use the Live Testing tool, fetch the latest localization/live_testing app from the Docker registry, then run the tool using the Docker image:


ssh nvidia@<ROBOT IP>
docker run -it --gpus all --rm --network=host --privileged \
  -v /dev:/dev -v /sys:/sys -v /tmp:/tmp -v /mnt/nova_ssd:/mnt/nova_ssd -v $HOME/.aws:/etc/aws \
  -e AWS_CONFIG_FILE=/etc/aws/config \
  -e AWS_SHARED_CREDENTIALS_FILE=/etc/aws/credentials \
  nvcr.io/<YOUR_NGC_REPO_NAME>/extensions_localization_apps_live_testing:<TAG_NAME> \
  --param=sensor_interface.recorder.recorder/file/file_path=/mnt/nova_ssd/recordings/sensor.pod \
  --param=sensor_interface.recorder.uploader.s3_uploader/s3_uploader/bucket=nv-isaac-safety-test/localization_recordings \
  --param=sensor_interface.recorder.disk_status_visualizer/nvidia::isaac::DiskStatusVisualizer/path=/mnt/nova_ssd/recordings

You can go to the Sight visualizer and see the localization outcome at http://localhost:3000 or http://<ROBOT_IP>:3000.

The Live Testing tool also allows you to record POD files while the app is running. To record a POD file, open http://localhost:3000/recorder.html. Start a new recording, using the same process as the regular Data Recorder, and make sure to add a “localization” tag to it.

The Live Testing tool allows you to tune localization parameters quickly without having to run the entire Navigation Stack.

You can experiment with changing vehicle operation speed:


--param=sensor_interface.robot_remote_control.robot_remote_control/robot_remote_control/speed_limit_linear=2.0 \
--param=sensor_interface.robot_remote_control.robot_remote_control/robot_remote_control/speed_limit_angular=2.0 \
--param=sensor_interface.segway_driver.driver/nvidia::isaac::SegwayDriver/speed_limit_linear=2.0 \
--param=sensor_interface.segway_driver.driver/nvidia::isaac::SegwayDriver/speed_limit_angular=2.0

You can also experiment with changing the thresholds for localization failure detection:


--param=localization.continuous_localization.continuous_localization/particle_filter_localization/max_particle_std_dev=0.3 \
--param=localization.continuous_localization.continuous_localization/particle_filter_localization/num_particles=200 \
--param=localization.global_localization.global_localization/grid_search_localizer/num_beams_gpu=256 \
--param=localization.global_localization.global_localization/grid_search_localizer/max_output_error=0.5

Note that you need to use the following parameter override for the PS4 joystick:


--param=sensor_interface.robot_remote_control.robot_remote_control/robot_remote_control/angular_speed_axis=3


Replay Testing

After recording, you can try to reproduce the problem with the replay tool. First copy the POD file from the robot to your host machine, then run the following command to replay it. Note that you can replace the /tmp directory with any local directory that contains your POD file.


ssh nvidia@<ROBOT IP>
docker run -it --gpus all --rm --network=host --privileged \
  -v /dev:/dev -v /sys:/sys -v /tmp:/tmp \
  nvcr.io/<YOUR_NGC_REPO_NAME>/extensions_localization_apps_replay_testing:<TAG_NAME> \
  --param=sensor_interface.replayer.pod_replayer/file/file_path=/tmp/<pod_file_name.pod>

The replay tool can also be useful for parameter tuning, especially for localization-specific parameters such as max_particle_std_dev.

NvIsaacLocalizationExtension

Extension containing general localization related components.

nvidia::isaac::localization::StaticPosePublisher

A codelet that publishes a static pose between two frames.

Parameters:

  • LHS Frame (lhs_frame): The LHS frame of the pose. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A.

  • RHS Frame (rhs_frame): The RHS frame of the pose. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A.

  • Atlas Frontend (atlas): Handle to the atlas frontend used to post poses. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A.

  • Message Receiver (rx): The receiver of the message from which the timestamp is taken. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A.

  • Static Pose (static_pose): The pose that is posted to the pose tree. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_CUSTOM; default: N/A.

nvidia::isaac::localization::TransformComparator

A codelet that compares transforms between frames against a ground-truth pose.

Parameters:

  • Input LHS Frame (input_lhs_frame): The LHS frame of the input. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A.

  • Input RHS Frame (input_rhs_frame): The RHS frame of the input. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A.

  • Ground Truth LHS Frame (ground_truth_lhs_frame): The LHS frame of the ground truth. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A.

  • Ground Truth RHS Frame (ground_truth_rhs_frame): The RHS frame of the ground truth. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A.

  • Atlas Frontend (atlas): Handle to the atlas frontend used to post poses. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A.

  • Transform Diff Transmitter (tx_transform_diff): Handle to the output transform difference. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A.

  • Memory Allocator (allocator): Handle to the memory allocator pool used for data generation. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A.

  • Evaluate Relative Transform (evaluate_relative_transform): If set to true, evaluates the relative transform between frames at two different timestamps, e.g. odom_T_robot_t1.inverse() * odom_T_robot_t0. If set to false, evaluates the absolute pose difference, e.g. odom_T_robot_gt.inverse() * odom_T_robot. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_BOOL; default: False.

  • Delay Seconds (delay_seconds): The delay in seconds when querying poses. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_FLOAT64; default: 0.0.

  • Comparator Name (comparator_name): The name of this comparator. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_STRING; default: (empty).

  • Verbose Info (verbose_info): Whether to print verbose info about the transforms it gets. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_BOOL; default: False.

  • Receiver (rx): If specified, triggers evaluation whenever a message is received. Flags: GXF_PARAMETER_FLAGS_OPTIONAL; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A.

nvidia::isaac::localization::LocalizationResultChecker

A codelet that checks whether the input localization result matches the expected pose.

Parameters:

  • Localization Result Receiver (rx_localization_result): The localization result receiver. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A.

  • Translation Tolerance (translation_tolerance): The translation tolerance between the localization pose and the expected pose, in meters. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_FLOAT64; default: 0.001.

  • Rotation Tolerance (rotation_tolerance): The rotation tolerance between the localization pose and the expected pose, in radians. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_FLOAT64; default: 0.0001.

  • Expected Pose (expected_pose): The expected pose of the localization result. If provided, this pose is used to check the localization result instead of querying the pose tree. Flags: GXF_PARAMETER_FLAGS_OPTIONAL; type: GXF_PARAMETER_TYPE_CUSTOM; default: N/A.

  • Expected LHS Frame (expected_pose_lhs_frame): The LHS frame of the expected pose. Flags: GXF_PARAMETER_FLAGS_OPTIONAL; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A.

  • Expected RHS Frame (expected_pose_rhs_frame): The RHS frame of the expected pose. Flags: GXF_PARAMETER_FLAGS_OPTIONAL; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A.

  • Pose Tree (pose_tree): The pose tree from which to get the expected pose. Flags: GXF_PARAMETER_FLAGS_OPTIONAL; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A.

  • Verbose Logging (verbose_logging): Whether to verbosely log more info about the checking. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_BOOL; default: False.

nvidia::isaac::localization::MultiplePosesPublisher

A codelet that publishes multiple input poses.

Parameters:

  • Right Frame (right_frame): The right frame of the pose. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A.

  • Left Frame (left_frame): The left frame of the pose. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A.

  • Pose Tree (pose_tree): The pose tree on which to set the poses. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A.

  • Pose Ready Signal Transmitter (tx_pose_ready_signal): The transmitter that notifies the receiver that a new pose is ready. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A.

  • Poses (poses): The poses it publishes. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_CUSTOM; default: N/A.

  • Boolean Scheduling Term (boolean_scheduling_term): Scheduling term to stop execution after all messages are replayed. Flags: GXF_PARAMETER_FLAGS_OPTIONAL; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A.

  • Pose Delta Time (pose_delta_time): The delta time between poses; the first pose has a time starting from this delta time. If set to 0.0, the execution time is used. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_FLOAT64; default: 0.2.

  • Verbose Logging (verbose_logging): Whether to verbosely log more info. Flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_BOOL; default: False.

© Copyright 2018-2023, NVIDIA Corporation. Last updated on Sep 11, 2023.