The Localization module provides the location of a robot within a given map. Accurate localization is essential for planning and navigation.
Currently, this module only supports 2D lidar-scan localization.
To use the complete localization package, use the subgraph file sdk/extensions/localization/graphs/localization.subgraph.yaml. Set the flatscan receiver (rx_flatscan) of the subgraph, and connect your own localization result receiver to the localization result transmitter (tx_localization_result).
The Global Localization module bootstraps the localization process without any initial guess pose. It uses a grid search algorithm, which samples the map based on the input grid size and exhaustively searches each grid cell and each angle within it. The GPU is used to parallelize the search over the samples, and ray tracing is used to evaluate each sample.
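As a rough conceptual sketch of this search (not the actual GridSearchLocalizer implementation, which batches samples to the GPU rather than looping on the CPU; Pose2, GridSearch, and evaluate_pose are hypothetical names used only for illustration):

#include <functional>
#include <limits>

struct Pose2 { double x, y, yaw; };

// Exhaustively evaluate pose samples on a regular grid of positions (spaced by
// sample_distance) and a fixed set of yaw angles. evaluate_pose stands in for
// the ray-tracing scan match, returning the error of the flatscan at a pose.
Pose2 GridSearch(double map_width, double map_height, double sample_distance,
                 int num_yaw_samples, double max_output_error,
                 const std::function<double(const Pose2&)>& evaluate_pose,
                 bool* success) {
  const double kTwoPi = 6.283185307179586;
  Pose2 best{0.0, 0.0, 0.0};
  double best_error = std::numeric_limits<double>::max();
  for (double x = 0.0; x < map_width; x += sample_distance) {
    for (double y = 0.0; y < map_height; y += sample_distance) {
      for (int k = 0; k < num_yaw_samples; ++k) {
        const Pose2 candidate{x, y, kTwoPi * k / num_yaw_samples};
        const double error = evaluate_pose(candidate);
        if (error < best_error) {
          best_error = error;
          best = candidate;
        }
      }
    }
  }
  // The search succeeds only if the best sample's error is below the threshold.
  *success = (best_error <= max_output_error);
  return best;
}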
Inputs: Flatscan, 2D Occupancy Map
Output: Localization Result
To use the Global Localization module, use the subgraph file sdk/extensions/localization/graphs/global_localization.subgraph.yaml. Set the flatscan receiver (rx_flatscan) and the reset signal receiver (rx_reset_signal) of the subgraph interface, and connect the localization result transmitter (tx_localization_result) to your own receiver.
Whenever the reset signal receiver receives an entity, the grid search localizer resets to its initial state so that it can run global localization again.
The following are important parameters in the module:
num_beams_gpu (default: 512): The GPU-accelerated scan-and-match function can only handle a certain number of beams per range scan. The allowed values are {32, 64, 128, 256, 512}. If the number of beams in the range scan does not match this number, a subset of beams is taken. Since yaw is shifted based on the beam direction to reduce the cost of ray tracing, this number also affects the output yaw resolution: a smaller number results in a sparser, less accurate yaw estimate.
batch_size (default: 512): The number of concurrent pose samples sent as a batch to the GPU kernel. Choose a value that matches the GPU memory of the robot platform.
sample_distance (default: 0.1): The distance between sample points in meters. A smaller number results in more sample poses, leading to higher accuracy but lower performance.
patch_size (default: [512, 512]): The size, in occupancy map rows/columns, of each patch used for ray tracing.
max_output_error (default: 0.45): The maximum allowed error of the best sample. If the output error is larger than this threshold, localization is considered to have failed.
Localization Widget
The Localization Widget is an Isaac Sight module that can be used to reseed (bootstrap) localization.
Global localization can fail to correctly localize the robot when multiple similar environments exist in the map (e.g. office cubicles, meeting rooms). There is currently no automated solution to this issue, so you will need to provide user input as an initial guess to assist localization.
Follow these steps to perform manual global localization with the Localization Widget:
1. Open Isaac Sight at http://localhost:3000/.
2. Open the Localization widget.
3. Select the localizer you want to operate.
4. Click the Stop Localizer button to stop the robot from moving.
5. Click the Automatic Reseed button to automatically reseed the localization. If the robot still fails to localize, move it around. Alternatively, you can perform a manual reseed:
   - Click the Reset Seed Frame button in the Localization widget.
   - Drag the localization seed frame to the location where the robot actually is. Drag the center of the icon to translate the frame; drag the edge of the icon to rotate the frame.
   - Click the Manual Reseed button to reseed the localizer.
The Continuous Localization module keeps track of the current robot localization pose against the map. It uses an initial guess pose from the Global Localization module and the relative pose from the odometry module to “continuously” localize the current scan. Its core algorithms are a particle filter and ray tracing.
The Continuous Localization module can also detect failure cases by checking whether the standard deviation of the particles is too large; if so, that frame is considered a failure. After N consecutive failure frames (where N is set by the num_max_failure_frames parameter), localization is considered lost, at which point the module sends a reset signal to global localization to request another bootstrap.
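The failure-detection logic can be sketched as follows (a minimal illustration with hypothetical names; the actual particle filter codelet implements this internally):

// Counts consecutive frames whose particle standard deviation exceeds the
// threshold and reports when a reset of global localization should be requested.
class LocalizationFailureDetector {
 public:
  LocalizationFailureDetector(double max_particle_std_dev, int num_max_failure_frames)
      : max_particle_std_dev_(max_particle_std_dev),
        num_max_failure_frames_(num_max_failure_frames) {}

  // Call once per localized frame. Returns true if localization is considered
  // lost and a reset signal should be sent to global localization.
  bool Update(double particle_std_dev) {
    if (particle_std_dev > max_particle_std_dev_) {
      ++consecutive_failures_;
    } else {
      consecutive_failures_ = 0;  // A good frame clears the failure streak.
    }
    if (consecutive_failures_ >= num_max_failure_frames_) {
      consecutive_failures_ = 0;
      return true;
    }
    return false;
  }

 private:
  double max_particle_std_dev_;
  int num_max_failure_frames_;
  int consecutive_failures_ = 0;
};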
To use the continuous localizer, use the subgraph file sdk/extensions/localization/graphs/continuous_localization.subgraph.yaml. Set the flatscan receiver (rx_flatscan), connect rx_global_localization_result to the global localization result transmitter, and connect the reset signal transmitter (tx_reset_signal) to global localization's reset signal receiver (rx_reset_signal).
Inputs: Flatscan, Odometry Relative Transform, 2D Occupancy Map
Output: Localization Result
The following are important parameters in the module:
absolute_predict_sigma (default: [0.04, 0.04, 0.0872] [X, Y, Yaw]): A fixed sigma added to each update of the particles. Set this to a larger value when odometry is not reliable.
initial_sigma (default: [0.3, 0.3, 0.349066] [X, Y, Yaw]): The initial sigma for the particles sampled around the global localization result. Set this to a larger value when global localization is not accurate.
max_particle_std_dev (default: 0.2): The threshold on the standard deviation of the particles. If this threshold is exceeded, the frame is considered a failure frame.
num_max_failure_frames (default: 10): The number of consecutive failure frames required to conclude that localization is totally lost. The module then requests a bootstrap from the Global Localization module.
The localization result is the output of both global and continuous localization. It contains the localization pose and localization health. Before using the localization result, the robot first needs to check if the localization health is good.
// The pose of the localization result.
struct LocalizationPose {
// The actual pose in 3d.
::isaac::Pose3d pose;
// The uid of the lhs pose frame.
PoseFrameUid lhs_frame_uid;
// The uid of the rhs pose frame.
PoseFrameUid rhs_frame_uid;
};
// Health of the localization result
struct LocalizationHealth {
// Whether the localization is good or not.
// If localization is not good then one should not use the localization result.
bool is_localization_good = false;
// The confidence is a value between 0.0 to 1.0, where 1.0 means most confident
// 0.0 means least confident.
double confidence = 0.0;
};
// Stores the message for the localization result
struct LocalizationResultMessageParts {
// The message entity
gxf::Entity entity;
// The pose of the localization result.
gxf::Handle<LocalizationPose> localization_pose;
// The health of the localization result.
gxf::Handle<LocalizationHealth> localization_health;
// Timestamp for this message.
gxf::Handle<gxf::Timestamp> timestamp;
};
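For example, a consumer of the localization result might gate its use of the pose on the health flag. The following is a minimal sketch that assumes the message has already been parsed into LocalizationResultMessageParts (the parsing step and the function name are illustrative, not part of the extension API):

// Returns true and outputs the pose only when the localization health is good.
bool TryGetLocalizationPose(const LocalizationResultMessageParts& parts,
                            ::isaac::Pose3d* pose_out) {
  if (!parts.localization_health->is_localization_good) {
    return false;  // Do not use the pose while localization is unhealthy.
  }
  *pose_out = parts.localization_pose->pose;
  return true;
}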
Localization may perform differently on different maps, so some parameter tuning may be required for each map.
Global Localization: max_output_error
Warning messages like the following may indicate that global localization is not confident that it has localized to the right location:
Best error: 0.499184 larger than threshold 0.350000, maybe outside the map?, re-initializing
You can try increasing the global localization parameter max_output_error (e.g. to 0.5 in the above example) to check whether global localization is in fact localizing correctly. Note that increasing this threshold may cause global localization to be overconfident and localize to the wrong location.
To change this parameter in the Navigation Stack app, append the below parameter override in the command line:
-p robot\|navigation_stack\|global_localization.global_localization/grid_search_localizer/max_output_error=0.5
Global Localization: num_beams_gpu
This parameter controls how many beams of the lidar scan are used to compute the error for each sample in the grid search; it also controls the angular resolution of the search. The currently allowed values are 32, 64, 128, 256, and 512, and the total run time scales linearly with the parameter value.
For example, the default value is currently 512. If you change this parameter to 256, the global localization run time is halved, but the angular accuracy also decreases by half, and global localization may bootstrap to the wrong location.
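As a rough illustration (assuming the beams span a full 360° scan, which depends on your lidar): 512 beams correspond to a yaw step of about 360/512 ≈ 0.7°, while 256 beams correspond to about 1.4°.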
We recommend experimenting with this parameter based on your use case.
To change this parameter in the Navigation Stack app, append the below parameter override in the command line:
-p robot\|navigation_stack\|global_localization.global_localization/grid_search_localizer/num_beams_gpu=256
Continuous Localization: max_particle_std_dev
This parameter is a threshold of the final standard deviation for particles after localization.
If this threshold is exceeded, it is considered a failure frame.
After num_max_failure_frames consecutive failure frames, the Navigation Stack app concludes that continuous localization has failed. In this case, you will see a warning message:
Detected 10 number of consecutive frames that has bad localization results, resetting
This means continuous localization has detected a failure; however, failure detection can produce false positives. Depending on the lidar and the map quality, the particle standard deviation may naturally be larger, in which case you may want to increase the max_particle_std_dev threshold so that it does not produce false-positive failures.
To change this parameter in the Navigation Stack app, append the below parameter override in the command line:
-p robot\|navigation_stack\|localization.continuous_localization/particle_filter_localization/max_particle_std_dev=0.3
Continuous Localization: num_particles
This parameter controls how many particles are used during localization. A larger number yields a more accurate localization result; however, the run time also increases linearly with this parameter, so choose a value that matches your accuracy and runtime requirements for localization.
To change this parameter in the Navigation Stack app, append the below parameter override in the command line:
-p robot\|navigation_stack\|localization.continuous_localization/particle_filter_localization/num_particles=200
Localization in a Partial Map
You can change the occupancy threshold used by localization to improve performance when an incomplete map is used. Set the threshold to a value above the occupied pixel values but below the unknown pixel value, so that unknown points are treated as unoccupied.
Note that this threshold is used to compute the binary occupancy map from the original occupancy image. Pixels with a value less than or equal to the threshold become occupied; pixels with a value greater than the threshold become unoccupied.
The following example illustrates how this threshold operates:
Assume the input occupancy image contains the values 0 (occupied), 125 (unknown), and 255 (unoccupied).
Thresholding with 130 yields 0 (occupied), 0 (occupied), 255 (unoccupied).
Thresholding with 120 yields 0 (occupied), 255 (unoccupied), 255 (unoccupied).
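The following is a minimal sketch of this binarization rule (a hypothetical helper, not the actual map provider code):

#include <cstddef>
#include <cstdint>
#include <vector>

// Binarize an occupancy image: values <= threshold become occupied (0),
// values > threshold become unoccupied (255).
std::vector<uint8_t> BinarizeOccupancy(const std::vector<uint8_t>& image,
                                       uint8_t threshold) {
  std::vector<uint8_t> binary(image.size());
  for (std::size_t i = 0; i < image.size(); ++i) {
    binary[i] = (image[i] <= threshold) ? 0 : 255;
  }
  return binary;
}
// With threshold 120, an unknown pixel (125) becomes 255 (unoccupied);
// with threshold 130, it becomes 0 (occupied).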
The following argument sets the occupancy threshold to 120 in the Navigation Stack app:
-p robot\|navigation_stack\|localization_map_provider/occupancy_2d_map_provider/threshold=120
There are two tools for debugging localization: one is used in live testing; the other is used when replaying POD files.
Live Testing
To use the Live Testing tool, fetch the latest localization/live_testing
app from the
Docker registry, then run the tool using the Docker image:
ssh nvidia@<ROBOT IP>
docker run -it --gpus all --rm --network=host --privileged \
-v /dev:/dev -v /sys:/sys -v /tmp:/tmp \
nvcr.io/<YOUR_NGC_REPO_NAME>/extensions_localization_apps_live_testing:<TAG_NAME> \
--param=sensor_interface.recorder.uploader.s3_uploader/s3_uploader/bucket=<YOUR_S3_PATH> \
--param=map/image_loader/filename=<YOUR_MAP_FILE>
You can go to the Sight visualizer and see the localization outcome at http://localhost:3000 or http://<ROBOT_IP>:3000.
The Live Testing tool also allows you to record POD files while the app is running. To record a POD file, open http://localhost:3000/recorder.html. Start a new recording, using the same process as the regular Data Recorder, and make sure to add a “localization” tag to it.
The Live Testing tool allows you to tune localization parameters quickly without having to run the entire Navigation Stack.
You can experiment with changing vehicle operation speed:
--param=sensor_interface.robot_remote_control.robot_remote_control/robot_remote_control/speed_limit_linear=2.0 \
--param=sensor_interface.robot_remote_control.robot_remote_control/robot_remote_control/speed_limit_angular=2.0 \
--param=sensor_interface.segway_driver.driver/nvidia::isaac::SegwayDriver/speed_limit_linear=2.0 \
--param=sensor_interface.segway_driver.driver/nvidia::isaac::SegwayDriver/speed_limit_angular=2.0 \
You can also experiment with changing the threshold for localization failure detection:
--param=localization.continuous_localization.continuous_localization/particle_filter_localization/max_particle_std_dev=0.3 \
--param=localization.continuous_localization.continuous_localization/particle_filter_localization/num_particles=200 \
--param=localization.global_localization.global_localization/grid_search_localizer/num_beams_gpu=256 \
--param=localization.global_localization.global_localization/grid_search_localizer/max_output_error=0.5 \
Note that you need to use the following parameter override for the PS4 joystick:
--param=sensor_interface.robot_remote_control.robot_remote_control/robot_remote_control/angular_speed_axis=3
Replay Testing
After recording, you can try to reproduce the problem with the replay tool:
ssh nvidia@<ROBOT IP>
docker run -it --gpus all --rm --network=host --privileged \
-v /dev:/dev -v /sys:/sys -v /tmp:/tmp \
nvcr.io/<YOUR_NGC_REPO_NAME>/extensions_localization_apps_replay_testing:<TAG_NAME> \
--param=sensor_interface.replayer.pod_replayer/file/file_path=<POD_FILE_PATH> \
--param=map/image_loader/filename=<YOUR_MAP_FILE>
The replay tool is also useful for parameter tuning, especially for localization-specific parameters such as max_particle_std_dev.
NvIsaacLocalizationExtension
Extension containing general localization related components.
nvidia::isaac::localization::StaticPosePublisher
A codelet that publishes a static pose between two frames.
Parameters:
lhs_frame (LHS Frame): The LHS frame of the pose. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A)
rhs_frame (RHS Frame): The RHS frame of the pose. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A)
atlas (Atlas Frontend): Handle to the atlas frontend used to post poses. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A)
rx (Message Receiver): The receiver of the message from which the timestamp is taken. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A)
static_pose (Static Pose): The pose that is posted to the pose tree. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_CUSTOM; default: N/A)
nvidia::isaac::localization::TransformComparator
A codelet that compares the transform between frames against a ground truth pose.
Parameters:
input_lhs_frame (Input LHS Frame): The LHS frame of the input. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A)
input_rhs_frame (Input RHS Frame): The RHS frame of the input. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A)
ground_truth_lhs_frame (Ground Truth LHS Frame): The LHS frame of the ground truth. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A)
ground_truth_rhs_frame (Ground Truth RHS Frame): The RHS frame of the ground truth. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A)
atlas (Atlas Frontend): Handle to the atlas frontend used to post poses. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A)
tx_transform_diff (Transform Diff Transmitter): Handle to the output transform difference. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A)
allocator (Memory Allocator): Handle to the memory allocator pool used for data generation. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A)
evaluate_relative_transform (Evaluate Relative Transform): If set to true, evaluates the relative transform between frames at two different timestamps, e.g. odom_T_robot_t1.inverse() * odom_T_robot_t0. If set to false, evaluates the absolute pose difference, e.g. odom_T_robot_gt.inverse() * odom_T_robot. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_BOOL; default: False)
delay_seconds (Delay Seconds): The delay in seconds when querying poses. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_FLOAT64; default: 0.0)
comparator_name (Comparator Name): The name of this comparator. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_STRING)
verbose_info (Verbose Info): Whether to print verbose info about the transforms it gets. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_BOOL; default: False)
rx (Receiver): If specified, triggers an evaluation whenever a message is received. (flags: GXF_PARAMETER_FLAGS_OPTIONAL; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A)
nvidia::isaac::localization::LocalizationResultChecker
A codelet that checks whether the input localization result matches the expected result.
Parameters:
rx_localization_result (Localization Result Receiver): Localization result receiver. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A)
translation_tolerance (Translation Tolerance): The translation tolerance between the localization pose and the expected pose, in meters. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_FLOAT64; default: 0.001)
rotation_tolerance (Rotation Tolerance): The rotation tolerance between the localization pose and the expected pose, in radians. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_FLOAT64; default: 0.0001)
expected_pose (Expected Pose): The expected pose of the localization result. If provided, this pose is used to check the localization result instead of getting it from the pose tree. (flags: GXF_PARAMETER_FLAGS_OPTIONAL; type: GXF_PARAMETER_TYPE_CUSTOM; default: N/A)
expected_pose_lhs_frame (Expected LHS Frame): The LHS frame of the expected pose. (flags: GXF_PARAMETER_FLAGS_OPTIONAL; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A)
expected_pose_rhs_frame (Expected RHS Frame): The RHS frame of the expected pose. (flags: GXF_PARAMETER_FLAGS_OPTIONAL; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A)
pose_tree (Pose Tree): Pose tree from which to get the expected pose. (flags: GXF_PARAMETER_FLAGS_OPTIONAL; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A)
verbose_logging (Verbose Logging): Whether to verbosely log more info about the check. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_BOOL; default: False)
nvidia::isaac::localization::MultiplePosesPublisher
A codelet that publishes multiple input poses.
Parameters:
left_frame (Left Frame): The left frame of the pose. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A)
right_frame (Right Frame): The right frame of the pose. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A)
pose_tree (Pose Tree): Pose tree on which to set the expected pose. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A)
tx_pose_ready_signal (Pose Ready Signal Transmitter): The transmitter that notifies the receiver that a new pose is ready. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A)
poses (Poses): The poses it publishes. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_CUSTOM; default: N/A)
boolean_scheduling_term (Boolean Scheduling Term): Scheduling term to stop execution after all messages are replayed. (flags: GXF_PARAMETER_FLAGS_OPTIONAL; type: GXF_PARAMETER_TYPE_HANDLE; default: N/A)
pose_delta_time (Pose Delta Time): The delta time between poses; the first pose has a time starting from this delta time. If set to 0.0, the execution time is used. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_FLOAT64; default: 0.2)
verbose_logging (Verbose Logging): Whether to verbosely log more info. (flags: GXF_PARAMETER_FLAGS_NONE; type: GXF_PARAMETER_TYPE_BOOL; default: False)