Navigation Stack

The Navigation Stack enables robots to perform autonomous missions. It is a versatile application that supports several configuration options. The Navigation Stack can be used on a real robot or in simulation with Flatsim or Isaac Sim. It can be used as a standalone app or together with Isaac AMR Cloud services, which allows entire fleets of robots to be coordinated.

The Navigation Stack consists of the following algorithmic modules.

State Estimation

The estimation pipeline estimates the current pose and velocity of the robot. It consists of the following:

  • Odometry Estimator: Estimates the robot’s odometry (pose and velocity). The estimated pose is guaranteed to be locally consistent but may drift over time. The Navigation Stack uses wheel-inertial odometry estimation.

  • Localizer: Determines the robot’s pose with respect to a map. The obtained pose does not drift, but it may jump. Currently, only LIDAR-based localization is supported.
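
The two estimates above are typically combined: the drift-free but occasionally jumping localization pose anchors the smooth but drifting odometry. As a rough illustration (not the stack’s actual fusion code; all names are hypothetical), the last localization fix can be propagated forward with the odometry delta:

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2D transform (SE(2)) built from a pose (x, y, theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def fuse_pose(map_T_base_at_fix, odom_T_base_at_fix, odom_T_base_now):
    """Propagate the last localization fix with the odometry delta.

    Between localization updates, the map pose is advanced by the relative
    motion observed in the (locally consistent) odometry frame.
    """
    delta = np.linalg.inv(odom_T_base_at_fix) @ odom_T_base_now
    return map_T_base_at_fix @ delta

# The robot was at map pose (1, 0, 0 rad) when the localizer last fixed it;
# odometry says it has moved 0.5 m forward since then.
fused = fuse_pose(se2(1.0, 0.0, 0.0), se2(0.0, 0.0, 0.0), se2(0.5, 0.0, 0.0))
print(fused[:2, 2])  # approximately [1.5, 0.0]
```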


Perception

The perception pipeline observes the robot’s surroundings. It consists of the following:

  • Distance Map Perception: Generates a distance map including all the static and dynamic obstacles in proximity of the robot. Currently, only LIDAR-based perception is supported.
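
A distance map stores, for every cell of an occupancy grid, the distance to the nearest obstacle, which downstream planners can use as a clearance measure. The following brute-force sketch just spells out that definition (the actual perception module is LIDAR-based and uses a far more efficient distance transform):

```python
import numpy as np

def distance_map(occupancy, cell_size=0.1):
    """Brute-force Euclidean distance map.

    Each output cell holds the distance (in meters) to the nearest occupied
    cell. Real implementations use a linear-time distance transform; this
    quadratic version only illustrates the definition.
    """
    obstacles = np.argwhere(occupancy)
    h, w = occupancy.shape
    out = np.full((h, w), np.inf)
    for r in range(h):
        for c in range(w):
            d = np.hypot(obstacles[:, 0] - r, obstacles[:, 1] - c)
            out[r, c] = d.min()
    return out * cell_size

grid = np.zeros((4, 4), dtype=bool)
grid[0, 0] = True  # a single obstacle in the corner
dm = distance_map(grid, cell_size=1.0)
print(dm[0, 3])  # 3.0 -- three cells away from the obstacle
```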

Planning & Control

The planning and control pipeline plans and executes the path towards a goal. It consists of the following:

  • Global Route Planner: Creates a high-level plan for the robot’s navigation. The planner uses a waypoint graph to generate a coarse sequence of waypoints connecting the robot’s current location to its destination.

  • Regional Path Planner: Generates a local plan for the robot’s navigation. The planner uses a hybrid A* search on a locally constructed graph to find a path for the next 50 meters. This path also takes into account dynamic obstacles that are not seen on the map.

  • Speed Decision Maker: Consumes the path from the Regional Path Planner and determines the speed to send to the Trajectory Planner.

  • Trajectory Planner: Generates a trajectory for the robot to follow based on the local plan from the Regional Path Planner. The planner uses an iLQR algorithm to generate a trajectory for the next 5 meters.

  • Controller: Instructs the robot to follow the trajectory generated by the Trajectory Planner. The controller adjusts the robot’s velocity and steering to keep it on the desired path.
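
To make the search step concrete, here is a plain A* on a 4-connected occupancy grid. This is a deliberate simplification: the real Regional Path Planner runs a hybrid A* that also accounts for the robot’s kinematics, and all names here are illustrative.

```python
import heapq

def astar(grid, start, goal):
    """Plain A* on a 4-connected grid (0 = free, 1 = blocked).

    Returns the shortest path as a list of (row, col) cells, or None if the
    goal is unreachable. The Manhattan heuristic is admissible for unit-cost
    4-connected moves.
    """
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_set:
        _, node = heapq.heappop(open_set)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and not grid[nr][nc]:
                ng = g_cost[node] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = node
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # detours around the wall in the middle row
```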

Behavior Executor

The Behavior Executor operates on top of all the previously described components. It uses a behavior tree to coordinate the other navigation modules and generates tasks for the navigation stack.
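
A behavior tree composes small action and condition nodes under control nodes such as sequences and selectors. The sketch below shows only the core ticking mechanism with hypothetical action names; it is not the stack’s actual behavior-tree implementation:

```python
# Minimal behavior-tree sketch: a sequence node ticks its children in order
# and fails fast, which is how an executor can chain navigation steps such
# as "localize, plan, drive". Names are illustrative only.
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Action:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return SUCCESS if self.fn() else FAILURE

class Sequence:
    """Ticks children left to right; fails as soon as one child fails."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

mission = Sequence([
    Action("localize", lambda: True),
    Action("plan_route", lambda: True),
    Action("follow_path", lambda: True),
])
print(mission.tick())  # SUCCESS
```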

flowchart TD
    IRobot[Robot Hardware]
    ORobot[Robot Hardware]
    subgraph NavStack[Onboard Navigation Stack]
        subgraph StateEstimation[State Estimation]
            direction TB
            Odom[Odometry Estimator]
            Loc[Localizer]
        end
        subgraph Perception
            direction TB
            Perc[Local Perception]
        end
        subgraph PC[Planning & Control]
            direction TB
            GP[Global Route Planner]
            SRE[Semantic Region Extractor]
            RPP[Regional Path Planner]
            SDM[Speed Decision Maker]
            TP[Trajectory Planner]
            Controller[Controller]
        end
    end
    IRobot --> |Imu Measurement| NavStack
    IRobot --> |Wheel Measurement| NavStack
    IRobot --> |RGB Images| NavStack
    IRobot --> |Depth Images| NavStack
    StateEstimation --> |Odometry| Perception
    StateEstimation --> |Robot Pose| Perception
    Odom --> Loc
    Perception --> |Distance Map| PC
    GP --> SRE
    SRE --> RPP
    RPP --> SDM
    SDM --> TP
    TP --> Controller
    NavStack --> |Command| ORobot

Fig. 9 Overview of the modules in the navigation stack.

The Navigation Stack has a number of configuration options that allow you to employ it in different situations and use cases.

To configure the Navigation Stack, pass it different command line arguments or a YAML configuration file.

CLI Arguments


usage: [-h] [--robot {carter}] [--physics-engine {flatsim,isaac-sim,real-world}]
       [--environment ENVIRONMENT] [--odometry {wheel_imu}]
       [--localization {lidar}] [--perception {lidar-egm}]
       [--route-planner {onboard,cloud}]
       [--waypoint-graph-generator {grid-map,semantic-map}]
       [--path-planner {grid-map,semantic-map}] [--omap-path OMAP_PATH]
       [--omap-cell-size OMAP_CELL_SIZE]
       [--semantic-map-path SEMANTIC_MAP_PATH]
       [--semantic-map-initial-pose SEMANTIC_MAP_INITIAL_POSE]
       [--robot-goal-pose ROBOT_GOAL_POSE]
       [--omap-to-world-transform OMAP_TO_WORLD_TRANSFORM]
       [--omap-to-world-invert-transform OMAP_TO_WORLD_INVERT_TRANSFORM]
       [--robot-initial-gt-pose ROBOT_INITIAL_GT_POSE]
       [--cloud-host CLOUD_HOST] [--robot-name ROBOT_NAME]
       [--world-T-map-pose WORLD_T_MAP_POSE] [--dry-run]
       [--log-level {CRITICAL,FATAL,ERROR,WARN,WARNING,INFO,DEBUG,NOTSET}]
       [--config-path CONFIG_PATH]
       [--omap-tile-metadata-file OMAP_TILE_METADATA_FILE]
       [--tile-config TILE_CONFIG] [--linear-speed-limit LINEAR_SPEED_LIMIT]
       [--sight-config-path SIGHT_CONFIG_PATH]
       [--enable-metrics-upload ENABLE_METRICS_UPLOAD] [--param [PARAMETERS]]

optional arguments:
  -h, --help            show this help message and exit
  --robot {carter}, -r {carter}
                        The type of robot being used with the navigation
                        stack.
  --physics-engine {flatsim,isaac-sim,real-world}
                        The physics engine used to simulate the robot.
  --environment ENVIRONMENT
                        The environment that the robot is running in. You can
                        use any freeform string, but make sure to use the
                        same string for repeated runs in the same
                        environment, as this string is used to aggregate
                        metrics for an environment. Example: `my_warehouse_1`
  --odometry {wheel_imu}
                        The type of odometry fusion used.
  --localization {lidar}
                        The type of localization/odometry used.
  --perception {lidar-egm}
                        The type of perception/distance maps used.
  --route-planner {onboard,cloud}
                        The type of route planner used.
  --waypoint-graph-generator {grid-map,semantic-map}
                        The type of waypoint graph generator used.
  --path-planner {grid-map,semantic-map}
                        The type of path planner used.
  --omap-path OMAP_PATH
                        The path of the occupancy map.
  --omap-cell-size OMAP_CELL_SIZE
                        The size of a cell in the occupancy map in meters.
  --semantic-map-path SEMANTIC_MAP_PATH
                        The path of the semantic map.
  --semantic-map-initial-pose SEMANTIC_MAP_INITIAL_POSE
                        The semantic map initial pose in the world frame.
                        Expected format: `{'translation': [0.0, 0.0, 0.0],
                        'rotation_rpy': [0.0, 0.0, 0.0]}`
  --robot-goal-pose ROBOT_GOAL_POSE
                        The robot goal pose (pose to which the robot tries to
                        navigate) in the omap frame. Must be set when running
                        in Isaac Sim. Expected format: `{'translation': [0.0,
                        0.0, 0.0], 'rotation_rpy': [0.0, 0.0, 0.0]}`
  --omap-to-world-transform OMAP_TO_WORLD_TRANSFORM
                        The occupancy map to world transform (only used if
                        --omap-tile-metadata-file is not defined). Expected
                        format: `{'translation': [0.0, 0.0, 0.0],
                        'rotation_rpy': [0.0, 0.0, 0.0]}`
  --omap-to-world-invert-transform OMAP_TO_WORLD_INVERT_TRANSFORM
                        Whether to invert the occupancy map to world
                        transform (only used if --omap-tile-metadata-file is
                        not defined).
  --robot-initial-gt-pose ROBOT_INITIAL_GT_POSE
                        The robot initial ground truth pose. Must be set when
                        running in Flatsim. Ignored in Isaac Sim. Expected
                        format: `{'translation': [0.0, 0.0, 0.0],
                        'rotation_rpy': [0.0, 0.0, 0.0]}`
  --cloud-host CLOUD_HOST
                        Host name or IP of the machine that runs Isaac AMR
                        Cloud Services.
  --robot-name ROBOT_NAME
                        Name of the robot to use in Isaac AMR Cloud Services.
  --world-T-map-pose WORLD_T_MAP_POSE
                        The pose of the world frame in the map frame. Useful
                        for Isaac Sim.
  --dry-run             Dry run the graph. This can be used to lint-check the
                        graph.
  --log-level {CRITICAL,FATAL,ERROR,WARN,WARNING,INFO,DEBUG,NOTSET}
                        Set the log level of the Python API that builds the
                        application.
  --config-path CONFIG_PATH, -c CONFIG_PATH
                        The path to the config file. The config file can be
                        used to store default arguments for the CLI args, so
                        that they don't have to be retyped every time.
  --omap-tile-metadata-file OMAP_TILE_METADATA_FILE
                        The path to the omap tile metadata file.
  --tile-config TILE_CONFIG
                        The configuration options for tile loading. Expected
                        format: `{'map_dir': '/tmp', 'config_dir': '/tmp'}`
  --linear-speed-limit LINEAR_SPEED_LIMIT
                        The maximum linear speed at which the robot is
                        allowed to travel.
  --sight-config-path SIGHT_CONFIG_PATH
                        The path to the sight config file. The config file
                        can be used to store default arguments for the sight
                        configuration.
  --enable-metrics-upload ENABLE_METRICS_UPLOAD
                        If true, the metrics will be uploaded. Currently only
                        upload to Kratos is supported.
  --param [PARAMETERS], -p [PARAMETERS]
                        Parameter overrides in the form
                        `entity/component/parameter=value`. The argument can
                        be repeated to override multiple parameters. For
                        example: `-p entity/component/parameter1=value1 -p
                        entity/component/parameter2=value2`.

Configuration File

Instead of using the CLI, you can create a YAML file that contains command line options. You can then pass the single CLI argument -c <path/to/config>, and the Navigation Stack will use the configuration from the YAML file. When using a YAML config file, you can still pass CLI arguments, which will override the values provided in the YAML file.

For example, if you were using the CLI arguments --robot carter --physics-engine flatsim --omap-path /path/to/my/map.png, this would correspond to the following YAML file:


robot: carter
physics_engine: flatsim
omap_path: /path/to/my/map.png

Note that the hyphens in the CLI arguments have been converted to underscores in the YAML file.
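
The precedence rule described above can be summarized as: YAML values act as defaults, and explicit CLI flags win. The following is a small sketch of such a merge (illustrative only; the real launcher may combine options differently):

```python
def merge_options(config: dict, cli_args: dict) -> dict:
    """Merge YAML config values with CLI overrides.

    Hyphens in flag names map to underscores in config keys, mirroring the
    convention noted above. CLI values take precedence over config values.
    """
    normalized = {k.lstrip("-").replace("-", "_"): v for k, v in cli_args.items()}
    merged = dict(config)      # YAML values first...
    merged.update(normalized)  # ...then CLI overrides win
    return merged

config = {"robot": "carter", "physics_engine": "flatsim",
          "omap_path": "/path/to/my/map.png"}
cli = {"--physics-engine": "isaac-sim"}
print(merge_options(config, cli))
# {'robot': 'carter', 'physics_engine': 'isaac-sim', 'omap_path': '/path/to/my/map.png'}
```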

Running the Navigation Stack with a Configuration File

To use a custom configuration file with the Navigation Stack, first create the config file containing the exact configuration you want to use. This file can be created anywhere, but for simplicity we recommend using /tmp/navigation_stack_configuration.yaml.

Add the desired configuration parameters to the file, following the syntax indicated above.

Then, pull and run the image containing the Navigation Stack, mounting your /tmp directory so that your new config file is available for the Navigation Stack to use.


docker run -it --gpus all --rm --network=host -v /tmp:/tmp \
  <your_staging_area>/navigation_stack:isaac_2.0-<platform> \
  -c /tmp/navigation_stack_configuration.yaml


Replace <your_staging_area> with your assigned NGC Registry staging area code.


Replace <platform> with the platform architecture of the system you are running the docker container on. For x86 use k8, for ARM use aarch64.

© Copyright 2018-2023, NVIDIA Corporation. Last updated on Sep 11, 2023.