This tutorial will guide you through the process of downloading the Navigation Stack on your machine and running a simple demo with the Carter platform in Flatsim.
Flatsim is a simple engine for simulating robots and their sensors in a flat, 2D world. It is intended for quick and easy evaluation of robotics algorithms; for more complex scenarios involving non-opaque materials and reflections, use Isaac Sim.
To run this tutorial, you need the following:
- An NVIDIA GPU that supports CUDA 11.8
- Ubuntu 20.04
- Access to the NVIDIA Container Registry: Isaac provides a prebuilt Docker image that contains the entire Navigation Stack. Refer to the Troubleshooting and FAQs section for instructions on installing Docker and setting up access to the NVIDIA Container Registry.
The Navigation Stack has many configuration options. This tutorial focuses on running the navigation stack with lidar-based localization and wheel odometry in Flatsim. First, create a configuration file at /tmp/config.yaml with the following contents:
# Configuration of what robot and physics engine is used.
robot: carter
physics_engine: flatsim
# Configuration of the algorithms that are used in the navigation stack.
localization: lidar
waypoint_graph_generator: grid-map
route_planner: onboard
path_planner: grid-map
linear_speed_limit: 1.111 # 4 kph
# Configuration specific to the environment that the robot is running in.
environment: test
omap_path: apps/assets/maps/test.png
omap_cell_size: 0.1
robot_initial_gt_pose:
  translation: [17.5, 15.0, 0.0]
  rotation_rpy: [0.0, 0.0, 90.0]
This configuration simulates a robot in Flatsim and uses the Navigation Stack to maneuver it to a target position. Once the robot reaches the target, the stack automatically selects a new random target position and navigates there.
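The linear_speed_limit value is given in meters per second, with the comment noting the kilometers-per-hour equivalent. As a quick sanity check of that conversion (plain Python, not part of the stack):

```python
# Convert a speed limit from kilometers per hour to meters per second.
KPH_TO_MPS = 1000.0 / 3600.0  # 1 km = 1000 m, 1 h = 3600 s

def kph_to_mps(kph: float) -> float:
    return kph * KPH_TO_MPS

# 4 kph rounds to the 1.111 m/s used in the configuration above.
print(round(kph_to_mps(4.0), 3))  # 1.111
```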
After creating the configuration file, run the following command:
docker run -it --gpus all --rm --network=host -v /tmp:/tmp \
  nvcr.io/<your_staging_area>/navigation_stack:isaac_2.0-<platform> \
  -c /tmp/config.yaml
Replace <your_staging_area> with your assigned NGC Registry staging area code. Replace <platform> with the platform architecture of the system you are running the Docker container on: for x86 use k8, for ARM use aarch64.
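If you script this command, the platform tag can be derived from the host architecture. A minimal sketch of that mapping (the function name is ours, not part of Isaac), following the x86 → k8 and ARM → aarch64 rule above:

```python
import platform

# Architecture string reported by the OS -> Docker image platform tag.
# x86 hosts use the "k8" tag; ARM hosts use "aarch64".
ARCH_TO_TAG = {
    "x86_64": "k8",
    "aarch64": "aarch64",
    "arm64": "aarch64",  # some systems report ARM as arm64
}

def image_platform_tag(machine: str = platform.machine()) -> str:
    try:
        return ARCH_TO_TAG[machine]
    except KeyError:
        raise ValueError(f"unsupported architecture: {machine}")

print(image_platform_tag("x86_64"))   # k8
print(image_platform_tag("aarch64"))  # aarch64
```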
Use a web browser to navigate to http://<ip_address>:3000, where <ip_address> is the IP address of the machine on which the Docker container is running.
You should see a webpage similar to the following.

Fig. 7 Sight visualization of the Navigation Stack running with an occupancy grid map in Flatsim
This webpage contains the Sight visualization of the Navigation Stack. You should see the occupancy map and the robot moving autonomously towards the goal point, which is visualized with a red circle. You can reposition the goal by clicking and dragging the marker, causing the robot to replan its path and move to the new goal. Note that after you move the marker to a new position, you must click elsewhere for the new goal to be registered.
Refer to the Isaac Sight documentation page for more details about using visualization, including a walkthrough video.
So far, the Navigation Stack has been using an occupancy grid map for path planning. Isaac AMR also allows path planning with semantic maps, which can be used to create hand-annotated zones where a robot is not allowed to go. To use a semantic map, repeat the steps above using the following config file:
# Configuration of what robot and physics engine is used.
robot: carter
physics_engine: flatsim
# Configuration of the algorithms that are used in the navigation stack.
localization: lidar
waypoint_graph_generator: semantic-map
route_planner: onboard
path_planner: semantic-map
# Configuration specific to the environment that the robot is running in.
environment: test
omap_path: apps/assets/maps/test.png
omap_cell_size: 0.1
semantic_map_path: apps/assets/maps/test_semantic_map.json
semantic_map_initial_pose:
  translation: [0.0, 0.0, 0.0]
  rotation_rpy: [0.0, 0.0, 0.0]
robot_initial_gt_pose:
  translation: [17.5, 15.0, 0.0]
  rotation_rpy: [0.0, 0.0, 90.0]
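The rotation_rpy fields are roll, pitch, and yaw in degrees; since Flatsim simulates a flat, 2D world, only the yaw (third) component affects the robot's heading. A small sketch of what the 90-degree yaw in robot_initial_gt_pose does to the robot's forward axis:

```python
import math

def rotate_xy(x: float, y: float, yaw_deg: float):
    """Rotate a 2D point about the origin by a yaw angle in degrees."""
    a = math.radians(yaw_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# With rotation_rpy [0.0, 0.0, 90.0], the robot's forward axis (+x)
# ends up pointing along +y in the map frame.
fx, fy = rotate_xy(1.0, 0.0, 90.0)
print(round(fx, 6), round(fy, 6))  # 0.0 1.0
```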
Again, you can click and drag the goal marker in the Sight visualization. To visualize the semantic regions, enable the visualization channel single_robot_demo/semantic_map|visualizer/sight/semantic_map_polygons by clicking the empty box to the left of the channel name below the map visualization.
You will see colored polygons visualized on top of the map:
- Red: an obstacle or inaccessible area
- Light blue: a room
- Dark blue: an entrance
- Green: a generic navigable surface
Try dragging the goal marker into the red region in the bottom left corner of the map: the robot will refuse to plan a path there, because red regions mark areas where the robot is not allowed to go.

Fig. 8 Sight visualization of the navigation stack running with a semantic map in Flatsim
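How the planner decides that a goal lies inside a forbidden zone is internal to the stack, but conceptually it is a polygon containment test against the annotated regions. As an illustration only (not the stack's actual API), a standard ray-casting point-in-polygon check looks like this:

```python
def point_in_polygon(px, py, polygon):
    """Ray-casting containment test: cast a horizontal ray from
    (px, py) and count edge crossings; an odd count means inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Edge straddles the ray's y-level, and crossing is right of px?
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

# A goal inside a square "inaccessible" zone would be rejected:
zone = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
print(point_in_polygon(2.0, 2.0, zone))  # True
print(point_in_polygon(5.0, 2.0, zone))  # False
```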