Running the Navigation Stack on Carter

This section will guide you through running the Navigation Stack on a Carter robot using an Occupancy Map.

You will need the following before starting navigation:

  1. Carter V2.3 robot with 2TB storage

  2. PC (with browser) connected to the same WiFi network as the robot

  3. Joystick connected to the robot

  4. Occupancy map obtained from the Mapping workflow

  5. Internet access

  6. Access to the NVIDIA Container Registry

Verifying the Map

You should have received the following map files from NVIDIA:

  • The 2D occupancy map (.png)

  • [optional] The 3D occupancy map (.ply)

  • [optional] The semantic map file (.json), which will be loaded/viewed in the semantic labeling tool

The 2D Occupancy map (.png) and semantic labels (.json) will be used when customizing the map for the Navigation app.

You can visually check that the map is correct by opening the .png file in an image viewer or web browser. Make sure it matches the layout of your environment.

You can also visually inspect the 3D voxel map (.ply) using any freely available point-cloud or mesh viewer, such as MeshLab.

Using only Occupancy Maps

  1. Make sure the maps that you received are on the robot. We recommend storing them in /tmp. If your map is not yet on the robot, you can send it from your laptop to the robot using scp:


    scp <path/to/map/on/laptop> nvidia@<ROBOT IP>:/tmp/

    This will copy the map from your PC to the robot’s /tmp directory.

  2. SSH into the Carter robot:


    ssh nvidia@<ROBOT IP>

  3. Create a config file containing the exact configuration you want to use for the Navigation Stack. We recommend saving this config file in the same /tmp directory as the map files.

    For this tutorial, you will use the following configuration:


    robot: carter
    physics_engine: real-world
    localization: lidar
    waypoint_graph_generator: grid-map
    route_planner: onboard
    path_planner: grid-map
    omap_path: <path/to/your/omap.png>
    omap_cell_size: 0.05

    When creating this file, make sure to set the omap_path parameter to point to the occupancy map that you loaded onto the robot in step 1. If you used the recommended path in the previous step, the maps will be stored in the robot’s /tmp directory.
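As a quick sanity check of the cell size, you can compute the real-world area the map covers: each pixel of the .png corresponds to one cell of omap_cell_size meters. The pixel dimensions below are hypothetical; substitute the actual dimensions of your map image.

```shell
# Hypothetical example: a 1000 x 800 px occupancy map at 0.05 m per cell.
# Real-world extent = pixel dimensions * cell size.
awk -v w=1000 -v h=800 -v cell=0.05 \
    'BEGIN { printf "Map covers %.1f m x %.1f m\n", w*cell, h*cell }'
```

If the computed extent does not roughly match the size of your environment, double-check the omap_cell_size value before launching the stack.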

    To create this file, you may use any text editor. For example, to write the file with nano, you would use the following command:


    nano /tmp/navigation_stack_configuration.yaml

    Then, copy-paste the content from above and exit nano by pressing Ctrl+X. If asked whether you want to save the modifications, type Y.


    If you don’t have nano installed, you can install it using sudo apt install nano
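If you prefer not to use an interactive editor, the same file can be written in one shot with a shell heredoc. Note that /tmp/omap.png below is a placeholder; set omap_path to the actual map file you copied to the robot.

```shell
# Write the full configuration in one command instead of editing interactively.
# omap_path here is a placeholder -- point it at your actual map file.
cat > /tmp/navigation_stack_configuration.yaml <<'EOF'
robot: carter
physics_engine: real-world
localization: lidar
waypoint_graph_generator: grid-map
route_planner: onboard
path_planner: grid-map
omap_path: /tmp/omap.png
omap_cell_size: 0.05
EOF

# Verify the contents before launching the stack.
cat /tmp/navigation_stack_configuration.yaml
```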

  4. Run the navigation Docker container. You need to mount the directory containing your custom maps and the directory containing the created config file to the Docker container.


    docker run -it --gpus all --rm --network=host --privileged \
        -v /dev:/dev \
        -v /sys:/sys \
        -v /tmp:/tmp \
        <your_staging_area>/navigation_stack:isaac_2.0-aarch64 \
        -c /tmp/navigation_stack_configuration.yaml


    The above command assumes that both the map and config files are in the /tmp folder; thus, /tmp is mounted with the -v /tmp:/tmp flag.


    Replace <your_staging_area> with your assigned NGC Registry staging area code.

    1. Ensure the path to the config file you created is correct in the command above.

    2. When prompted, enter your sudo password.

  5. You should hear the Carter beep. You are now ready to give the Carter robot permission to move autonomously.

  6. For the Carter robot to navigate autonomously, you need to provide it with a goal. To do so, open <ROBOT IP>:3000 on a web browser on the same local network. This webpage contains the Sight Visualization tool, which allows you to visualize the robot on the map and also set goal positions.


    Refer to the Isaac Sight documentation page for more details about using visualization, including a walkthrough video.

  7. Initially, you will see a map similar to the following:


    The robot is drawn with a blue square. Verify that the robot is drawn at the correct spot in the map where it is also physically located (i.e. localized correctly). If the robot is not localized correctly, you can use the localization widget to manually seed the localization.

    In the top left corner of the map, you will see a pose marker (red circle). This is used to specify the target location. You can specify the goal by dragging-and-dropping the goal marker to any position in the map.


    After updating the goal marker’s position, you should immediately see a route (green line) being planned from the robot to the goal. Additionally, you will see a local path being planned (blue line) and a local trajectory (blue overlapping arrows).

  8. The robot is now ready to move autonomously. Press and hold the L1 button on the controller for the robot to move. While the robot is moving, you should also be able to see the visualization in sight being updated.


    The L1 button is a “deadman switch”, a precautionary measure to ensure the robot is monitored by a human operator. Do not tamper with the L1 button.


    Stay close to the robot: the joystick controller is connected to the robot over Bluetooth, which has limited range.

  9. While the robot is navigating autonomously, you can release the L1 button at any time to immediately stop it. To continue with autonomous navigation, press and hold the L1 button again. Additionally, you can use the joystick controller to override the commands from the autonomy stack and manually steer the robot.

Using Occupancy and Semantic Maps

The Navigation Stack in the previous application only uses occupancy maps for navigation. You may also want to use semantic maps to manually annotate regions that the robot should not drive through. This tutorial provides a configuration of the Navigation Stack that includes semantic maps in the navigation process.

You can follow the same instructions as before, but this time modify the config file from Step 3 to contain the following parameters:


robot: carter
physics_engine: real-world
localization: lidar
waypoint_graph_generator: semantic-map
route_planner: onboard
path_planner: semantic-map
omap_path: <path/to/my/omap.png>
omap_cell_size: 0.05
semantic_map_path: <path/to/my/semantic/map.json>

Again, make sure to replace the values for omap_path and semantic_map_path with the paths where you stored the maps on your system. As with the previous application, you can open <ip-address>:3000 to see a visualization of the robot and set a goal position.
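Since a corrupted or truncated semantic map file will only surface as an error once the stack tries to load it, a quick pre-flight check is to confirm the file parses as JSON. The path below is a placeholder, and python3 is assumed to be available on the robot.

```shell
# Validate that the semantic map parses as JSON before the stack loads it.
# /tmp/semantic_map.json is a placeholder path -- use your actual file.
if python3 -m json.tool /tmp/semantic_map.json > /dev/null 2>&1; then
    echo "semantic map: valid JSON"
else
    echo "semantic map: invalid or missing JSON"
fi
```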

More Configurations of the Navigation Stack

All configuration options in the Navigation Stack can be changed individually. Refer to the Navigation Stack Configuration section for a complete list of the configuration options.

© Copyright 2018-2023, NVIDIA Corporation. Last updated on Oct 30, 2023.