You will need the following before starting navigation:
- Carter V2.3 robot with 2 TB storage
- PC (with browser) connected to the same WiFi network as the robot
- Joystick connected to the robot
- Occupancy map obtained from the Mapping workflow
- Internet access
The Carter robot will be delivered with the Navigation App Container loaded.
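If you want to verify that the container image is present before starting, you can list the Docker images on the robot (a quick check using standard Docker commands; the image name matches the one used later in this guide):

```bash
# On the robot: confirm the Carter demo navigation image is available locally.
docker images | grep carter_demo
```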
Verifying the Map
You should have received three map files from NVIDIA:
- The 2D occupancy map (.png)
- The 3D occupancy map (.ply)
- The semantic map file (.json), which is loaded and viewed in the semantic labeling tool

The 2D occupancy map (.png) and the semantic labels (.json) are used when customizing the map for the Navigation app.
You can visually check if the map is correct by opening the .png file in a web browser. Make sure it matches the layout of your environment.
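As a rough consistency check, the map's pixel dimensions relate to physical size through the cell size used later in this guide (0.05 m per cell), so a space 20 m wide should appear roughly 400 pixels wide. A minimal sketch, assuming ImageMagick's `identify` is installed on your PC:

```bash
# Print the pixel dimensions of the occupancy map.
identify -format "%w x %h pixels\n" my_custom_map.png
# At 0.05 m per cell, e.g. 400 x 300 pixels covers 20 m x 15 m.
```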
You can also visually inspect the 3D voxel map (.ply) using any freely available mesh or point-cloud viewer.
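For example, MeshLab is one freely available viewer (an illustrative choice, not a requirement of this workflow) that opens .ply files directly:

```bash
# Open the 3D voxel map for visual inspection (requires MeshLab installed).
meshlab my_map.ply
```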
Using Only Occupancy Maps
- Make sure the maps that you received are on the robot. We recommend
storing them in `/tmp`. If your map is not yet on the robot, you can send it from your laptop to the robot using scp:

```bash
scp <path/to/map/on/laptop> nvidia@<ROBOT IP>:/tmp/
```

This copies the map from your laptop into the robot's `/tmp` directory.
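To confirm the transfer succeeded, you can list the file on the robot (a quick check; the filename shown is the example used in the next step):

```bash
# Verify the map arrived on the robot and has a non-zero size.
ssh nvidia@<ROBOT IP> ls -lh /tmp/my_custom_map.png
```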
- SSH into the Carter and run the navigation Docker container. It is
important that the directory containing your custom maps is mounted into the container. The following command assumes that your custom map lies in /tmp, so we mount /tmp with the -v /tmp:/tmp flag.

```bash
ssh nvidia@<ROBOT IP>
docker run -it --gpus all --rm --network=host --privileged \
  -v /dev:/dev \
  -v /sys:/sys \
  -v /tmp:/tmp \
  nvcr.io/mfql6xnjuziw/carter_demo_occupancy_environment_custom:release-isaac2.0-ea-rc2-aarch64 \
  --param=occupancy_grid_map/image_loader/filename=/tmp/my_custom_map.png \
  --param=occupancy_grid_map/map/cell_size=0.05
```
Note: Replace the staging area in the image path above (`mfql6xnjuziw` in the command shown) with your assigned NGC Registry staging area.
Ensure you are using the correct name of the created map in the command above.
When prompted, enter your sudo password.
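If you want to confirm that the container actually started, you can check from a second SSH session on the robot (a simple sketch using standard Docker commands):

```bash
# List running containers; the carter_demo image should appear here.
docker ps --format '{{.Image}}\t{{.Status}}'
```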
- You should hear the Carter beep. You are now ready to give the Carter
robot permission to move autonomously.
- For the Carter robot to navigate autonomously, you need to provide it
with a goal. To do so, open <ROBOT IP>:3000 in a web browser on the same local network. This webpage contains the Sight visualization tool, which allows you to visualize the robot on the map and set goal positions.
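If the page does not load, you can first confirm that the Sight server is reachable from your PC (a basic connectivity check; assumes curl is available):

```bash
# Any HTTP response indicates the Sight web server is up and reachable.
curl -I http://<ROBOT IP>:3000
```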
Initially, you will see the occupancy map of your environment rendered in Sight.
The robot is drawn as a blue square. Verify that the robot is drawn at the spot in the map where it is physically located (i.e., it is localized correctly).
In the top left corner of the map, you will see a pose marker (red circle), which is used to specify the target location. You can set the goal by dragging and dropping the marker to any position on the map.
After updating the goal marker's position, you should immediately see a route (green line) planned from the robot to the goal. Additionally, you will see a local path (blue line) and a local trajectory (blue overlapping arrows).
- The robot is now ready to move autonomously. Press and hold the L1
button on the controller for the robot to move. While the robot is moving, you should also see the visualization in Sight being updated.
Note: The L1 button is a "deadman switch," a precautionary measure to ensure the robot is monitored by a human operator. Do not tamper with the L1 button.
Note: Stay close to the robot, as the joystick is connected to it over Bluetooth, which has limited range.
- While the robot is navigating autonomously, you can release the L1
button at any time to stop it immediately. To continue autonomous navigation, press and hold the L1 button again. Additionally, you can use the joysticks to override the commands from the autonomy stack and manually steer the robot.
Using Occupancy and Semantic Maps
The Navigation Stack in the previous application only uses occupancy maps for navigation. You may also want to use semantic maps to manually annotate regions that the robot should not drive through. This tutorial provides a configuration of the navigation stack that includes semantic maps in the navigation process.
You can follow exactly the same instructions as before, but this time replace the command that launches the docker container in step 2 with the following:
```bash
docker run -it --gpus all --rm --network=host --privileged \
  -v /dev:/dev \
  -v /sys:/sys \
  -v /tmp:/tmp \
  nvcr.io/mfql6xnjuziw/carter_demo_semantic_environment_custom:release-isaac2.0-ea-rc2-aarch64 \
  --param=occupancy_grid_map/image_loader/filename=/tmp/my_custom_occupancy_map.png \
  --param=occupancy_grid_map/map/cell_size=0.05 \
  --param=semantic_map/map/map_data_path=/tmp/my_custom_semantic_map.json
```
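Before launching, you may want to confirm that the semantic map file parses as valid JSON (a quick sanity check; assumes python3 is available on the robot):

```bash
# A parse error here means the semantic map file is malformed.
python3 -m json.tool /tmp/my_custom_semantic_map.json > /dev/null && echo "valid JSON"
```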
As with the previous application, you can open <ROBOT IP>:3000 to see a visualization of the robot and set a goal position.
Using a Default Empty Map
In case you don’t yet have a custom map of your environment, we also provide a default “empty” map that can be used to test the navigation stack. To use this map, you need an empty area of roughly 5 m x 5 m with a physical border. The border must be visible to the robot’s lidar sensor, i.e., it should be approximately 50 cm high. If this border is not present, the robot will not be able to localize, and you will not be able to try out the navigation stack.
Place your Carter robot in the middle of the 5 m x 5 m area and start the navigation stack following the steps above, but this time use the following command in step 2:
```bash
docker run -it --gpus all --rm --network=host --privileged \
  -v /dev:/dev \
  -v /sys:/sys \
  -v /tmp:/tmp \
  nvcr.io/mfql6xnjuziw/carter_demo_occupancy_environment_custom:release-isaac2.0-ea-rc2-aarch64 \
  --param=occupancy_grid_map/image_loader/filename=/app/apps/amr/navigation_stack/carter_demo_occupancy_environment_custom.runfiles/com_nvidia_isaac_sdk/apps/assets/maps/empty.png \
  --param=occupancy_grid_map/map/cell_size=0.05
```