How can I access ``nvcr.io``?
nvcr.io is NVIDIA's Docker container registry. You can access it with the NGC credentials you received from your NVIDIA representative. First, follow these instructions to install Docker on your system. The instructions describe multiple ways to install Docker; we recommend installing Docker using the apt repository. Please also follow the optional post-installation steps here to run Docker without sudo. Next, follow the instructions here to access nvcr.io.
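A minimal sketch of the typical flow is shown below, assuming an Ubuntu host and an NGC API key; the linked instructions remain the authoritative reference:
# Run Docker without sudo (standard Docker post-installation steps)
sudo groupadd docker
sudo usermod -aG docker $USER
# Log out and back in (or run `newgrp docker`), then authenticate to nvcr.io
docker login nvcr.io
# Username: $oauthtoken
# Password: <your NGC API key>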
What is ``<your_staging_area>``?
<your_staging_area> is the NGC Registry staging area code that has been assigned to you by your NVIDIA representative. You need this code to access any Docker containers or other assets stored on NGC. If you do not have an NGC staging area code, please contact your NVIDIA representative.
How do I Pair the Bluetooth Joystick with the Carter Robot?
The included controller comes pre-paired. If the controller fails to pair, you can use the instructions below to re-pair it.
To pair the joystick with the robot, put the controller in pairing mode by pressing and holding the two buttons shown in the image below until the controller lights start blinking:
Then connect to the robot over SSH and pair the controller:
ssh nvidia@<ROBOT IP>
$ sudo bluetoothctl
[bluetooth]# scan on    // This should list the available devices
Discovery started
[CHG] Controller 90:E8:68:84:35:24 Discovering: yes
[NEW] Device D4:20:B0:42:66:4F D4-20-B0-42-66-4F
[NEW] Device D4:20:B0:42:5C:CC D4-20-B0-42-5C-CC
[NEW] Device D4:20:B0:42:64:1A D4-20-B0-42-64-1A
[NEW] Device 7C:66:EF:5D:36:B1 Wireless Controller
[NEW] Device D4:20:B0:42:C6:80 D4-20-B0-42-C6-80
[NEW] Device D4:20:B0:42:5B:37 D4-20-B0-42-5B-37
[bluetooth]# scan off
[bluetooth]# pair 7C:66:EF:5D:36:B1
// Upon successful pairing, it should ask you to type yes/no to trust the device. Type "yes". If not prompted:
[bluetooth]# trust 7C:66:EF:5D:36:B1
[bluetooth]# exit
Jetson Won’t Connect to the Monitor Unless Ethernet Is Connected
This is a known bug. We recommend keeping the Carter robot connected to Ethernet during the initial software setup procedure described in the documentation.
I Got Disconnected from Isaac Sight during Data Recording
You just got disconnected from the recorder app with the message “You have been disconnected: Maximum number of connections already reached”. What should you do?
The robot is designed to allow only one connection at a time, so someone else may have tried to use the same Carter robot.
To disconnect other users, right-click the NVIDIA logo in the corner, select "Disconnect Others", and regain access to the UI.
While recording, Is the Data Divided into Multiple POD Files?
Data recordings are split every 8 minutes to keep file sizes manageable. This also prevents losing large amounts of data due to file corruption. The maximum recording length can be overridden with the following command line flag:
--param=recorder.pod_recorder/pod_recorder/max_duration=<value>
Note that max_duration is specified in seconds.
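For example, to split recordings every 15 minutes instead of every 8 minutes (the value 900 is chosen purely for illustration):
--param=recorder.pod_recorder/pod_recorder/max_duration=900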
What’s the Difference between COMPLETED and Ready (in the Recorder App)?

“COMPLETED” means the file uploaded successfully.
“Ready” means the file is ready to be uploaded.
Does the Recorder App need to be running for data validation?
Once data has been collected and uploaded for data validation, the recorder app can be closed and Carter can be powered off or used for other tasks.
I am seeing a lot of data loss from the lidar
This can occur if there is not enough space left on the main drive. Check the remaining space with
df -h /dev/mmcblk0p1
and remove unnecessary files if Use% is at or above 99%.
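If you need to find what is consuming the space, generic Linux tools such as du can help; nothing here is Carter-specific, and the path is just an example to adjust:
sudo du -xh --max-depth=2 /home | sort -h | tail -n 20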
My robot is not going to the location I intend.
Note that the default behavior is for the origin to be in the top left, with X referencing rows and Y referencing columns. Mission Control works with world == map frame by default. This can be adjusted in the Waypoint Graph Generator configuration via translation and rotation.
How do I perform per-service debugging?
Refer to the following local pages:
Mission Control: http://localhost:8050/api/v1/docs
Waypoint Graph Generator: http://localhost:8000/v1/docs
Mission Dispatch: http://localhost:5002/docs
Mission Database: http://localhost:5003/docs
The routes aren’t what I expect.
Load the Waypoint Graph Generator FastAPI docs (http://localhost:8000/v1/docs).
Use the GET /graph/visualize endpoint with the map_id from your defaults.yaml file.
cuOpt will then optimize the overall route based on nodes in the graph.
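As a sketch, the same endpoint can also be called from the command line; the /v1 prefix and the map_id query parameter shown here are assumptions based on the docs URL above, so confirm the exact schema in the FastAPI page:
curl "http://localhost:8000/v1/graph/visualize?map_id=<map_id>"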
After aborting a mission, I can no longer submit missions to that robot.
If a mission is started and never completed, your mission/robot may end up in an unrecoverable state. Depending on the expiry, this may not resolve itself. You can either refer to the Mission Dispatch FastAPI docs (http://localhost:5002/docs) to delete the mission in progress, or purge the Postgres instance with the following command:
docker container rm external-postgres-1
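For the first option, a hypothetical sketch of deleting the stuck mission through the API is shown below; the exact route and identifier name are assumptions, so confirm them in the FastAPI docs before use:
curl -X DELETE "http://localhost:5002/mission/<mission_id>"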
Can I turn on debug mode?
In bringup_isaac_cloud.yaml, you can enable Mission Control verbose mode by changing the debug mode from INFO to DEBUG. This can be helpful in debugging certain issues.
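Purely as a hypothetical illustration (the actual key name and structure in bringup_isaac_cloud.yaml may differ; locate the existing INFO entry in your file and change it):
# bringup_isaac_cloud.yaml (hypothetical excerpt)
log_level: DEBUG   # previously INFO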
Navigational commands are out of bounds
A common issue is giving a target location that is a non-navigable surface (e.g. an X/Y in the middle of a wall). This will result in a mission failure.
I am unsure whether my robot is connected.
In the Mission Dispatch FastAPI page (http://localhost:5002/docs), in the robot section, select “GET /robot”, “Try it out”, and “Execute”. If you don’t see your robot there, either Mission Control did not create the robot because the name is different, or your robot is not communicating with the MQTT channel.
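Equivalently, a quick sketch from the command line, assuming the API is served at the same host and port as the docs page:
curl -X GET "http://localhost:5002/robot"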
I encounter the error message ``AuthenticationService disabled by empty vehicle token``
Ensure that you set the vehicle_token. Additionally, if you come across the error "security requirements failed: error getting JWT from Authorization header: InvalidArgument", verify that your vehicle_token is correct.
I encounter an ``Address in Use`` Error with the Mosquitto Daemon.
If you encounter the error message tutorials-mosquitto-1 Error: Address in use, it is likely because another Mosquitto daemon is already running. To resolve this issue, stop the conflicting daemon using the following commands:
ps aux | grep mosquitto
sudo systemctl stop mosquitto # mosquitto service name might be different
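If the daemon was not started through systemd (for example, it was launched by hand or inside another container), stop it directly using the PID reported by ps:
sudo kill <PID>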
I encounter the error message ``non-navigable surface``
If you come across this error message, follow these steps to resolve it:
Verify the resolution in the defaults.yaml file to ensure it is correct. The resolution is measured in pixels per meter and should align with the occupancy map you are using.
Make sure that the mission's designated point is not positioned on an obstruction. For occupancy maps, ensure it is within a non-occupied area; for semantic maps, ensure it falls on a navigable surface.
Double-check the accuracy of the translation and rotation values in the defaults.yaml file to ensure they are set correctly. The translation and rotation represent the transformation from the map frame to the world frame.
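A hypothetical sketch of the relevant defaults.yaml entries follows; the key names, units, and values shown are assumptions, so match them against your actual file:
resolution: 20           # pixels per meter, must match the occupancy map
translation: [0.0, 0.0]  # offset from map frame to world frame
rotation: 0.0            # rotation from map frame to world frame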
How do I use the Navigation Stack in a custom map?
To use the Navigation Stack in a custom map, you need to pass the map information to the navigation stack using the following CLI args:
--omap-path=<path/to/occupancy-grid-map>
--omap-cell-size=<cell-size-of-omap-in-meters>
--semantic-map-path=<path/to/semantic-map>
In a YAML configuration file, the same options would be as follows:
omap_path: <path/to/occupancy-grid-map>
omap_cell_size: <cell-size-of-omap-in-meters>
semantic_map_path: <path/to/semantic-map>
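As an illustrative sketch with the placeholders filled in (the paths, file formats, and cell size shown here are hypothetical):
omap_path: /maps/my_building/occupancy_map.png
omap_cell_size: 0.05
semantic_map_path: /maps/my_building/semantic_map.json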