Visualization with Isaac Sight
This section explains how to use Isaac Sight to inspect the navigation stack and see what is currently happening on the robot. Sight is backed by an Isaac node that runs a web service on the device. Open Isaac Sight by navigating to
http://localhost:3000 in the Chrome web browser.
(Other browsers may work but are not officially supported in this release.)
When you open Isaac Sight you will see a view similar to the following:
In the middle section you can see multiple windows with 2D drawings which visualize various aspects of the navigation stack. Some of these drawings appear by default each time the flatsim, navsim_navigate, or carter applications are started. Drawings and plots are composed from various channels with visualization data published by the robotics applications.
To the left is a list of available channels with visualization data. These are published directly inside codelets of the navigation stack. A typical robotic application can contain hundreds of channels with visualization data. Channels can be disabled using the checkboxes next to their names. For a disabled channel, visualization data is no longer sent from the application to the frontend, and the data is no longer updated there. It remains visible to allow inspection of already received data.
Use the channel name to track down where the data is published. For example, a channel may be published by a node called local_map inside the application flatsim, specifically by a component inside this node called isaac.navigation.LocalMap (the default name for a component of type isaac::navigation::LocalMap). This component publishes the data under the label local_map, which is the last section of the channel name.
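The node/component/label structure described above can be illustrated with a short sketch. This is not part of the Isaac SDK; it only assumes, for illustration, that a channel name is a slash-separated string of those three sections.

```python
# Sketch: split a Sight-style channel name into its parts, assuming a
# slash-separated node/component/label layout (an assumption for
# illustration; the real delimiter and layout may differ).
def parse_channel_name(channel: str) -> dict:
    node, component, label = channel.split("/", 2)
    return {"node": node, "component": component, "label": label}

# The example from the text: node "local_map", component
# "isaac.navigation.LocalMap", label "local_map".
parts = parse_channel_name("local_map/isaac.navigation.LocalMap/local_map")
```

Reading a channel name this way makes it straightforward to locate the codelet that produced a drawing.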
The Map view shows high-level information about the robot in its environment. Use it to get a quick overview of what is currently going on. Visualization when running the navigation stack in simulation with the flatsim or navsim_navigate applications is mostly identical to that of a real robot using the carter applications. When navsim_navigate or the carter applications are used, a 3D point cloud visualization is also available.
- Background: The global view shows a map of the environment as a black/grey/white occupancy grid map. Black cells indicate that something is blocking the way of the robot there, while white cells indicate that the path is free. The grey area indicates that the robot does not know if the way is clear or blocked.
- Robot pose estimate: The estimated pose of the robot is shown as a blue circle with a little notch. The localizer uses a multi-modal hypothesis estimator and thus multiple robot pose indicators that slightly differ in pose may be displayed.
- Target and path: The current target point and the path that the global planner computes are displayed in red and blue. The last position from which the global plan was computed is also displayed.
- Current measurement: The currently measured laser beams are visualized in light green. The endpoints where laser beams hit an obstacle are visualized as red circles. The range scan is shown from the perspective of the current best estimate of the robot pose. If the robot is localized well, the beam endpoints match the blocked cells in the map.
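The consistency check described in the last bullet, beam endpoints landing on blocked map cells when localization is good, can be sketched numerically. The grid encoding below (1 = blocked, 0 = free) and the function name are illustrative assumptions, not the map format Sight actually uses.

```python
import numpy as np

# Sketch: a rough localization sanity check. Assumes an occupancy grid
# where 1 marks a blocked cell and 0 a free cell (an assumption; the real
# map format may differ). A well-localized robot should have most laser
# endpoints land on blocked cells.
def endpoint_match_ratio(grid, endpoints):
    """grid: 2D int array; endpoints: list of (row, col) cells hit by beams."""
    if not endpoints:
        return 0.0
    hits = 0
    for r, c in endpoints:
        # Count endpoints that fall inside the grid on a blocked cell.
        if 0 <= r < grid.shape[0] and 0 <= c < grid.shape[1] and grid[r, c] == 1:
            hits += 1
    return hits / len(endpoints)

grid = np.zeros((5, 5), dtype=int)
grid[2, 3] = 1  # one blocked cell, e.g. a wall segment
ratio = endpoint_match_ratio(grid, [(2, 3), (0, 0)])  # one hit, one miss
```

A low ratio suggests the pose estimate is off: the scan is being rendered from the wrong viewpoint, so its endpoints no longer line up with obstacles in the map.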
This view shows obstacles around the robot. Use it to analyze why the robot might not be moving or is unable to reach the desired target.
- Background: The background shows obstacles around the robot in a dynamic grid map. The grid is updated continuously from current sensor measurements.
- Robot pose: As the grid map is always centered around the robot, the robot appears at a fixed position in the upper half of the view. The forward direction of the robot points downwards in this view.
- Current plans: Both the local plan from the trajectory planner and the global plan from the graph-based planner are shown. The robot attempts to follow the plan from the trajectory generator.
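The robot-centric layout described above (robot at a fixed cell, forward axis pointing down the view) amounts to a simple coordinate transform. The sketch below is a minimal illustration; the cell size, the fixed robot cell, and the function name are all assumptions, not values from Sight.

```python
import math

# Sketch: map a world-frame point into a robot-centric grid cell, with the
# robot pinned to a fixed cell and its forward direction mapping to
# increasing row index (downwards in the view). Cell size and robot cell
# are illustrative assumptions.
def world_to_egocentric_cell(px, py, robot_x, robot_y, robot_heading,
                             cell_size=0.1, robot_cell=(20, 40)):
    # Express the point in the robot frame (rotate world offset by -heading).
    dx, dy = px - robot_x, py - robot_y
    fwd = dx * math.cos(robot_heading) + dy * math.sin(robot_heading)
    left = -dx * math.sin(robot_heading) + dy * math.cos(robot_heading)
    # Forward maps to increasing row, i.e. downwards in the view.
    row = robot_cell[0] + int(round(fwd / cell_size))
    col = robot_cell[1] + int(round(left / cell_size))
    return row, col
```

Because the grid moves with the robot, obstacles drift through the view as the robot drives, while the robot cell itself never changes.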