NavSim

NavSim is an Isaac simulator for navigation based on Unity3D. It provides test environments to evaluate the performance of the Isaac navigation stack. It can also generate fully annotated, procedurally randomized training data for machine learning. Its features include interoperability between Isaac and Unity3D, emulation of sensor hardware and robot drive models, basic asset randomization, and scenario management.

Unity3D uses C# while Isaac is built with C++. Passing messages between these two domains requires marshalling data through a C API. The simulator publishes and receives messages on nodes that are created by the Isaac simulator application. The Isaac simulator application is a fully functional Isaac application that can adapt to different use cases.

IsaacNavSimMarshal.png

Download the NavSim binary from the Isaac Developer Downloads website and unzip it to packages/navsim/unity. The archive contains a single executable, navsim.x86_64, which includes three scenes with different Isaac applications, as detailed below.
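
For example, assuming the archive was downloaded to ~/Downloads/navsim.zip and unpacks directly into the target directory (the actual file name and archive layout may differ), it can be extracted from the Isaac root directory as follows:

bob@desktop:~/isaac$ unzip ~/Downloads/navsim.zip -d packages/navsim/unity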

  • warehouse: A small warehouse environment to test and demonstrate the Isaac navigation stack. By default it uses Carter, but other robots and scenarios can be chosen as well.
  • rng_warehouse: A randomized version of the warehouse scene. Objects are placed randomly, and the lights and textures of the environment are randomized.
  • object_expo: A simulation environment for training perception-based neural networks. It randomly changes object and camera pose, lighting, and the environment. Sensor data is annotated with pixel-wise class and instance labels and bounding boxes.

For example, to start the warehouse scene with the default application, run the following commands:

bob@desktop:~/isaac$ cd packages/navsim/unity
bob@desktop:~/isaac/packages/navsim/unity$ ./navsim.x86_64 --scene warehouse

Use the command line argument --scene followed by the name of the scene to select which scene to run.

Once a scene is running, use the navsim_viewer_tcp app to visualize the camera data currently published by NavSim in WebSight. To start the viewer, run the following command:

bob@desktop:~/isaac$ bazel run packages/navsim/apps:navsim_viewer_tcp

Note that different numbers and types of cameras are enabled in different scenes. For example, the warehouse scene has only one color camera and one depth camera, while the rng_warehouse scene also has instance and label cameras. Only the enabled channels can be viewed in WebSight.

The warehouse Scene

The warehouse scene is provided to test Carter navigation in a warehouse setting. First, run the warehouse scene with the default application:

bob@desktop:~/isaac/packages/navsim/unity$ ./navsim.x86_64 --scene warehouse

Second, run the application with the Carter navigation stack:

bob@desktop:~/isaac$ bazel run //apps/navsim:navsim_navigate -- --more packages/navsim/maps/small_warehouse.json,packages/navsim/robots/carter.json

The navsim_navigate application is very similar to the Carter application for the real robot. Instead of the subgraph that launches the hardware drivers, it uses the navsim subgraph, which communicates with the simulator via TCP sockets. The configuration files for the map of the simulated warehouse and for the robot used in simulation are passed as additional command-line parameters via --more.

The following screenshots show the Unity top-down camera view and Isaac Sight while the simulation is running. In the top-down camera view, the goal pose is shown by a ghostly green Carter, and the global and local plans are shown as red and green lines, respectively. In Sight, the Map View window shows Carter’s localization using the flatscan LIDAR, the Speed Profile window shows the commanded and observed speeds of the differential base, and the Color Camera window shows Carter’s first-person view.

warehouse_gameview.png

warehouse_sight.png

Available Channels

The following is a general list of the data channels available from NavSim. Note that the set of available channels depends on the agent, i.e., the robot or sensor rig, used by the scene or scenario.

  • interface/output/flatscan: A 2D range scan. Available for all robots.
  • interface/output/color: A color image rendered by the main camera. Note that the stereo sensor rig considers the left camera to be the main camera and additionally provides the channel color_right for the image rendered by the right camera.
  • interface/output/depth: Depth information (in meters) for the main camera. The stereo sensor rig additionally provides the depth_right channel.
  • interface/output/segmentation: Pixel-wise class and instance labels for the main camera image.
  • interface/output/bounding_boxes: Bounding boxes for objects of interest computed based on the segmentation image. Boxes currently only cover the visible part of an object.
  • interface/output/imu_raw: Raw data from a simulated IMU. Note that IMU simulation is currently not very accurate and will be improved in the future.
  • interface/output/base_state: State updates, e.g., linear and angular speed, from the simulated differential base used by some of the robots. Currently only available in the warehouse scene.
  • interface/input/base_cmd: Commands used to drive the simulated differential base used by some of the robots. Currently only available in the warehouse scene.

Scenario Creation

The warehouse scene includes two different scenarios. Use the command line argument --scenario n, where n is 0 or 1, to select which scenario to run.
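
For example, to start the warehouse scene with scenario 0 (described below), run the following command:

bob@desktop:~/isaac/packages/navsim/unity$ ./navsim.x86_64 --scene warehouse --scenario 0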

The default scenario (n=1) allows you to interactively change the scene or the goal in the NavSim window. The obstacles and goal can be dragged around using the mouse. In addition, pressing the R key creates randomly placed obstacles in the scene. These obstacles can also be moved with the mouse, or re-randomized by pressing R. Play with these interactive features to create a desired scenario, then launch the navsim_navigate application to see how Carter navigates in your customized scenario.

The first scenario (n=0) in this scene demonstrates how to load a scenario from a JSON file. The default scenario JSON file is shown below:

{ "scene": "warehouse", "description": "Drive from garage door to back of warehouse, encounter an unexpected obstacle", "robots": [ { "prefab": "Carter_Wheeled", "name": "robot", "node": "navsim", "pose": [0.931374, 0, 0, 0.364064, 4.91, 0.85, 0.05] } ], "frames": [ { "name": "pose_as_goal", "pose": [0.707107, 0, 0, 0.707107, -8.12, -6, 0] } ], "obstacles": [ { "prefab": "Cylinder", "name": "obstacle in hallway", "pose": [1, 0, 0, 0, 0.13, -4.67, 1] } ] }

Here robots:prefab defines the name of the robot, robots:pose defines the starting pose of the robot, frames defines the goal pose, and obstacles defines the type and pose of the obstacles in this scenario. For robots, the supported prefabs include Carter_Wheeled, RectangleRobot, TriangleRobot, and CircleRobot. When you select a different robot, you must also use the corresponding configuration file when running the navigate app. For example, when using RectangleRobot, run the navigation application as follows:

bob@desktop:~/isaac$ bazel run //apps/navsim:navsim_navigate -- --more packages/navsim/maps/small_warehouse.json,packages/navsim/robots/rectangle_robot.json

Note that the robot configuration file packages/navsim/robots/carter.json is replaced with packages/navsim/robots/rectangle_robot.json. These files specify the robots’ shapes and sensor placements, which the navigation stack needs for accurate localization and path planning. For obstacles, the supported prefabs include Cube, Cylinder, Box03, Dolly, Pushcart, and TrashCan02.

You can create your own scenario JSON file and run the simulation with it using --scenario 0 --scenarioFile AbsolutePathToYourJsonFile.
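
As an illustration, the following sketch of a custom scenario file swaps in RectangleRobot and two obstacles from the supported prefab lists above. The poses, names, and file path are placeholder values for illustration, not values from a shipped example:

{
  "scene": "warehouse",
  "description": "Illustrative custom scenario: RectangleRobot drives past a dolly and a cube",
  "robots": [
    {
      "prefab": "RectangleRobot",
      "name": "robot",
      "node": "navsim",
      "pose": [1, 0, 0, 0, 4.91, 0.85, 0.05]
    }
  ],
  "frames": [
    {
      "name": "pose_as_goal",
      "pose": [0.707107, 0, 0, 0.707107, -8.12, -6, 0]
    }
  ],
  "obstacles": [
    {
      "prefab": "Dolly",
      "name": "dolly in aisle",
      "pose": [1, 0, 0, 0, 0.13, -4.67, 1]
    },
    {
      "prefab": "Cube",
      "name": "cube near goal",
      "pose": [1, 0, 0, 0, -6.5, -5.0, 0.5]
    }
  ]
}

Then start the simulation with the custom file, remembering to run the navigate app with rectangle_robot.json as shown above:

bob@desktop:~/isaac/packages/navsim/unity$ ./navsim.x86_64 --scene warehouse --scenario 0 --scenarioFile /home/bob/my_scenario.json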

The object_expo Scene

The object_expo scene is provided for generating object segmentation or object detection training data. This scene randomly places an object from a Unity AssetBundle in an environment with randomized materials. The application publishes color and depth images, pixel-wise instance and class label ground truth, and labeled ground-truth bounding boxes.

object.png

Run the scene with the default application:

bob@desktop:~/isaac/packages/navsim/unity$ ./navsim.x86_64 --scene object_expo

Once a scene is running, use the navsim_viewer_tcp app to view the training data. To start the viewer, run the following command:

bob@desktop:~/isaac$ bazel run packages/navsim/apps:navsim_viewer_tcp

View the published data at http://localhost:3000/ in the Color Camera and Label Camera windows.

Training YOLO with NavSim

See the Object Detection Pipeline page for an example application for training YOLO with NavSim.

The rng_warehouse Scene

Use the rng_warehouse scene to generate segmentation training data. This scene applies randomization to the materials of the walls and floor. It also provides teleportation of a camera group containing a color camera and a segmentation camera, which lets you capture multiple different views of the environment.

The scene provides you with a label for the ground called “floor” with the pixel value 1. It publishes the color camera image along with a segmentation image, where each pixel belonging to the ground class has a class label of 1 and all other pixels have a class label of 0.

Run the scene with the default application:

bob@desktop:~/isaac/packages/navsim/unity$ ./navsim.x86_64 --scene rng_warehouse

For further information on using this scene for path segmentation, refer to the section on freespace segmentation.

navsim_segmentation.png

© Copyright 2019, NVIDIA Corporation. Last updated on Feb 1, 2023.