Isaac Sim (NavSim with Unity 3D)

NavSim is an Isaac simulator for navigation based on Unity3D. It provides test environments to evaluate the performance of the Isaac navigation stack. It can also be used to procedurally generate fully annotated training data for machine learning. Its features include interoperability between Isaac and Unity3D, emulation of sensor hardware and robot drive models, basic asset randomization, and scenario management.

Unity3D uses C#, while Isaac is built with C++. Passing messages between these two domains requires marshalling data through a C API. The simulator publishes and receives messages on nodes created by the Isaac simulator application, which is a fully functional Isaac application that can adapt to different use cases.

../../../_images/IsaacNavSimMarshal.png

Running NavSim

Download the NavSim binary from the Isaac Developer Downloads website and unzip it as packages/navsim/unity. The archive contains a single executable, navsim.x86_64, which includes three scenes with different Isaac applications, as detailed below.

The three scenes are:

  • warehouse: A small warehouse environment to test and demonstrate the Isaac navigation stack. By default it uses Carter, but other robots and scenarios can be chosen as well.
  • rng_warehouse: A randomized version of the warehouse scene. Objects are placed randomly, and the lights and textures of the environment are randomized.
  • object_expo: A simulation environment for training perception-based neural networks. It randomly varies object and camera poses, lighting, and the environment. Sensor data is annotated with pixel-wise class and instance labels, and bounding boxes.

For example, to start the warehouse scene with the default application, run the following commands:

bob@desktop:~/isaac$ cd packages/navsim/unity
bob@desktop:~/isaac/packages/navsim/unity$ ./navsim.x86_64 --scene warehouse

Use the command line argument --scene followed by the name of the scene to select which scene to run.

Once a scene is running, use the navsim_viewer_tcp app to visualize the camera data currently published by NavSim in WebSight. To start the viewer, run the following command:

bob@desktop:~/isaac$ bazel run packages/navsim/apps:navsim_viewer_tcp

Note that different numbers and types of cameras are enabled in different scenes. For example, the warehouse scene only has one color camera and one depth camera, while the rng_warehouse also has instance and label cameras. Only the enabled channels can be viewed in WebSight.

Warehouse Scene Navigation

The warehouse scene is provided to test Carter navigation in a warehouse setting. First run the warehouse scene with the default application:

bob@desktop:~/isaac/packages/navsim/unity$ ./navsim.x86_64 --scene warehouse

Then run the navsim_navigate application with the Carter navigation stack:

bob@desktop:~/isaac$ bazel run //apps/carter/navsim:navsim_navigate -- --more packages/navsim/maps/warehouse.json,packages/navsim/robots/carter.json

The navsim_navigate application is very similar to the Carter application for the real robot. Instead of the hardware subgraph, which launches the hardware drivers, it uses the navsim subgraph, which communicates with the simulator over TCP sockets. The configuration files for the map of the simulated warehouse and for the robot used in simulation are passed as additional command line parameters via --more.

The following shows screenshots of the Unity top-down camera view and of Isaac Sight while the simulation is running. In the top-down camera view, the goal pose is shown by a ghostly green Carter, and the global and local plans are shown as red and green lines, respectively. In Sight, the Map View window shows Carter’s localization using the flatscan LIDAR, the Speed Profile window shows the commanded and observed differential base speeds, and the Color Camera window shows Carter’s first-person view.

../../../_images/warehouse_gameview.png ../../../_images/warehouse_sight.png

Available Channels

The following is a general list of data channels available from NavSim. Note that the set of available channels depends on the agent, i.e., the robot or sensor rig, used by the scene or scenario. A sketch of how an application graph can consume these channels follows the list.

  • interface/output/flatscan: A 2D range scan. Available for all robots.
  • interface/output/color: A color image rendered by the main camera. Note that the stereo sensor rig considers the left camera to be the main camera and additionally provides the channel color_right for the image rendered by the right camera.
  • interface/output/depth: Depth information (in meters) for the main camera. The stereo sensor rig additionally provides the depth_right channel.
  • interface/output/segmentation: Pixel-wise class and instance labels for the main camera image.
  • interface/output/bounding_boxes: Bounding boxes for objects of interest computed based on the segmentation image. Boxes currently only cover the visible part of an object.
  • interface/output/imu_raw: Raw data from a simulated IMU. Note that the IMU simulation is currently not very accurate and will be improved in the future.
  • interface/output/base_state: State updates, e.g., linear and angular speed, from the simulated differential base used by some of the robots. Currently only available in the warehouse scene.
  • interface/input/base_cmd: Commands used to drive the simulated differential base used by some of the robots. Currently only available in the warehouse scene.
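
As an illustration of how these channels plug into an application graph, the fragment below wires the flatscan output into a receiving component and sends drive commands back to the simulator, using the standard node/component/channel naming of edges in an Isaac application JSON file. The my_listener and my_commander names are hypothetical placeholders for components in your own application:

{
  "graph": {
    "edges": [
      {
        "source": "interface/output/flatscan",
        "target": "my_listener/my_component/flatscan"
      },
      {
        "source": "my_commander/my_component/base_command",
        "target": "interface/input/base_cmd"
      }
    ]
  }
}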

Scenario Creation

The warehouse scene includes two different scenarios. Use the command line argument --scenario n, where n is 0 or 1, to select which scenario to run.
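
For example, to start the warehouse scene with scenario 0, run:

bob@desktop:~/isaac/packages/navsim/unity$ ./navsim.x86_64 --scene warehouse --scenario 0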

The default scenario (n=1) allows you to interactively change the scene or the goal in the NavSim window. The obstacles and goal can be dragged around using the mouse. In addition, pressing the R key creates randomly placed obstacles in the scene. These obstacles can also be moved with the mouse, or re-randomized by pressing R. Play with these interactive features to create a desired scenario, then launch the navsim_navigate application to see how Carter navigates in your customized scenario.

The first scenario (n=0) in this scene demonstrates how to load a scenario from a JSON file. The default scenario JSON file is shown below:

{
  "scene": "warehouse",
  "description": "Drive from garage door to back of warehouse, encounter an unexpected obstacle",
  "robots": [
    {
      "prefab": "Carter_Wheeled",
      "name": "robot",
      "node": "navsim",
      "pose": [0.931374, 0, 0, 0.364064, 4.91, 0.85, 0.05]
    }
  ],
  "frames": [
    {
      "name": "pose_as_goal",
      "pose": [0.707107, 0, 0, 0.707107, -8.12, -6, 0]
    }
  ],
  "obstacles": [
    {
      "prefab": "Cylinder",
      "name": "obstacle in hallway",
      "pose": [1, 0, 0, 0, 0.13, -4.67, 1]
    }
  ]
}

Here robots:prefab defines the robot model, robots:pose defines the starting pose of the robot, frames defines the goal pose, and obstacles defines the type and pose of the obstacles in this scenario. For robots, the supported prefabs include Carter_Wheeled, RectangleRobot, TriangleRobot, and CircleRobot. When you select a different robot, you must also use the corresponding configuration file when running the navigation application. For example, when using RectangleRobot, run:

bob@desktop:~/isaac$ bazel run //apps/carter/navsim:navsim_navigate -- --more packages/navsim/maps/warehouse.json,packages/navsim/robots/rectangle_robot.json

Note that the robot configuration file packages/navsim/robots/carter.json is replaced with packages/navsim/robots/rectangle_robot.json. These files specify the robot shape and sensor placement, which the navigation stack needs for accurate localization and path planning. For obstacles, the supported prefabs include Cube, Cylinder, Box03, Dolly, Pushcart, and TrashCan02.
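
For illustration, a scenario file that swaps in RectangleRobot might look like the sketch below. The poses are copied from the default scenario above, the description is arbitrary, and the obstacles section is omitted:

{
  "scene": "warehouse",
  "description": "Drive a rectangular robot from the garage door to the back of the warehouse",
  "robots": [
    {
      "prefab": "RectangleRobot",
      "name": "robot",
      "node": "navsim",
      "pose": [0.931374, 0, 0, 0.364064, 4.91, 0.85, 0.05]
    }
  ],
  "frames": [
    {
      "name": "pose_as_goal",
      "pose": [0.707107, 0, 0, 0.707107, -8.12, -6, 0]
    }
  ]
}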

You can create your own scenario JSON file and run the simulation with the new JSON file using --scenario 0 --scenarioFile AbsolutePathToYourJsonFile.
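
For example, with a custom scenario saved at /home/bob/my_scenario.json (a hypothetical path):

bob@desktop:~/isaac/packages/navsim/unity$ ./navsim.x86_64 --scene warehouse --scenario 0 --scenarioFile /home/bob/my_scenario.json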

Object Detection Training Data

The object_expo scene is provided for generating object segmentation or object detection training data. The scene randomly places an object from a Unity AssetBundle in an environment with randomized materials. The application publishes the color and depth images, pixel-wise instance and class label ground truth, and labeled ground-truth bounding boxes.

../../../_images/object.png

Run the scene with the default application:

bob@desktop:~/isaac/packages/navsim/unity$ ./navsim.x86_64 --scene object_expo

Once the scene is running, use the navsim_viewer_tcp app to view the training data. To start the viewer, run the following command:

bob@desktop:~/isaac$ bazel run packages/navsim/apps:navsim_viewer_tcp

View the published data at http://localhost:3000/ in the Color Camera and Label Camera windows.

Training YOLO with NavSim

An example application for training YOLO with NavSim is provided in apps/samples/yolo/yolo_training_navsim.app.json. The default YOLO training application is configured for the UE4 simulator, but the same training pipeline can be run against NavSim. A few modifications to the apps/samples/yolo/keras-yolo3/configs/isaac_object_detection.json config file are necessary to run the training sample with the NavSim object_expo scene (a sketch of the resulting settings follows the list):

  1. Change the app_filename parameter to apps/samples/yolo/yolo_training_navsim.app.json.
  2. Change the classes_path parameter to apps/samples/yolo/keras-yolo3/model_data/object_classes_navsim.txt.
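
After these changes, the affected entries in isaac_object_detection.json would read as in the sketch below; all other keys in the file stay as they are, and the exact surrounding structure may differ:

{
  "app_filename": "apps/samples/yolo/yolo_training_navsim.app.json",
  "classes_path": "apps/samples/yolo/keras-yolo3/model_data/object_classes_navsim.txt"
}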

You can see the data used for training at http://localhost:3001/. During training, you can safely ignore warning messages about skipped loading of weights for some layers.

The log directory can be changed in apps/samples/yolo/keras-yolo3/configs/isaac_object_expo.json. To monitor training progress, run tensorboard --logdir ~/yolo3_logs.

Training with Your Own AssetBundle

The default scene uses objects from the AssetBundle in navsim_Data/StreamingAssets/AssetBundles/warehouseobjects. You can run the scene with a different AssetBundle using the command line argument --assetBundle, followed by the path to your own AssetBundle. The path can be relative to navsim_Data/StreamingAssets or absolute. For example, another AssetBundle named owenobjects ships under navsim_Data/StreamingAssets/AssetBundles, and you can generate training data using it instead with:

bob@desktop:~/isaac/packages/navsim/unity$ ./navsim.x86_64 --scene object_expo --assetBundle AssetBundles/owenobjects

The asset randomizer draws from all the Prefabs in the AssetBundle and uses the name of each Prefab as its class label. To train with your own models, follow the procedures in the Unity documentation to create an AssetBundle with all the Prefabs to train on, and make sure their names match the desired class labels. You can optionally provide a JSON file using the command line argument --labels to override the labels for certain Prefabs. For example, create a labels.json file in the packages/navsim/unity/navsim_Data/StreamingAssets folder with the following content:

{
  "PalletWood01": "wood_pallet",
  "PalletWood02": "wood_pallet",
  "PalletPlastic": "plastic_pallet"
}

Then run the object_expo scene:

bob@desktop:~/isaac/packages/navsim/unity$ ./navsim.x86_64 --scene object_expo --assetBundle AssetBundles/owenobjects --labels labels.json

In Sight, you can see that both PalletWood01 and PalletWood02 are labeled as wood_pallet, and PalletPlastic is labeled as plastic_pallet.

When training, make sure to update apps/samples/yolo/keras-yolo3/model_data/object_classes_navsim.txt and DetectionEncoder configuration in apps/samples/yolo/yolo_training_navsim.app.json with the correct object labels from the new AssetBundle.
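
For the pallet example above, object_classes_navsim.txt would list one class label per line, following the usual keras-yolo3 classes-file convention:

wood_pallet
plastic_pallet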

Segmentation Training Data

Use the rng_warehouse scene to generate segmentation training data. This scene applies randomization to the materials of the walls and floor. It also teleports a camera group containing a color camera and a segmentation camera, providing multiple different views of the environment.

The scene provides a ground label called “floor” with the pixel value 1. It publishes the color camera image along with a segmentation image, in which each pixel belonging to the ground class has a class label of 1 and all other pixels have a class label of 0.

Run the scene with the default application:

bob@desktop:~/isaac/packages/navsim/unity$ ./navsim.x86_64 --scene rng_warehouse

For further information on using this scene for path segmentation, refer to the section on freespace segmentation.

../../../_images/navsim_segmentation.png