Object Detection

This article explains the steps involved in running the YOLO object detection pipeline for a new set of objects using Isaac Sim and the Isaac SDK. Specifically, there are two Isaac SDK apps, one for training and one for inference, and the following sections describe the changes required before and after running each app.

Object Classes

Isaac Sim provides various assets that can be grouped into classes and used to train the object detection pipeline.

The various class definitions are added to the random_meshes list in the apps/samples/yolo/bridge_config/carter_rgb_detection.config.json file. Within the mesh_component, the assets for each class can be specified either as an explicit list (mesh_list) or as a folder (mesh_dirs).

An example that defines a Barel class in the apps/samples/yolo/bridge_config/carter_rgb_detection.config.json file is provided below.

    "class": "Barel",
    "mesh_component": {
    "mesh_list": ["/Game/Warehouse/Meshes/Props/SM_BarelPlastic_A_01",
    "should_randomize": true,
    "randomization_duration_interval": [0.5, 2.0]
    "movement_component": {
    "should_randomize": true,
    "x_range": [-25.8, 4.8],
    "y_range": [60.2, 65.2],
    "z_range": [0.0, 0.1],
    "should_teleport": true,
    "check_collision": true,
    "randomization_duration_interval": [0.25, 0.5]
    "should_randomize": true,
    "random_cone_half_angle": 45.0,
    "yaw_range": [-180, 180],
    "randomization_duration_interval": [0.125, 0.25]
    "should_randomize": false

Users can also download assets from the Unreal Engine Marketplace and use them to define their own classes.

Run Isaac Sim

Start Isaac Sim with the following command:

./Engine/Binaries/Linux/UE4Editor IsaacSimProject CarterWarehouse_P -vulkan --isaac_sim_config_json="IsaacSDK/apps/samples/yolo/bridge_config/carter_rgb_detection.json"

Please ensure that absolute paths are filled in for the config and graph files mentioned in apps/samples/yolo/bridge_config/carter_rgb_detection.json.

Press Play after Isaac Sim loads.


Follow the procedures in the Object Detection Pipeline section of the Isaac SDK Developer Guide for detailed information about the pipeline and for running the sample training and inference apps. This article discusses the changes that need to be made in order to run with a new set of object classes.

Run SDK Training App

Follow the steps below before and after running the YOLO training app.

Updates before training

Modify the following sections in the config file:

  • Class names: List the new class names in apps/samples/yolo/keras-yolo3/model_data/object_classes_ue4.txt. The class names should match all the classes specified in the class fields under the random_meshes list of apps/samples/yolo/bridge_config/carter_rgb_detection.config.json.

  • Bridge Config: Update the classes section in bounding_box_settings of apps/samples/yolo/bridge_config/carter_rgb_detection.config.json

    "bounding_box_settings": {
        "all_bounding_boxes": false,
        "occlusion_check": true,
        "occlusion_threshold" : 0.8,
        "classes": [
            "name": "Barel"
            "name": "Bottle"
            "name": "CardBox"
            "name": "character"
  • Training App: Update the class_names field in the detection_encoder section of apps/samples/yolo/yolo_training_ue4.app.json

    "detection_encoder": {
      "isaac.ml.DetectionEncoder": {
        "class_names": ["Barel", "Bottle", "CardBox", "character"],
        "area_threshold": 100
  • After that, run the training app: bazel run //apps/samples/yolo:yolo_training_ue4
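The class names must stay consistent across all three places above: object_classes_ue4.txt, the classes list in bounding_box_settings, and the DetectionEncoder class_names (same names, same order). A minimal sketch of such a consistency check is shown below; this is a hypothetical helper, not part of Isaac SDK, and it assumes the file and config structures shown above.

```python
# Hypothetical consistency check (not an Isaac SDK tool): verify that the
# class names used for training agree across the three configuration points.
def classes_in_sync(classes_txt, bridge_class_entries, encoder_class_names):
    """classes_txt: contents of object_classes_ue4.txt (one name per line);
    bridge_class_entries: the "classes" list from bounding_box_settings;
    encoder_class_names: "class_names" from the DetectionEncoder config."""
    txt_names = [line.strip() for line in classes_txt.splitlines() if line.strip()]
    bridge_names = [entry["name"] for entry in bridge_class_entries]
    return txt_names == bridge_names == encoder_class_names

print(classes_in_sync(
    "Barel\nBottle\nCardBox\ncharacter\n",
    [{"name": "Barel"}, {"name": "Bottle"},
     {"name": "CardBox"}, {"name": "character"}],
    ["Barel", "Bottle", "CardBox", "character"],
))  # prints True
```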

Updates after training

The next step is to export the weights from the trained model and convert them into Darknet format. For detailed instructions on updating the network file, refer to the Isaac SDK documentation.

  • Export Model:
python3 apps/samples/yolo/keras-yolo3/export_model.py [weights_path] [anchors_path] [classes_path] [output_folder]
  • Convert to Darknet format:
python3 keras_to_darknet.py -c [config_file] -i [keras_weights_file] -o [out_file] -n [num_classes]

Run SDK Inference App

Follow the steps below before running the inference app.

Updates before inference

  • Inference App: Update the color_filename field under feeder in apps/samples/yolo/yolo_tensorrt.app.json with the path to the image to test
"feeder": {
  "image_feeder": {
    "color_filename": "PATH_TO_IMAGE",
    "tick_period": "1Hz",
    "focal_length": [100, 100],
    "optical_center": [500, 500],
    "distortion_coefficients": [0.01, 0.01, 0.01, 0.01, 0.01]
  • Update the list of object classes referenced by labels_file_path under detection_decoder of apps/samples/yolo/yolo_tensorrt.app.json
"detection_decoder": {
  "isaac.ml.DetectionDecoder": {
    "labels_file_path" : "PATH_TO_OBJECT_CLASSES",
    "nms_threshold" : 0.6,
    "confidence_threshold" : 0.6
  • Update the weights_file_path, config_file_path, and num_classes fields under yolo_tensorrt_inference of apps/samples/yolo/yolo_tensorrt.app.json
"yolo_tensorrt_inference": {
  "isaac.yolo.YoloTensorRTInference": {
    "yolo_config_json" : {
      "yolo_dimensions": [416, 416],
      "weights_file_path": "PATH_TO_WEIGHT_FILE(.weights)",
      "config_file_path": "PATH_TO_CONFIG_FILE(.cfg)",
      "tensorrt_folder_path": "/tmp/",
      "num_classes": 6,
      "network_type": "yolov3"
  • Then run the inference app on your host machine as follows:
bazel build ...
bazel run apps/samples/yolo/yolo_tensorrt_inference
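At inference time, num_classes must agree with the labels file referenced by labels_file_path, which is assumed here to list one class name per line, in training order. A hypothetical sketch of such a pre-flight check, not part of Isaac SDK:

```python
# Hypothetical pre-flight check (not an Isaac SDK tool): num_classes in the
# YOLO inference config should equal the number of names in the labels file.
# In practice, labels_text would be read from the labels_file_path file.
def num_classes_matches(labels_text, num_classes):
    labels = [line.strip() for line in labels_text.splitlines() if line.strip()]
    return len(labels) == num_classes

print(num_classes_matches("Barel\nBottle\nCardBox\ncharacter\n", 4))  # prints True
```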