Pod Replay

The Pod Replay app runs Nvblox on a POD file.

Run the app with:


docker run -it --gpus all --network=host \
    -v <PATH_TO_YOUR_POD>:<PATH_TO_YOUR_POD> \
    nvcr.io/nvstaging/isaac-amr/nvblox_pod_replay_app:<IMAGE_TAG> \
    --pod-path <PATH_TO_YOUR_POD> \
    --camera <CAMERA_TYPE> \
    --depth <DEPTH_TYPE> \
    --human-detection <WITH_HUMANS> \
    --model-file-path <PATH_TO_ONNX_FILE> \
    --engine-file-path <PATH_TO_ENGINE_FILE> \
    --input-binding-names <INPUT_BINDING_NAMES> \
    --output-binding-names <OUTPUT_BINDING_NAMES>

  • <PATH_TO_YOUR_POD> is the path to your POD file.

  • <IMAGE_TAG> refers to the Docker image tag (e.g. master-aarch64).

  • <CAMERA_TYPE> can be either realsense (for RealSense) or hawk (for Hawk).

  • <DEPTH_TYPE> can be omitted for realsense, or set to either ess or sgm for hawk.

  • <WITH_HUMANS> is a boolean that enables or disables human detection.

  • <PATH_TO_ONNX_FILE> is the path to the ONNX file if human detection is enabled (optional).

  • <PATH_TO_ENGINE_FILE> is the path to the engine file if human detection is enabled (optional).

  • <INPUT_BINDING_NAMES> is a string of input binding names if human detection is enabled (optional).

  • <OUTPUT_BINDING_NAMES> is a string of output binding names if human detection is enabled (optional).

If the segmentation pipeline is enabled, then all of the optional parameters above must be set.
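As a concrete illustration, a Hawk recording replayed with ESS depth and human detection enabled might be launched as below. All paths, the model filenames, and the binding names are hypothetical example values, not defaults shipped with the app; substitute your own.

```shell
# All values below are hypothetical examples -- replace them with your own.
POD=/data/recordings/warehouse.pod          # your POD file
TAG=master-aarch64                          # Docker image tag
ONNX=/data/models/segmentation.onnx         # ONNX weights (hypothetical name)
ENGINE=/data/models/segmentation.engine     # engine file to create or reuse

# Build the invocation as a string so it can be inspected before running;
# actually executing it requires a GPU host with the image pulled.
cmd="docker run -it --gpus all --network=host \
  -v ${POD}:${POD} -v /data/models:/data/models \
  nvcr.io/nvstaging/isaac-amr/nvblox_pod_replay_app:${TAG} \
  --pod-path ${POD} --camera hawk --depth ess \
  --human-detection true \
  --model-file-path ${ONNX} --engine-file-path ${ENGINE} \
  --input-binding-names input_1 --output-binding-names output_1"
echo "$cmd"
```

Note that the POD file (and any model directory) must be mounted into the container with -v, since the app reads them from inside the container's filesystem.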

NOTE: The app converts an ONNX file to an engine file at startup. This takes around 30 s on a powerful GPU. The resulting engine file is stored at /tmp/ess_{HASH}.engine and is reloaded the next time the app runs. In a dockerized environment this caching does not work, because the filesystem does not persist between runs. If the ess_{HASH}.engine file already exists on your host system, you can skip the conversion step by mounting it to /tmp/ess_{HASH}.engine, for example by adding --mount type=bind,source=/tmp/ess_{HASH}.engine,target=/tmp/ess_{HASH}.engine to the docker run command above.
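For example, assuming a previous run left a cached engine file on the host, the bind-mount argument could be constructed as below. The hash in the filename is made up for illustration; check /tmp on your host for the actual name.

```shell
# Hypothetical cached engine file from an earlier run; the hash below is
# illustrative -- look in /tmp on your host for the real filename.
ENGINE=/tmp/ess_1a2b3c.engine
MOUNT_ARG="--mount type=bind,source=${ENGINE},target=${ENGINE}"

# Append $MOUNT_ARG to the docker run command above, e.g.:
echo "docker run -it --gpus all --network=host ${MOUNT_ARG} ..."
```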

The semantic segmentation app likewise converts its ONNX file to an engine file at startup if no engine file exists at the specified path. If one does exist, no conversion takes place and the engine is used directly for inference. This also works in a dockerized container, provided the local path is mapped into the container correctly.

Replace PATH_TO_DIFFERENT_FILES with the paths to your corresponding files. Download the required ONNX weights file from https://drive.google.com/drive/folders/1Yiq7B1nnqmEzzGojl203cz3_qWl4gCzh?usp=sharing. This weights file corresponds to a shufflesemseg network with robotics fine-tuned weights, exported for several input dimensions. The first time the pipeline is launched, the engine file is automatically created from the ONNX file; on subsequent runs the engine file is used directly for inference. The network accepts a list of different input image dimensions, which can be read from the name of the weights file. We recommend 640x480, but since resizing is done in the inference subgraph you can feed images of any dimension; the output will match the dimension consumed by the network.

To visualize the reconstruction, open localhost:3000 in your browser. You should be able to visualize the color, depth, semantics (if enabled), 3D mesh, and 2D ESDF slice.

For development, it is useful to run apps without generating Docker images:


dazel run //extensions/nvblox/apps/pod_replay:nvblox_pod_replay_app -- \
    --pod-path <PATH_TO_YOUR_POD> \
    --camera <CAMERA_TYPE> \
    --depth <DEPTH_TYPE> \
    --model-file-path <PATH_TO_ONNX_FILE> \
    --engine-file-path <PATH_TO_ENGINE_FILE> \
    --input-binding-names <INPUT_BINDING_NAMES> \
    --output-binding-names <OUTPUT_BINDING_NAMES>

© Copyright 2018-2023, NVIDIA Corporation. Last updated on Oct 23, 2023.