Step #1: Check Assets

The following examples will require using the system console of the GPU host. Click on the System Console link in the left menu of this page to open a web-based SSH session.

[Screenshot: clara-holoscan-02.png]

Let’s first view the assets needed for this lab. Within the Desktop VNC environment, click “Activities” in the top-left corner and search for “Terminal”.

[Screenshot: clara-holoscan-03.png]

In the terminal, view the contents of /data/holoscan/assets with the command tree /data/holoscan/assets. You should see the files shown below:

[Screenshot: clara-holoscan-04.png]
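For convenience, the listing command is repeated below. If the tree utility is not available in your environment (an assumption; the lab image may already include it), ls -R produces a comparable recursive listing:

    # Recursive listing of the lab assets, as used in this step.
    tree /data/holoscan/assets

    # Fallback if tree is not installed (assumption about the environment).
    ls -R /data/holoscan/assets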

The /data/holoscan/assets/launch_run_container.sh script launches the NGC runtime container where you can run the sample apps. The /data/holoscan/assets/launch_byom_run_container.sh script launches the same Docker container with modifications for bringing your own model to the sample applications. Holoscan inference applications use TensorRT models for inference. The models can be loaded either directly from existing TensorRT engine files or from ONNX models, which are converted to TensorRT engine files at runtime; the latter option can cause a delay when an application starts for the first time. Because TensorRT engine files are specific to the compute platform, this lab provides three pre-generated engines built for the T4 hardware, and the launch script uses these files so the conversion step can be skipped.
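As a point of reference, the sketch below shows how the pieces described above fit together from the terminal. The first command simply runs the provided launch script; the trtexec invocation only illustrates the kind of ONNX-to-TensorRT conversion the applications would otherwise perform at first start-up, and its file names are hypothetical placeholders rather than files shipped with this lab.

    # Start the NGC runtime container for the sample applications.
    bash /data/holoscan/assets/launch_run_container.sh

    # Illustration only: pre-generating a TensorRT engine from an ONNX model
    # (hypothetical file names; this lab already ships T4-specific engines).
    trtexec --onnx=model.onnx --saveEngine=model.engine

An engine built this way is tied to the GPU it was built on, which is why the pre-generated files in this lab target the T4 specifically.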

In the terminal, view the contents of /data/holoscan/byom with the command tree /data/holoscan/byom. You should see the files shown below:

[Screenshot: clara-holoscan-05.png]

This directory contains the files needed for the bring-your-own-model process. See the Colonoscopy Sample App Data resource on NGC for more information about these files.

To access this documentation and copy and paste commands from within the Desktop VNC environment, search for “Firefox” in “Activities” and enter “localhost” as the address.

[Screenshot: clara-holoscan-06.png]
