Clara Holoscan Deploy 0.7.4

9.8. Clara AI Hippocampus Segmentation Operator

This example is a containerized AI inference application, developed for use as one of the operators in Clara Deploy pipelines. The application is built on the base AI application container, which provides the application framework needed to deploy models trained with the Clara Train Transfer Learning Toolkit (TLT). The same execution configuration file, set of transform functions, and scanning-window inference logic are used; however, inference is performed on the NVIDIA TensorRT Inference Server.

This application, packaged as a Docker container, expects an input folder (/input by default), which can be mapped to a folder on the host volume when the container is started. This folder must contain a volume image file in the NIfTI or MetaImage format. Furthermore, the volume image must be constructed from a single series of a DICOM study, typically an axial series with the image type of ORIGINAL and PRIMARY.
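For example, a MetaImage volume could be staged as follows (the source path and file names here are hypothetical; any single volume converted from one DICOM series works):

# Stage a single volume image as the operator input;
# a MetaImage volume consists of a .mhd header plus a .raw data file.
mkdir -p input
cp /data/mri/hippocampus_sample.mhd input/
cp /data/mri/hippocampus_sample.raw input/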

This application saves the segmentation results to an output folder, /output by default, which can also be mapped to a folder on the host volume. After the application completes successfully, a segmentation volume image in MetaImage format is saved in the output folder. The name of the output file is the same as that of the input file due to certain limitations of the downstream consumer.

The example container also publishes data for the Clara Deploy Render Server to the /publish folder by default. The original volume image, segmented volume image, and metadata file, along with a render configuration file, are saved in this folder.

The NVIDIA® Clara Train Transfer Learning Toolkit (TLT) for Medical Imaging provides pre-trained models unique to medical imaging, with additional capabilities such as integration with the AI-Assisted Annotation SDK for speeding up annotation of medical images, enabling access to AI-assisted labeling.

The application uses the segmentation_mri_hippocampus model provided by the NVIDIA Clara Train TLT for hippocampus segmentation. The model was trained as a TensorFlow checkpoint and converted to the tensorflow_graphdef format using the TLT model export tool. The input tensor is of shape 96x96x96 with a single channel; the output tensor is of the same shape with three channels.

You can download the model using the following commands:

# Download NGC Catalog CLI
wget https://ngc.nvidia.com/downloads/ngccli_cat_linux.zip && \
    unzip ngccli_cat_linux.zip && \
    rm ngccli_cat_linux.zip ngc.md5 && \
    chmod u+x ngc

# Configure API key (refer to https://docs.nvidia.com/ngc/ngc-getting-started-guide/index.html#generating-api-key)
./ngc config set

# Download the model
./ngc registry model download-version nvidia/med/segmentation_mri_hippocampus:1

Note: The NGC Catalog CLI is needed to download models without the Clara Train SDK. Please follow the NGC documentation to configure the CLI API key.

Detailed model information can be found at (downloaded model folder)/docs/Readme.md.

This application also uses the same transforms library and configuration file that are used in the validation/inference pipeline during TLT model training. The key model attributes (e.g. the model name and network input dimensions) are saved in the config_inference.json file and consumed by the application at runtime.
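To see the attributes actually consumed, the configuration file can be inspected inside the container; the path below follows the directory listing later in this section, and python -m json.tool is used only for pretty-printing:

# Pretty-print the inference configuration consumed at runtime
python -m json.tool ./app_base_inference/config/config_inference.json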

9.8.4.1. NVIDIA TensorRT Inference Server (TRTIS)

This application performs inference on the NVIDIA TensorRT Inference Server (TRTIS), which provides a cloud inferencing solution optimized for NVIDIA GPUs. The server provides an inference service via an HTTP or gRPC endpoint, allowing remote clients to request inferencing for any model managed by the server.
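For example, once the server is running (see the sample script in Step 3 below), its readiness and the status of loaded models can be checked over the HTTP endpoint; the paths below are the TRTIS v1 HTTP API, with 8000 as the default HTTP port:

# Readiness probe (TRTIS v1 HTTP API)
curl -s localhost:8000/api/health/ready
# Status of the server and every loaded model, including tensor shapes
curl -s localhost:8000/api/status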

The directories in the container are shown below. The application code under /ai is from the base container, except for the files in the config directory, which are model-specific. The sdk_dist directory contains the Clara Train TLT transforms library. The medical directory contains compiled modules from Clara Train TLT, and the writers directory contains a specialized writer that saves the segmentation result to a volume image file in MetaImage format.

/ai
├── sdk_dist/
└── app_base_inference
    ├── config
    │   ├── config_render.json
    │   ├── config_inference.json
    │   ├── __init__.py
    │   └── model_config.json
    ├── public
    │   └── docs
    │       └── README.md
    ├── writers
    │   ├── __init__.py
    │   ├── classification_result_writer.py
    │   ├── mhd_writer.py
    │   └── writer.py
    ├── app.py
    ├── Dockerfile
    ├── executor.py
    ├── logging_config.json
    ├── main.py
    ├── medical
    └── requirements.txt
/input
/output
/publish

If you want to see the internals of the container and manually run the application, follow these steps:

  1. Start the container in interactive mode. See the next section on how to run the container, and replace the docker run command for the application container with the same command plus an overridden entrypoint (variables as defined in the sample script in Step 3), for example:

    docker run --name ${APP_NAME} --network ${NETWORK_NAME} -it --rm \
        -v $(pwd)/input:/input \
        -v $(pwd)/output:/output \
        -v $(pwd)/logs:/logs \
        -v $(pwd)/publish:/publish \
        -e NVIDIA_CLARA_TRTISURI \
        -e NVIDIA_CLARA_NOSYNCLOCK=TRUE \
        --entrypoint /bin/bash \
        ${APP_NAME}

  2. Once inside the container, ensure the current directory is /ai, which contains app_base_inference (see the directory listing above).

  3. Execute the following command:


    python ./app_base_inference/main.py

  4. When finished, type exit.

9.8.7.1. Prerequisites

  1. Check that the Docker image of TRTIS has been imported into the local Docker repository by listing the available images (see the pull command after this list if it is missing):

    docker images

  2. Look for the TRTIS image (nvcr.io/nvidia/tensorrtserver) with the correct tag for the release, e.g. 19.08-py3.

  3. Download both the input dataset and the trained model from the MODEL SCRIPTS section for this container on NGC, following the steps in the Setup section.
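If the image is missing, it can be pulled from NGC; the image name and tag below match the sample script in Step 3:

# Pull the TensorRT Inference Server image used by the sample script
docker pull nvcr.io/nvidia/tensorrtserver:19.08-py3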

9.8.7.2. Step 1

Change to your working directory (e.g. test_hippocampus).
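For example, using the directory name suggested above (created if it does not yet exist):

mkdir -p test_hippocampus && cd test_hippocampus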

9.8.7.3. Step 2

Create, if they do not exist, the following directories under your working directory (see the example commands after this list):

  • input containing the input image file

  • output for the segmentation output

  • publish for publishing data for the Render Server

  • logs for the log files

  • models containing models copied from the segmentation_mri_hippocampus_v1 folder
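The commands below sketch this setup; the model source path assumes the NGC download from the Setup section landed in the working directory, so adjust it as needed:

# Create the working directories
mkdir -p input output publish logs models
# Copy the downloaded model folder into ./models, which the sample script
# in Step 3 maps into the TRTIS model repository
cp -r segmentation_mri_hippocampus_v1 models/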

9.8.7.4. Step 3

In your working directory, create a shell script (e.g. run_hippocampus.sh, or another name if you prefer), copy the sample content below into it, and save it.

#!/bin/bash

# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.

# Define the name of the app (aka operator), assumed the same as the project folder name
APP_NAME="app_hippocampus"

# Define the TensorRT Inference Server Docker image, which will be used for testing
# Use either local repo or NVIDIA repo
TRTIS_IMAGE="nvcr.io/nvidia/tensorrtserver:19.08-py3"

# Launch the container with the following environment variables
# to provide runtime information.
export NVIDIA_CLARA_TRTISURI="localhost:8000"

# Define the model name for use when launching TRTIS with only the specific model
MODEL_NAME="segmentation_mri_hippocampus_v1"

# Create a Docker network so that containers can communicate on this network
NETWORK_NAME="container-demo"

# Create network
docker network create ${NETWORK_NAME}

# Run TRTIS (name: trtis), mapping ./models/${MODEL_NAME} to /models/${MODEL_NAME}
# (localhost:8000 will be used)
nvidia-docker run --name trtis --network ${NETWORK_NAME} -d --rm --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 \
    -p 8000:8000 \
    -v $(pwd)/models/${MODEL_NAME}:/models/${MODEL_NAME} ${TRTIS_IMAGE} \
    trtserver --model-store=/models

# Wait until TRTIS is ready
trtis_local_uri=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' trtis)
echo -n "Wait until TRTIS (${trtis_local_uri}) is ready..."
while [ $(curl -s ${trtis_local_uri}:8000/api/status | grep -c SERVER_READY) -eq 0 ]; do
    sleep 1
    echo -n "."
done
echo "done"

export NVIDIA_CLARA_TRTISURI="${trtis_local_uri}:8000"

# Run ${APP_NAME} container.
# Launch the app container with the following environment variables internally,
# to provide input/output path information.
docker run --name ${APP_NAME} --network ${NETWORK_NAME} -it --rm \
    -v $(pwd)/input:/input \
    -v $(pwd)/output:/output \
    -v $(pwd)/logs:/logs \
    -v $(pwd)/publish:/publish \
    -e NVIDIA_CLARA_TRTISURI \
    -e NVIDIA_CLARA_NOSYNCLOCK=TRUE \
    ${APP_NAME}

echo "${APP_NAME} is done."

# Stop TRTIS container
echo "Stopping TRTIS"
docker stop trtis > /dev/null

# Remove network
docker network remove ${NETWORK_NAME} > /dev/null


9.8.7.5. Step 4

Execute the script as shown below and wait for the application container to finish:


./run_hippocampus.sh
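If the shell reports a permission error, set the execute bit on the script first:

chmod u+x run_hippocampus.sh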


9.8.7.6. Step 5

Check for the following output files:

  1. Segmentation results in the output directory:

    • One file of the same name as your input file, with extension .mhd

    • One file of the same name, with extension .raw

  2. Published data in the publish directory:

    • Original volume image (image.mhd and image.raw)

    • Segmentation volume image (image.seg.mhd and image.seg.raw)

    • Render Server config file (config_render.json)

    • Metadata file describing the other files (config.meta)
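A quick way to verify is to list both directories; the .mhd/.raw pairs described above should be present:

# List the generated segmentation results and published data
ls -l output publish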

© Copyright 2018-2021, NVIDIA Corporation. All rights reserved. Last updated on Feb 1, 2023.