Clara Holoscan Deploy 0.8.1 EA

10.28. Clara Deploy AI COVID-19 Classification Operator

CAUTION: This is NOT for diagnostics use.

This asset requires the Clara Deploy SDK. Follow the instructions on the Clara Ansible page to install the Clara Deploy SDK.

This inference pipeline was developed by NVIDIA. It is based on a segmentation and classification model developed by NVIDIA researchers in conjunction with the NIH. The Software is for Research Use Only. The Software's recommendations should not be solely or primarily relied upon to diagnose or treat COVID-19 by a healthcare professional. This Research Use Only software has not been cleared or approved by the FDA or any other regulatory agency.

This example is a containerized AI inference application, developed for use as one of the operators in Clara Deploy pipelines. This application uses the original image from a lung CT scan and a segmented lung image, both in NIfTI or MetaImage format, to infer the presence of COVID-19. The application is built on the Clara Deploy Python base container, which provides the interfaces to the Clara Deploy SDK. Inference is performed on the NVIDIA Triton Inference Server (Triton), formerly known as TensorRT Inference Server (TRTIS).

The application, in the form of a Docker container, expects an input folder (/input by default), which needs to be mapped to a folder on the host volume when the Docker container is started. This folder must contain a volume image file of the original lung CT scan in NIfTI or MetaImage format. The volume image must have been constructed from a single series of a DICOM study, typically an axial series with the data type of original primary.

If there are multiple images in the input folder, one of them is selected in arbitrary order. Beginning with Release 0.8.1, users can use the Clara Deploy DICOM Parser or the Series Selector to select a specific series' image as input; the selection is stored in a well-known file, selected-images.json, in the output of those operators. Ideally, the selection rules are configured to select only one series so that the image used for inference is deterministic. If multiple images are selected, only an arbitrary one is used.

A second image file containing the segmented lung from the original lung CT scan must also be present in the label image folder (/label_image by default).

The application saves the classification results in a CSV file, preds_model.csv, in the output folder (/output by default), which needs to be mapped to a folder on the host volume. Two class labels are used, non-COVID and COVID, and the probability for each class is saved; each row contains the class index, the probability, and the class label, as shown in the example below.


0,0.99248177,non-COVID
1,0.0075181895,COVID

The application supports the following environment variables (a sketch showing how to set them for a local run follows the list):

  • NVIDIA_CLARA_INPUT: The root folder where the application searches for the input image file, default /input

  • NVIDIA_CLARA_LABEL_IMAGE: The folder where the whole lung segmentation image is found, default /label_image

  • NVIDIA_CLARA_OUTPUT: The folder where the application saves the classification result file, default /output

  • NVIDIA_CLARA_LOGS: The folder for application logs, default /logs

  • NVIDIA_CLARA_PUBLISHING: The folder for publishing original and result images, for Clara Render Server, default /publish

  • NVIDIA_CLARA_SERIES_SELECTION: The folder where the application searches for the selected-series JSON file, default /series_selection. This is only needed when the Series Selector is used to select the series and series image, in which case this folder must be mapped to the output of the Series Selector. When the DICOM Parser is used, the converted image files as well as the selected-images file are all present in its output folder.

  • NVIDIA_CLARA_TRTISURI: The URI (host and port) of the Triton Inference Server, default localhost:8000

  • NVIDIA_CLARA_CONFIG_INFERENCE: The inference configuration file path, default app_base_inference_v2/config/config_inference.json

  • NVIDIA_CLARA_NII_EXTENSION: The image file extension for internal conversion from .mhd to .nii, default .nii, which favors shorter execution time over disk space
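For a local test run outside of a pipeline, these variables can be exported before launching the application. This is a minimal sketch using the documented defaults; adjust the paths and the Triton address for your host:

# Minimal sketch: set the operator's environment variables for a local run.
# Values shown are the documented defaults.
export NVIDIA_CLARA_INPUT=/input
export NVIDIA_CLARA_LABEL_IMAGE=/label_image
export NVIDIA_CLARA_OUTPUT=/output
export NVIDIA_CLARA_LOGS=/logs
export NVIDIA_CLARA_PUBLISHING=/publish
export NVIDIA_CLARA_TRTISURI=localhost:8000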

The application uses the classification_covid-19_v1 model, which was developed by NIH and NVIDIA for use in the COVID-19 detection pipeline but has yet to be published on ngc.nvidia.com. The input tensor of this model is of size 192 x 192 x 64 with a single channel. The original image from the lung CT scan is cropped using the data in the lung segmentation image, so that only a single inference is needed.

10.28.6.1. NVIDIA Triton Inference Server (formerly known as TRTIS)

The application performs inference on the NVIDIA Triton Inference Server, which provides a cloud inferencing solution optimized for NVIDIA GPUs. The server provides an inference service via an HTTP or gRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server.
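As a quick connectivity check, the server's HTTP status endpoint can be queried before running the application. This sketch assumes the TRTIS 19.08 v1 status API used by the test script below, listening on localhost:8000:

# A ready server reports SERVER_READY in its status output.
curl -s localhost:8000/api/status | grep SERVER_READY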

The application source code files are in the directory app_covid-19, as shown below.

  • The ai4med directory contains the library modules from Clara Train SDK V2.0, mainly for its transforms functions.

  • The config directory contains the inference configuration file, of which only the client type and inference server settings are needed for this application.

  • The custom_libs directory contains a custom writer, which applies softmax to the inference results and then writes them to a CSV file.

  • The inferers directory contains the implementation of the simple inference client using the Triton API client library.

  • The model_loaders directory contains the implementation of the model loader.

  • The ngc and public directories contain documentation.


/app_covid-19
├── ai4med
├── app.py
├── config
│   └── config_inference.json
├── custom_libs
│   ├── custom_write_classification_result.py
│   └── __init__.py
├── Dockerfile
├── inferers
│   ├── __init__.py
│   ├── trtis_inference_ctx.py
│   ├── trtis_predictor.py
│   └── trtis_simple_inferer.py
├── logging_config.json
├── main.py
├── model_loaders
│   ├── __init__.py
│   ├── trtis_config.py
│   ├── trtis_model_loader.py
│   └── trtis_session.py
├── ngc
│   ├── metadata.json
│   └── overview.md
├── public
│   └── docs
│       └── README.md
└── requirements.txt
/input
/output

To see the internals of the container or to run the application within the container, follow these steps (a consolidated example follows the list):

  1. See the next section on how to run the container with the required environment variables and volume mapping, and start the container by replacing the docker run command with the following:

    docker run -it --entrypoint /bin/bash

  2. Once in the Docker terminal, ensure the current directory is /.

  3. Execute the following command:

    python3 ./app_covid-19/main.py

  4. When finished, type exit.
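Putting these steps together, a complete interactive session might look like the following. This is a sketch only: the image name, host paths, and Triton URI are assumptions taken from the test script in Step 3 below.

# Sketch: run the operator container with a shell instead of its normal entrypoint.
docker run -it --rm --entrypoint /bin/bash \
    -v $(pwd)/input/mhd:/input \
    -v $(pwd)/input/label_image/mhd:/label_image \
    -v $(pwd)/output:/output \
    -v $(pwd)/logs:/logs \
    -e NVIDIA_CLARA_TRTISURI=localhost:8000 \
    nvcr.io/ea-nvidia-clara/clara/ai-covid-19:0.5.0-2004.5

# Inside the container:
python3 ./app_covid-19/main.py
exit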

10.28.9.1. Prerequisites

  1. Check whether the Docker image of Triton (formerly TRTIS) has been imported into the local Docker registry with the following command; if not, it will be pulled from the NVIDIA Docker registry when the test script runs (or it can be pulled ahead of time, as shown after this list).

    docker images | grep tensorrtserver

  2. Download both the input dataset and the trained models from the MODEL SCRIPTS section for Clara Deploy AI COVID-19 Pipeline on NGC, by following the steps in the Setup section.

  3. As this operator also needs the segmentation image for the same original input image, the Clara Deploy AI Lung Segmentation operator needs to run first to generate it.
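To pull the Triton image ahead of time, use the specific version pinned by the test script in Step 3 (TRTIS_IMAGE):

# Pull the TRTIS image version used for testing.
docker pull nvcr.io/nvidia/tensorrtserver:19.08-py3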

10.28.9.2. Step 1

Switch to your working directory (e.g. test_docker).

10.28.9.3. Step 2

Create, if they do not exist, the following directories under your working directory (a one-line command to create them follows the list):

  • input containing the input image file

  • input/label_image/mhd containing the segmentation image file(s) in MHD format

  • output for the classification output

  • logs for the log files

  • models containing the folder for the model classification_covid-19_v1.
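The layout can be created in one step. The test script in Step 3 maps input/mhd as the input folder, so that subfolder is included here as an assumption:

# Create the expected directory layout under the working directory.
mkdir -p input/mhd input/label_image/mhd output logs models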

10.28.9.4. Step 3

In your working directory,

  • Create a shell script (run_docker.sh, or another name if you prefer).

  • Copy the sample content below, and change APP_NAME to the full name of this Docker image, e.g. nvcr.io/ea-nvidia-clara/clara/ai-covid-19:0.5.0-2004.5.

  • Save the file.


#!/bin/bash
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.

SCRIPT_DIR=$(dirname "$(readlink -f "$0")")
TESTDATA_DIR=$(readlink -f "${SCRIPT_DIR}"/../test-data)

# Default app name. Change to the actual name, e.g. `nvcr.io/ea-nvidia-clara/clara/ai-covid-19:0.5.0-2004.5`
APP_NAME="app_covid-19"
# Default model name, used by the default app. If blank, all available models will be loaded.
MODEL_NAME="classification_covid-19_v1"
INPUT_TYPE="mhd"

# Clara Deploy would launch the container when run in a pipeline with the following
# environment variable to provide runtime information. This is for testing locally.
export NVIDIA_CLARA_TRTISURI="localhost:8000"

# Specific version of the Triton Inference Server image used in testing
TRTIS_IMAGE="nvcr.io/nvidia/tensorrtserver:19.08-py3"

# Docker network used by the app and TRTIS Docker containers.
NETWORK_NAME="container-demo"

# Create network
docker network create ${NETWORK_NAME}

# Run TRTIS (name: trtis), mapping ./models/${MODEL_NAME} to /models/${MODEL_NAME}
# (localhost:8000 will be used)
RUN_TRITON="nvidia-docker run --name trtis --network ${NETWORK_NAME} -d --rm --shm-size=1g \
    --ulimit memlock=-1 --ulimit stack=67108864 \
    -p 8000:8000 \
    -v $(pwd)/models/${MODEL_NAME}:/models/${MODEL_NAME} ${TRTIS_IMAGE} \
    trtserver --model-store=/models"

# Run the command to start the inference server Docker
eval ${RUN_TRITON}

# Display the command
echo ${RUN_TRITON}

# Wait until TRTIS is ready
trtis_local_uri=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' trtis)
echo -n "Wait until TRTIS ${trtis_local_uri} is ready..."
while [ $(curl -s ${trtis_local_uri}:8000/api/status | grep -c SERVER_READY) -eq 0 ]; do
    sleep 1
    echo -n "."
done
echo "done"

export NVIDIA_CLARA_TRTISURI="${trtis_local_uri}:8000"

# Run ${APP_NAME} container.
# Launch the app container with the following environment variables internally
# to provide input/output path information.
docker run --name test_docker --network ${NETWORK_NAME} -it --rm \
    -v $(pwd)/input/${INPUT_TYPE}/:/input \
    -v $(pwd)/input/label_image/mhd/:/label_image \
    -v $(pwd)/output:/output \
    -v $(pwd)/logs:/logs \
    -e NVIDIA_CLARA_TRTISURI \
    -e DEBUG_VSCODE \
    -e DEBUG_VSCODE_PORT \
    -e NVIDIA_CLARA_NOSYNCLOCK=TRUE \
    ${APP_NAME}

echo "${APP_NAME} has finished."

# Stop TRTIS container
echo "Stopping Triton (TRTIS) inference server."
docker stop trtis > /dev/null

# Remove network
docker network remove ${NETWORK_NAME} > /dev/null

10.28.9.5. Step 4

Make the script executable if needed (chmod +x run_docker.sh), then execute it as shown below and wait for the application container to finish:


./run_docker.sh

10.28.9.6. Step 5

Check the classification results in the file output/preds_model.csv, which should contain entries similar to the following:


0,0.99248177,non-COVID
1,0.0075181895,COVID
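To spot-check the result from the shell, the CSV can be formatted into a quick summary (a convenience one-liner, not part of the operator):

# Print each class label with its probability from the result CSV.
awk -F',' '{ printf "%s: %s\n", $3, $2 }' output/preds_model.csv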

An End User License Agreement is included with the product. By pulling and using the Clara Deploy asset on NGC, you accept the terms and conditions of these licenses. For Clara Deploy AI COVID-19 Classification Pipeline you accept the terms and conditions that are mentioned in the license file inside the package.

Release Notes, the Getting Started Guide, and the SDK itself are available at the NVIDIA Developer forum.

For answers to any questions you may have about this release, visit the NVIDIA Devtalk forum.

© Copyright 2018-2020, NVIDIA Corporation. All rights reserved. Last updated on Feb 1, 2023.