9.28. Clara Deploy AI COVID-19 Classification Operator

CAUTION: This is NOT for diagnostics use.

This asset requires the Clara Deploy SDK. Follow the instructions on the Clara Bootstrap page to install the Clara Deploy SDK.

This example is a containerized AI inference application, developed for use as one of the operators in the Clara Deploy pipelines. This application uses the original image from a lung CT scan and a segmented lung image, both in NIfTI or MetaImage format, to infer the presence of COVID-19. The application is built on the Clara Deploy Python base container, which provides the interfaces with Clara Deploy SDK. Inference is performed on the NVIDIA Triton Inference Server (Triton), formerly known as TensorRT Inference Server (TRTIS).

The application, in the form of a Docker container, expects an input folder (/input by default), which needs to be mapped to a folder on the host volume when the Docker container is started. This folder must contain a volume image file of the original lung CT scan in the NIfTI or MetaImage format. The volume image must also be constructed from a single series of a DICOM study, typically an axial series with the data type of the original primary.

A second image file containing the segmented lung from the original lung CT scan must also be present in the label image folder (/label_image by default).

The application saves the classification results in a CSV file, preds_model.csv, in the output folder (/output by default), which needs to be mapped to a folder on the host volume. Two class labels are used, non-COVID and COVID, and the probability for each class is saved, as shown in the example below.


0,0.99248177,non-COVID
1,0.0075181895,COVID

The application uses the classification_covid-19_v1 model, which was developed by NIH and NVIDIA for use in a COVID-19 detection pipeline but has yet to be published on ngc.nvidia.com. The input tensor of this model is of size 192x192x64 with a single channel. The original image from the lung CT scan is cropped using the data in the lung segmentation image, so that only one simple inference is needed.

9.28.4.1. NVIDIA Triton Inference Server (formerly known as TRTIS)

The application performs inference on the NVIDIA Triton Inference Server, which provides a cloud inferencing solution optimized for NVIDIA GPUs. The server provides an inference service via an HTTP or gRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server.
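
For a quick readiness check against a running server, the status endpoint of the TRTIS release used later in this section can be queried over HTTP. This is a minimal sketch; localhost:8000 assumes the default port mapping used by the test script:

    # SERVER_READY in the status output indicates the server is up
    curl -s localhost:8000/api/status | grep -c SERVER_READY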

The application source code files are in the directory app_covid-19, as shown below:

  • The ai4med directory contains the library modules from Clara Train SDK V2.0, mainly for its transforms functions.

  • The config directory contains the inference configuration file, with only the client type and inference server settings needed for this application.

  • The custom_libs directory contains a custom writer which applies SoftMax to the inference results and then writes them to a CSV file.

  • The inferers directory contains the implementation of the simple inference client using the Triton API client library.

  • The model_loaders directory contains the implementation of the model loader.

  • The ngc and public directories contain documentation.


/app_covid-19
├── ai4med
├── app.py
├── config
│   └── config_inference.json
├── custom_libs
│   ├── custom_write_classification_result.py
│   └── __init__.py
├── Dockerfile
├── inferers
│   ├── __init__.py
│   ├── trtis_inference_ctx.py
│   ├── trtis_predictor.py
│   └── trtis_simple_inferer.py
├── logging_config.json
├── main.py
├── model_loaders
│   ├── __init__.py
│   ├── trtis_config.py
│   ├── trtis_model_loader.py
│   └── trtis_session.py
├── ngc
│   ├── metadata.json
│   └── overview.md
├── public
│   └── docs
│       └── README.md
└── requirements.txt
/input
/output

To see the internals of the container, or to run the application within the container, follow these steps:

  1. See the next section on how to run the container with the required environment variables and volume mapping, and start the container by replacing the docker run command with the following:

    docker run -it --entrypoint /bin/bash

  2. Once in the Docker terminal, ensure the current directory is /.

  3. Execute the following command:

    python3 ./app_covid-19/main.py

  4. When finished, type exit.
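
Putting steps 1 through 3 together, a minimal sketch of such an interactive session is shown below. The volume mappings and environment variables mirror the run_docker.sh test script later in this section, and ${APP_NAME} stands in for the full image name:

    # Override the entrypoint to get a shell inside the app container;
    # mappings and variables mirror the run_docker.sh test script
    docker run -it --rm \
        -v $(pwd)/input/mhd/:/input \
        -v $(pwd)/input/label_image/mhd/:/label_image \
        -v $(pwd)/output:/output \
        -v $(pwd)/logs:/logs \
        -e NVIDIA_CLARA_TRTISURI \
        -e NVIDIA_CLARA_NOSYNCLOCK=TRUE \
        --entrypoint /bin/bash \
        ${APP_NAME}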

9.28.7.1. Prerequisites

  1. Check if the Docker image of Triton (formerly TRTIS) has been imported into the local Docker registry with the following command; if not, it will be pulled from the NVIDIA Docker registry when the test script runs (see the pull command after this list):

    docker images | grep tensorrtserver

  2. Download both the input dataset and the trained models from the MODEL SCRIPTS section for Clara Deploy AI COVID-19 Pipeline on NGC, by following the steps in the Setup section.

  3. As this operator also needs the segmentation image for the same original input image, the Clara Deploy AI Lung Segmentation operator needs to run first to generate it.
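
To pull the Triton image explicitly ahead of time, request the specific version pinned by the test script (the tag below matches TRTIS_IMAGE in run_docker.sh):

    # Pull the TRTIS release pinned in run_docker.sh
    docker pull nvcr.io/nvidia/tensorrtserver:19.08-py3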

9.28.7.2. Step 1

Switch to your working directory (e.g. test_docker).

9.28.7.3. Step 2

Create, if they do not exist, the following directories under your working directory:

  • input containing the input image file

  • input/label_image/mhd containing the segmentation image file(s) in MHD format

  • output for the segmentation output

  • logs for the log files

  • models containing the folder for the model classification_covid-19_v1.
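
A one-line sketch that creates this layout, assuming MHD input to match the INPUT_TYPE default in the test script below:

    mkdir -p input/mhd input/label_image/mhd output logs models/classification_covid-19_v1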

9.28.7.4. Step 3

In your working directory,

  • Create a shell script (run_docker.sh, or another name if you prefer).

  • Copy the sample content below, and change APP_NAME to the full name of this Docker image, e.g. nvcr.io/ea-nvidia-clara/clara/ai-covid-19:0.5.0-2004.5.

  • Save the file.


#!/bin/bash

# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.

SCRIPT_DIR=$(dirname "$(readlink -f "$0")")
TESTDATA_DIR=$(readlink -f "${SCRIPT_DIR}"/../test-data)

# Default app name. Change to the actual name, e.g. `nvcr.io/ea-nvidia-clara/clara/ai-covid-19:0.5.0-2004.5`
APP_NAME="app_covid-19"
# Default model name, used by the default app. If blank, all available models will be loaded.
MODEL_NAME="classification_covid-19_v1"
INPUT_TYPE="mhd"

# Clara Deploy would launch the container when run in a pipeline with the following
# environment variable to provide runtime information. This is for testing locally.
export NVIDIA_CLARA_TRTISURI="localhost:8000"

# Specific version of the Triton Inference Server image used in testing
TRTIS_IMAGE="nvcr.io/nvidia/tensorrtserver:19.08-py3"

# Docker network used by the app and TRTIS Docker containers.
NETWORK_NAME="container-demo"

# Create network
docker network create ${NETWORK_NAME}

# Run TRTIS (name: trtis), mapping ./models/${MODEL_NAME} to /models/${MODEL_NAME}
# (localhost:8000 will be used)
RUN_TRITON="nvidia-docker run --name trtis --network ${NETWORK_NAME} -d --rm --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 \
    -p 8000:8000 \
    -v $(pwd)/models/${MODEL_NAME}:/models/${MODEL_NAME} ${TRTIS_IMAGE} \
    trtserver --model-store=/models"

# Run the command to start the inference server Docker
eval ${RUN_TRITON}

# Display the command
echo ${RUN_TRITON}

# Wait until TRTIS is ready
trtis_local_uri=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' trtis)
echo -n "Wait until TRTIS ${trtis_local_uri} is ready..."
while [ $(curl -s ${trtis_local_uri}:8000/api/status | grep -c SERVER_READY) -eq 0 ]; do
    sleep 1
    echo -n "."
done
echo "done"

export NVIDIA_CLARA_TRTISURI="${trtis_local_uri}:8000"

# Run ${APP_NAME} container.
# Launch the app container with the following environment variables internally
# to provide input/output path information.
docker run --name test_docker --network ${NETWORK_NAME} -it --rm \
    -v $(pwd)/input/${INPUT_TYPE}/:/input \
    -v $(pwd)/input/label_image/mhd/:/label_image \
    -v $(pwd)/output:/output \
    -v $(pwd)/logs:/logs \
    -e NVIDIA_CLARA_TRTISURI \
    -e DEBUG_VSCODE \
    -e DEBUG_VSCODE_PORT \
    -e NVIDIA_CLARA_NOSYNCLOCK=TRUE \
    ${APP_NAME}

echo "${APP_NAME} has finished."

# Stop TRTIS container
echo "Stopping Triton (TRTIS) inference server."
docker stop trtis > /dev/null

# Remove network
docker network remove ${NETWORK_NAME} > /dev/null


9.28.7.5. Step 4

Execute the script as shown below and wait for the application container to finish:


./run_docker.sh
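
If the shell reports a permission error, mark the script executable and rerun it:

    chmod +x run_docker.sh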


9.28.7.6. Step 5

Check the classification results in the file output/preds_model.csv, which should contain entries similar to the following:


0,0.99248177,non-COVID
1,0.0075181895,COVID
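
As a quick sanity check, the predicted class is the row with the highest probability. A minimal sketch using standard shell tools, assuming the three-column index,probability,label format shown above:

    # Sort rows by the probability column (descending) and print the top label
    sort -t, -k2 -gr output/preds_model.csv | head -n 1 | cut -d, -f3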


An End User License Agreement is included with the product. By pulling and using the Clara Deploy asset on NGC, you accept the terms and conditions of these licenses.

Release Notes, the Getting Started Guide, and the SDK itself are available at the NVIDIA Developer forum.

For answers to any questions you may have about this release, visit the NVIDIA Devtalk forum.
