10.14. Chest X-ray Classification Operator
This example is a containerized AI inference application, developed for use as one of the operators in the Clara Deploy pipelines. The application is built on the base AI application container, which provides the application framework to deploy Clara Train TLT-trained models. The same execution configuration file, set of transform functions, and simple inference logic are used; however, inference is performed on the TensorRT Inference Server.
This application, in the form of a Docker container, expects an input folder (/input by default), which can be mapped to the host volume when the Docker container is started. This folder must contain a 16-bit PNG image file representing a chest x-ray (CXR).
After an image is classified, the operator saves the output as a new image with the classification labels burnt in on top of the image. The application saves the classification results to an output folder (/output by default).
The model supports 15 categories:
Nodule
Mass
Distortion of Pulmonary Architecture
Pleural Based Mass
Granuloma
Fluid in Pleural Space
Right Hilar Abnormality
Left Hilar Abnormality
Major Atelectasis
Infiltrate
Scarring
Pleural Fibrosis
Bone/Soft Tissue Lesion
Cardiac Abnormality
COPD
The top three class categories, with probabilities, are burnt in to the upper-left corner of the output image. If a category's probability is 0.5 or higher, it is written in red; otherwise, it is written in yellow.
The name of each output file has the pattern output-<original file name>.png. The operator also outputs a CSV file (output-<original file name>.csv) that includes the input file path and the top three classifications with probabilities (e.g., Granuloma:0.68,Nodule:0.22,COPD:0.02).
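For illustration, the output-writing logic described above can be sketched in Python as follows. This is a minimal sketch, not the operator's actual writer code: the write_outputs function, its arguments, and the 16-bit-to-8-bit scaling are assumptions, while the 15 labels, the 0.5 color threshold, and the output file patterns come from this section.
# Minimal sketch of the output writing described above; NOT the operator's
# actual writer code. write_outputs and its arguments are illustrative.
import csv
import os

import numpy as np
from PIL import Image, ImageDraw

LABELS = [
    "Nodule", "Mass", "Distortion of Pulmonary Architecture",
    "Pleural Based Mass", "Granuloma", "Fluid in Pleural Space",
    "Right Hilar Abnormality", "Left Hilar Abnormality", "Major Atelectasis",
    "Infiltrate", "Scarring", "Pleural Fibrosis", "Bone/Soft Tissue Lesion",
    "Cardiac Abnormality", "COPD",
]

def write_outputs(image_path, probs, output_dir="/output"):
    # Pick the indices of the three highest-probability categories.
    top3 = np.argsort(probs)[::-1][:3]
    # Convert the 16-bit grayscale PNG to 8-bit RGB so labels can be drawn.
    raw = np.asarray(Image.open(image_path), dtype=np.float32)
    raw = (255.0 * raw / max(raw.max(), 1.0)).astype(np.uint8)
    img = Image.fromarray(raw).convert("RGB")
    draw = ImageDraw.Draw(img)
    for i, idx in enumerate(top3):
        # Red for probabilities of 0.5 or higher, yellow otherwise.
        color = "red" if probs[idx] >= 0.5 else "yellow"
        draw.text((10, 10 + 15 * i), f"{LABELS[idx]}:{probs[idx]:.2f}", fill=color)
    name = os.path.splitext(os.path.basename(image_path))[0]
    img.save(os.path.join(output_dir, f"output-{name}.png"))
    # CSV row: input file path followed by the top three label:probability pairs.
    with open(os.path.join(output_dir, f"output-{name}.csv"), "w", newline="") as f:
        csv.writer(f).writerow(
            [image_path] + [f"{LABELS[j]}:{probs[j]:.2f}" for j in top3]
        )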
The application uses the classification_chestxray_v1 model, which uses the tensorflow_graphdef platform. The input tensor has shape 256x256x3 (the single-channel x-ray is replicated across three channels), and the output tensor has shape 15, one probability per category.
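As an illustration of these shapes, input preparation might look like the sketch below. The actual transform chain is defined in config_inference.json, so the resizing filter, normalization, and channel replication shown here are assumptions, not the model's exact preprocessing.
# Illustrative sketch of preparing the 256x256x3 input tensor; the real
# transform chain lives in config_inference.json, so treat the scaling and
# channel handling here as assumptions.
import numpy as np
from PIL import Image

def prepare_input(png_path):
    # Load the 16-bit grayscale PNG and promote to 32-bit ints for resizing.
    img = Image.open(png_path).convert("I")
    img = img.resize((256, 256), Image.BILINEAR)
    arr = np.asarray(img, dtype=np.float32) / 65535.0  # assumed normalization
    return np.stack([arr] * 3, axis=-1)                # replicate to 256x256x3

# The server then returns a vector of shape (15,), one probability per category.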
The NVIDIA® Clara Train Transfer Learning Toolkit (TLT) for Medical Imaging provides pre-trained models unique to medical imaging, with additional capabilities such as integration with the AI-Assisted Annotation SDK, which speeds up annotation of medical images and enables AI-assisted labeling.
The application uses the classification_chestxray_v1 model provided by the NVIDIA Clara Train TLT for chest x-ray classification, which is converted from a TensorFlow Checkpoint model to tensorflow_graphdef using the TLT model export tool.
You can download the model using the following commands:
# Download NGC Catalog CLI
wget https://ngc.nvidia.com/downloads/ngccli_cat_linux.zip && unzip ngccli_cat_linux.zip && rm ngccli_cat_linux.zip ngc.md5 && chmod u+x ngc
# Configure API key (Refer to https://docs.nvidia.com/ngc/ngc-getting-started-guide/index.html#generating-api-key)
./ngc config set
# Download the model
./ngc registry model download-version nvidia/med/classification_chestxray:1
Note: The NGC Catalog CLI is needed to download models without the Clara Train SDK. Please follow the NGC documentation to configure the CLI API key.
Detailed model information can be found in (downloaded model folder)/docs/Readme.md.
This application also uses the same transforms library and configuration file as the validation/inference pipeline during TLT model training. The key model attributes (e.g., the model name and network input dimensions) are saved in the config_inference.json file and consumed by the application at runtime.
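A minimal sketch of consuming such a file at runtime is shown below; the key names used here are hypothetical, since the actual schema of config_inference.json is defined by the TLT tooling.
# Hypothetical sketch of reading model attributes from config_inference.json;
# the "name" and "input_dims" keys are assumptions, not the actual schema.
import json

with open("config/config_inference.json") as f:
    cfg = json.load(f)

model_name = cfg.get("name", "classification_chestxray_v1")  # assumed key
input_dims = cfg.get("input_dims", [256, 256, 3])            # assumed key
print(f"Model: {model_name}, input dims: {input_dims}")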
10.14.4.1. NVIDIA TensorRT Inference Server (TRTIS)
This application performs inference on the NVIDIA TensorRT Inference Server (TRTIS), which provides a cloud inferencing solution optimized for NVIDIA GPUs. The server provides an inference service via an HTTP or gRPC endpoint, allowing remote clients to request inferencing for any model managed by the server.
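For example, a client can poll the server's HTTP status endpoint until it reports ready, mirroring the curl loop in the run script later in this section. The sketch below uses only the /api/status endpoint that the script itself relies on.
# Sketch of polling the TRTIS HTTP status endpoint until the server is ready,
# mirroring the curl/grep loop in the run script below.
import time

import requests

def wait_for_trtis(uri="localhost:8000", timeout=60):
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            status = requests.get(f"http://{uri}/api/status", timeout=2).text
            if "SERVER_READY" in status:
                return True
        except requests.ConnectionError:
            pass  # server not accepting connections yet
        time.sleep(1)
    return False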
The directories in the container are shown below. The application code under /app is from the base container, except for the files in the config directory, which are model specific. The sdk_dist directory contains the Clara Train TLT transforms library, the medical directory contains compiled modules from Clara Train TLT, and the writers directory contains a specialized writer that saves classification results to a PNG image file.
/
├── app_base_inference_v2
├── ai4med
├── config
│ ├── config_render.json
│ ├── config_inference.json
│ └── __init__.py
├── inferers
├── model_loaders
├── ngc
├── public
│ └── docs
│ └── README.md
├── utils
├── writers
│ ├── __init__.py
│ ├── classification_result_writer.py
│ ├── mhd_writer.py
│ └── writer.py
├── app.py
├── Dockerfile
├── executor.py
├── logging_config.json
├── main.py
└── requirements.txt
/input
/output
/publish
/logs
If you want to see the internals of the container and manually run the application, follow these steps:
1. Start the container in interactive mode. See the next section on how to run the container, and replace the docker run command with the following:
docker run --entrypoint /bin/bash
2. Once in the Docker terminal, ensure the current directory is /app.
3. Execute the following command:
python ./app_base_inference_v2/main.py
4. When finished, type exit.
10.14.7.1. Prerequisites
Ensure the Docker image of TRTIS has been imported into the local Docker repository with the following command:
docker images
Look for the image name TRTIS and the correct tag for the release (e.g., 19.08-py3).
Download both the input dataset and the trained model from the MODEL SCRIPTS section for this container on NGC, following the steps in the Setup section.
10.14.7.2. Step 1
Change to your working directory (e.g., test_chestxray).
10.14.7.3. Step 2
Create, if they do not exist, the following directories under your working directory (a convenience sketch follows this list):
input, containing the input image file
output, for the classification output
publish, for publishing data for the Render Server
logs, for the log files
models, for the model repository; copy the contents of the classification_chestxray_v1 folder into this model repository folder
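The following convenience sketch creates these directories and copies the downloaded model into the model repository. It assumes it is run from the working directory, with the downloaded classification_chestxray_v1 folder alongside it.
# Convenience sketch: create the working directories listed above and copy
# the downloaded model into the model repository. Assumes it is run from the
# working directory, with classification_chestxray_v1 alongside it.
import os
import shutil

for d in ("input", "output", "publish", "logs", "models"):
    os.makedirs(d, exist_ok=True)

src = "classification_chestxray_v1"       # downloaded model folder
dst = os.path.join("models", src)
if not os.path.isdir(dst):
    shutil.copytree(src, dst)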
10.14.7.4. Step 3
In your working directory, create a shell script (e.g., run_chest.sh, or another name if you prefer), copy the sample content below into it, and save it.
#!/bin/bash
# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
# Define the name of the app (aka operator); assumed the same as the project folder name
APP_NAME="app_chestxray"
# Define the TensorRT Inference Server Docker image, which will be used for testing
# Use either local repo or NVIDIA repo
TRTIS_IMAGE="nvcr.io/nvidia/tensorrtserver:19.08-py3"
# Launch the container with the following environment variables
# to provide runtime information
export NVIDIA_CLARA_TRTISURI="localhost:8000"
# Define the model name for use when launching TRTIS with only the specific model
MODEL_NAME="classification_chestxray_v1"
# Create a Docker network so that containers can communicate on this network
NETWORK_NAME="container-demo"
# Create network
docker network create ${NETWORK_NAME}
# Run TRTIS (name: trtis), mapping ./models/${MODEL_NAME} to /models/${MODEL_NAME}
# (localhost:8000 will be used)
nvidia-docker run --name trtis --network ${NETWORK_NAME} -d --rm --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 \
-p 8000:8000 \
-v $(pwd)/models/${MODEL_NAME}:/models/${MODEL_NAME} ${TRTIS_IMAGE} \
trtserver --model-store=/models
# Wait until TRTIS is ready
trtis_local_uri=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' trtis)
echo -n "Wait until TRTIS${trtis_local_uri}is ready..."
while [ $(curl -s ${trtis_local_uri}:8000/api/status | grep -c SERVER_READY) -eq 0 ]; do
sleep 1
echo -n "."
done
echo "done"
export NVIDIA_CLARA_TRTISURI="${trtis_local_uri}:8000"
# Run ${APP_NAME} container
# Launch the app container with the following environment variables internally,
# to provide input/output path information
docker run --name ${APP_NAME} --network ${NETWORK_NAME} -it --rm \
-v $(pwd)/input:/input \
-v $(pwd)/output:/output \
-v $(pwd)/logs:/logs \
-v $(pwd)/publish:/publish \
-e NVIDIA_CLARA_TRTISURI \
-e NVIDIA_CLARA_NOSYNCLOCK=TRUE \
${APP_NAME}
echo "${APP_NAME}is done."
# Stop TRTIS container
echo "Stopping TRTIS"
docker stop trtis > /dev/null
# Remove network
docker network remove ${NETWORK_NAME} > /dev/null
10.14.7.5. Step 4
Execute the script as shown below and wait for the application container to finish:
./run_chest.sh
10.14.7.6. Step 5
Check for the following classification results in the output directory:
output-<input file name>.csv
output-<input file name>.png
10.14.7.7. Step 6
To visualize the classification results, or the rendering on the Clara Dashboard, please refer to the sections on visualization.