10.10. Malaria Microscopy Classification Operator

This operator performs classification of microscopy images and assigns a label of either “parasitized” or “uninfected” to each image. It is deployed as a containerized AI inference application for use as one of the operators in Clara Deploy pipelines.

This operator, in the form of a Docker container, expects an input folder (/input by default), which can be mapped to a host folder when the Docker container is started. Expected in this folder is a set of PNG images representing the microscope slides. Irrespective of the original input image size, each image is resized to 64x64 at inference time.
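
As a rough illustration of this resizing step (a minimal Python sketch using Pillow, not the operator's actual code; the folder and target size are taken from the description above):

from pathlib import Path

from PIL import Image  # Pillow

INPUT_DIR = Path("/input")  # default input folder

for png_path in sorted(INPUT_DIR.glob("*.png")):
    image = Image.open(png_path).convert("RGB")
    resized = image.resize((64, 64))  # resized irrespective of the original size
    print(png_path.name, image.size, "->", resized.size)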

After an image is classified, the operator saves the output as a new image with the classification label burnt into it.

This application saves output images to an output folder (/output by default). If the class category of a specific image is “parasitized”, the operator burns in the letter “T” to the upper left corner of the output image; otherwise, the letter “F” is burnt in.

The name of each output file has the pattern output-<image_file_index>.png, where the file index can range from 1 to the number of input files provided to the operator.
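
The burn-in and naming behavior can be sketched as follows (a hypothetical Pillow snippet; the font, text position, and color are assumptions, not the operator's actual implementation):

from PIL import Image, ImageDraw

def save_labeled(image: Image.Image, parasitized: bool, index: int, out_dir: str = "/output") -> None:
    """Burn 'T' (parasitized) or 'F' (uninfected) into the upper left corner and save."""
    label = "T" if parasitized else "F"
    draw = ImageDraw.Draw(image)
    draw.text((2, 2), label, fill="red")  # upper left corner; default font (assumed)
    image.save(f"{out_dir}/output-{index}.png")  # output-<image_file_index>.png, 1-based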

The network architecture used to train this model is based on the 2015 paper “Deep Residual Learning for Image Recognition” by He et al.

The dataset used to train this model is available on the NIH website. It contains PNG images of Giemsa-stained thin blood-smear slides acquired with a light microscope. The images in this dataset were preprocessed (before being used in Clara Deploy) with a level-set based algorithm to detect and segment the red blood cells.

The dataset contains a total of 27,558 cell images, with equal instances of parasitized and uninfected cells. The resultant model is named classification_malaria_v1. The input tensor is of shape 64x64x3, and the output is of shape 2x1.
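
The tensor shapes can be illustrated with a small NumPy sketch (the class order below is an assumption for illustration):

import numpy as np

CLASSES = ["parasitized", "uninfected"]  # assumed order

input_tensor = np.zeros((64, 64, 3), dtype=np.float32)  # one resized RGB image
output = np.array([[0.9], [0.1]], dtype=np.float32)     # shape 2x1, e.g. class scores

predicted = CLASSES[int(np.argmax(output))]
print(predicted)  # -> "parasitized" for these example scores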

10.10.4.1. NVIDIA TensorRT Inference Server (TRTIS)

This application performs inference on the NVIDIA TensorRT Inference Server (TRTIS), which provides a cloud inferencing solution optimized for NVIDIA GPUs. The server provides an inference service via an HTTP or gRPC endpoint, allowing remote clients to request inferencing for any model managed by the server.
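
For example, the readiness check performed by the run script in Step 3 below can be reproduced in Python (a minimal sketch using the requests library against the TRTIS v1 HTTP status endpoint):

import time

import requests

TRTIS_URI = "localhost:8000"  # matches NVIDIA_CLARA_TRTISURI in the run script

while "SERVER_READY" not in requests.get(f"http://{TRTIS_URI}/api/status").text:
    time.sleep(1)
print("TRTIS is ready")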

The directories in the container are shown below.

app_malaria/
├── Dockerfile
├── main.py
├── Pipfile
└── requirements.txt

If you want to see the internals of the container and manually run the application, follow these steps:

  1. Start the container in interactive mode. See the next section on how to run the container, and replace the docker run command with the following:

    docker run --entrypoint /bin/bash

  2. Once in the Docker terminal, ensure the current directory is /app.

  3. Execute the following command:

    python ./main.py

  4. When finished, type exit.

10.10.7.1. Prerequisites

  1. Ensure the Docker image of TRTIS has been imported into the local Docker repository. You can list the local images with the following command:

    docker images

  2. Look for the TRTIS image (e.g. nvcr.io/nvidia/tritonserver) and the correct tag for the release (e.g. 20.07-v1-py3).

  3. Download both the input dataset and the trained model from the MODEL SCRIPTS section for this container on NGC, following the steps in the Setup section.

10.10.7.2. Step 1

Switch to your working directory (e.g. test_malaria).

10.10.7.3. Step 2

Create, if they do not exist, the following directories under your working directory (a scripted sketch follows the list):

  • input containing the input image files
  • output for the classification output
  • publish for publishing data for the Render Server
  • logs for the log files
  • models for the model repository. Copy the contents of the classification_malaria_v1 folder into this model repository folder.
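
The setup above can be scripted, for example with the following Python sketch (it assumes the trained model was downloaded to ./classification_malaria_v1 in the working directory, per the Prerequisites):

import shutil
from pathlib import Path

work = Path(".")  # your working directory, e.g. test_malaria
for name in ("input", "output", "publish", "logs", "models"):
    (work / name).mkdir(exist_ok=True)

# Copy the model folder into the model repository (Python 3.8+ for dirs_exist_ok).
shutil.copytree(work / "classification_malaria_v1",
                work / "models" / "classification_malaria_v1",
                dirs_exist_ok=True)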

10.10.7.4. Step 3

In your working directory, create a shell script (e.g. run_malaria.sh or another name if you prefer), copy the sample content below, and save it.

#!/bin/bash

# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.

# Define the name of the app (aka operator), assumed the same as the project folder name
APP_NAME="app_malaria"

# Define the TensorRT Inference Server Docker image, which will be used for testing
# Use either local repo or NVIDIA repo
TRTIS_IMAGE="nvcr.io/nvidia/tritonserver:20.07-v1-py3"

# Launch the container with the following environment variables
# to provide runtime information.
export NVIDIA_CLARA_TRTISURI="localhost:8000"

# Define the model name for use when launching TRTIS with only the specific model
MODEL_NAME="classification_malaria_v1"

# Create a Docker network so that containers can communicate on this network
NETWORK_NAME="container-demo"

# Create network
docker network create ${NETWORK_NAME}

# Run TRTIS (name: trtis), mapping ./models/${MODEL_NAME} to /models/${MODEL_NAME}
# (localhost:8000 will be used)
nvidia-docker run --name trtis --network ${NETWORK_NAME} -d --rm --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 \
    -p 8000:8000 \
    -v $(pwd)/models/${MODEL_NAME}:/models/${MODEL_NAME} ${TRTIS_IMAGE} \
    tritonserver --model-repository=/models

# Wait until TRTIS is ready
trtis_local_uri=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' trtis)
echo -n "Wait until TRTIS (${trtis_local_uri}) is ready..."
while [ $(curl -s ${trtis_local_uri}:8000/api/status | grep -c SERVER_READY) -eq 0 ]; do
    sleep 1
    echo -n "."
done
echo "done"

export NVIDIA_CLARA_TRTISURI="${trtis_local_uri}:8000"

# Run ${APP_NAME} container.
# Launch the app container with the following environment variables internally,
# to provide input/output path information.
docker run --name ${APP_NAME} --network ${NETWORK_NAME} -it --rm \
    -v $(pwd)/input:/input \
    -v $(pwd)/output:/output \
    -v $(pwd)/logs:/logs \
    -v $(pwd)/publish:/publish \
    -e NVIDIA_CLARA_TRTISURI \
    -e NVIDIA_CLARA_NOSYNCLOCK=TRUE \
    ${APP_NAME}

echo "${APP_NAME} is done."

# Stop TRTIS container
echo "Stopping TRTIS"
docker stop trtis > /dev/null

# Remove network
docker network remove ${NETWORK_NAME} > /dev/null


10.10.7.5. Step 4

Execute the script as shown below and wait for the application container to finish:

./run_malaria.sh


10.10.7.6. Step 5

Check for classification results in the output directory. Ensure that the number of output PNG images is the same as the number of input images.
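
For example, the counts can be compared with a short Python check run from the working directory:

from pathlib import Path

n_in = len(list(Path("input").glob("*.png")))
n_out = len(list(Path("output").glob("*.png")))
print(f"{n_in} input images, {n_out} output images")
assert n_in == n_out, "some images were not classified"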

10.10.7.7. Step 6

To visualize the classification results, use any PNG image viewer (such as GIMP).
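
Alternatively, an output image can be displayed from Python with Pillow (a one-off check, assuming the first output file exists):

from PIL import Image

Image.open("output/output-1.png").show()  # opens the image in the system viewer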
