11.21. Clara Deploy AI COVID-19 Classification Pipeline

CAUTION: This is NOT for diagnostic use.

This asset requires the Clara Deploy SDK. Follow the instructions on the Clara Bootstrap page to install the Clara Deploy SDK.

This reference pipeline infers the probability of COVID-19 infection from a patient's lung CT scan. It uses a 3D lung segmentation model and a 3D classification model. The pipeline input is a single axial DICOM series of a lung CT scan, and the final output is the probabilities of COVID-19 and non-COVID-19 in CSV format.

Once the DICOM instances are received, the pipeline is triggered: it first converts the DICOM instances to a volume image in MetaImage format. This image is then used as the input to the lung segmentation operator, which performs inference using the segmentation AI model and generates a labeled segmentation as a binary mask on each slice of the volume, with the lung labeled 1 and the background labeled 0. In the next step, the COVID-19 classification operator uses the original volume image along with the labeled segmentation image to infer the probabilities of COVID-19 using the COVID-19 classification model.
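
As an illustration of these intermediate artifacts, the short sketch below loads the converted volume and the segmentation mask and counts the lung voxels. It assumes SimpleITK and NumPy are available and uses hypothetical file names; the actual files produced by the pipeline are named after the DICOM series instance UID.

import numpy as np
import SimpleITK as sitk

# Hypothetical file names; the pipeline names its files after the DICOM series instance UID.
volume = sitk.ReadImage("series-uid.mhd")      # converted CT volume
mask = sitk.ReadImage("series-uid-seg.mhd")    # labeled segmentation from lung-segmentation

print("Volume size:", volume.GetSize())
mask_array = sitk.GetArrayFromImage(mask)      # numpy array, shape (slices, rows, cols)

# Pipeline convention: lung voxels are labeled 1, background voxels 0.
lung_voxels = int(np.count_nonzero(mask_array == 1))
print(f"Lung voxels: {lung_voxels} of {mask_array.size}")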

The pipeline generates the following outputs:

  • Lung segmentation image in MetaImage format

  • A new DICOM series for the segmentation image, optionally sent to a DICOM device

  • Probabilities of COVID-19 and non-COVID-19 in CSV format (see the parsing sketch after this list)

  • The original and segmented volumes in MetaImage format, sent to the Clara Deploy Render Server for visualization
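
The classification CSV can be consumed with a few lines of Python. The sketch below assumes a simple two-column layout of class name and probability; the exact columns written by the classification operator may differ, and the path is hypothetical.

import csv

# Hypothetical output path; layout assumed to be "class name, probability" per row.
with open("output/classification.csv", newline="") as f:
    for class_name, probability in csv.reader(f):
        print(f"{class_name}: {float(probability):.4f}")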

11.21.1. Pipeline Definition

This pipeline is defined in the Clara Deploy pipeline definition language. It utilizes built-in reference containers to construct the following set of operators:

  • The dicom-reader operator converts input DICOM data into volume images in MetaImage format.

  • The segmentation operator performs AI inference against the NVIDIA Triton Inference Server, formerly known as TRTIS, to generate segmentation volume images.

  • The classification operator performs AI inference against the NVIDIA Triton Inference Server to infer the probability of COVID-19.

  • The dicom-writer operator converts the segmented volume image into DICOM instances with a new series instance UID but the same study instance UID as the original DICOM series (sketched in the example after this list).

  • The register-dicom-output operator registers the DICOM instances with the Clara Deploy DICOM Adapter, which in turn stores the instances on external DICOM devices per its configuration.

  • The register-volume-images-for-rendering operator registers original and segmented volume images with the Clara Deploy Render Server for visualization.
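
The UID handling performed by the dicom-writer can be illustrated with pydicom (an assumption for illustration; the actual operator is a prebuilt container). A new series instance UID is generated while the study instance UID is left untouched, so the segmentation series stays grouped under the original study:

import pydicom
from pydicom.uid import generate_uid

ds = pydicom.dcmread("original/slice-001.dcm")   # hypothetical source instance

ds.SeriesInstanceUID = generate_uid()            # fresh series instance UID for the segmentation
# ds.StudyInstanceUID is deliberately left unchanged, so the new series is
# grouped under the same study as the original scan.
ds.save_as("output/seg-slice-001.dcm")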

The following is the pipeline definition in full, with comments describing each operator's function as well as its input and output.

api-version: 0.4.0
name: COVID-19-pipeline
operators:
# dicom reader operator
# Input: '/input' mapped directly to the input of the pipeline, which is populated by the DICOM Adapter.
# Output: '/output' for saving the converted volume image in MHD format to a file whose name
#         is the same as the DICOM series instance UID.
- name: dicom-reader
  description: Converts DICOM instances into MHD, one file per DICOM series.
  container:
    image: clara/dicom-reader
    tag: latest
  input:
  - path: /input
  output:
  - path: /output
# lung-segmentation operator
# Input: `/input` containing volume image data, MHD format, with a single volume.
# Output: `/output` containing segmented volume image, MHD format.
#         `/publish` containing original and segmented volume images, MHD format,
#             along with rendering configuration file.
- name: lung-segmentation
  description: Segmentation of lung using DL trained model.
  container:
    image: clara/ai-lung
    tag: latest
  requests:
    gpu: 1
  input:
  - from: dicom-reader
    path: /input
  output:
  - path: /output
    name: segmentation
  - path: /publish
    name: rendering
  services:
  - name: trtis
  # TensorRT Inference Server, required by this AI application.
    container:
      image: nvcr.io/nvidia/tensorrtserver
      tag: 19.08-py3
      command: ["trtserver", "--model-store=$(NVIDIA_CLARA_SERVICE_DATA_PATH)/models"]
    # services::connections defines how the TRTIS service is expected to
    # be accessed. Clara Platform supports network ("http") and
    # volume ("file") connections.
    connections:
      http:
      # The name of the connection is used to populate an environment
      # variable inside the operator's container during execution.
      # This AI application inside the container needs to read this variable to
      # know the IP and port of TRTIS in order to connect to the service.
      - name: NVIDIA_CLARA_TRTISURI
        port: 8000
      # Some services need a specialized or minimal set of hardware. In this case
      # NVIDIA TensorRT Inference Server [TRTIS] requires at least one GPU to function.
# dicom writer operator
# Input1: `/input` containing a volume image file, in MHD format, name matching the DICOM series instance UID.
# Input2: `/dicom` containing the original DICOM instances, i.e., .dcm files.
# Output: `/output` containing the DICOM instances converted from the volume image, with updated attributes
#         based on original DICOM instances.
- name: dicom-writer
  description: Converts MHD into DICOM instances with attributes based on the original instances.
  container:
    image: clara/dicom-writer
    tag: latest
  input:
  - from: lung-segmentation
    name: segmentation
    path: /input
  - path: /dicom
  output:
  - path: /output
    name: dicom
# register-volume-images-for-rendering operator
# Input: Published original and segmented volume images, MHD format, along with rendering configuration file
#        from the segmentation operator.
# Output: N/A. Input data will be sent to the destination, namely `renderserver`, for the Render Server DataSet Service.
- name: register-volume-images-for-rendering
  description: Register volume images, MHD format, for rendering.
  container:
    image: clara/register-results
    tag: latest
    command: ["python", "register.py", "--agent", "renderserver"]
  input:
  - from: lung-segmentation
    name: rendering
    path: /input
# register-dicom-output operator
# Input: `/input` containing DICOM instances in the named output, `dicom` from dicom-writer operator.
# Output: N/A. Input data will be sent to the destinations, namely DICOM devices, by the Clara DICOM SCU agent.
- name: register-dicom-output
  description: Register converted DICOM instances with Results Service to be sent to external DICOM devices.
  container:
    image: clara/register-results
    tag: latest
    command: ["python", "register.py", "--agent", "ClaraSCU", "--data", "[\"MYPACS\"]"]
  input:
  - from: dicom-writer
    name: dicom
    path: /input
# COVID-19 Classification operator
# Input: original image (DICOM series converted image) and segmented volume images, MHD format.
# Output: CSV file for classification results: the probabilities for both COVID-19 and non-COVID-19.
- name: classification-covid-19
  description: Classification of COVID-19 using DL model with original and segmentation images.
  container:
    image: clara/ai-covid-19
    tag: latest
  requests:
    gpu: 1
  input:
  - from: lung-segmentation
    name: segmentation
    path: /label_image
  - from: dicom-reader
    path: /input
  output:
  - path: /output
    name: classification
  services:
  - name: trtis
  # TensorRT Inference Server, required by this AI application.
    container:
      image: nvcr.io/nvidia/tensorrtserver
      tag: 19.08-py3
      command: ["trtserver", "--model-store=$(NVIDIA_CLARA_SERVICE_DATA_PATH)/models"]
    # services::connections defines how the TRTIS service is expected to
    # be accessed. Clara Platform supports network ("http") and
    # volume ("file") connections.
    connections:
      http:
      # The name of the connection is used to populate an environment
      # variable inside the operator's container during execution.
      # This AI application inside the container needs to read this variable to
      # know the IP and port of TRTIS in order to connect to the service.
      - name: NVIDIA_CLARA_TRTISURI
        port: 8000
      # Some services need a specialized or minimal set of hardware. In this case
      # NVIDIA TensorRT Inference Server [TRTIS] requires at least one GPU to function.
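
As the comments above describe, the platform injects the TRTIS address into the operator's container through the NVIDIA_CLARA_TRTISURI environment variable. The sketch below shows how application code might read it and probe the server, assuming the variable holds "<ip>:<port>" and that the TRTIS 19.08 HTTP API exposes /api/health/ready; production code would normally use the TRTIS client library for the actual inference calls.

import os
import urllib.request

# Populated by Clara Platform per the services::connections section above.
trtis_uri = os.environ["NVIDIA_CLARA_TRTISURI"]  # e.g. "10.0.0.5:8000"

# Probe server readiness over the HTTP connection declared in the pipeline definition.
with urllib.request.urlopen(f"http://{trtis_uri}/api/health/ready", timeout=10) as resp:
    print("TRTIS is ready" if resp.status == 200 else f"Unexpected status: {resp.status}")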

11.21.2. Executing the Pipeline

Please refer to the How to Run a Reference Pipeline section to learn how to register a pipeline, configure the DICOM Adapter, and execute the pipeline.

11.21.3. License

An End User License Agreement is included with the product. By pulling and using the Clara Deploy asset on NGC, you accept the terms and conditions of this license.

11.21.4. Suggested Reading

Release Notes, the Getting Started Guide, and the SDK itself are available at the NVIDIA Developer forum.

For answers to any questions you may have about this release, visit the NVIDIA Devtalk forum.