Liver Segmentation Non-DICOM Pipeline
The Liver Segmentation Non-DICOM pipeline is one of the reference pipelines in the Clara Deploy SDK. It takes a volume image in MetaImage or NIfTI format (`.mhd`, `.nii`, or `.nii.gz`) containing the axial slices of an abdominal CT study. The volume image is processed directly by the Liver Segmentation AI model, which generates a label mask of the same size as the input volume marking the liver and any tumors within it: the background is labeled 0, the liver 1, and tumors within the liver 2. In its final step, the pipeline saves the original and the segmented volume to the Clara Deploy Render Server for visualization on the Clara Dashboard.
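As a sketch of the label convention above, the snippet below counts voxels per label in a small synthetic mask standing in for real model output (plain NumPy; in the actual pipeline the mask is a MetaImage/NIfTI volume, typically read with a library such as SimpleITK):

```python
import numpy as np

# Synthetic 4x4x4 label mask standing in for the segmentation output.
mask = np.zeros((4, 4, 4), dtype=np.uint8)
mask[1:3, 1:3, 1:3] = 1   # liver voxels, labeled 1
mask[2, 2, 2] = 2         # a tumor voxel inside the liver, labeled 2

# Tally each label as described above: 0 = background, 1 = liver, 2 = tumor.
background = int(np.sum(mask == 0))
liver = int(np.sum(mask == 1))
tumor = int(np.sum(mask == 2))
print(background, liver, tumor)  # 56 7 1
```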
The Liver Segmentation Non-DICOM pipeline is defined in the Clara Deploy pipeline definition language. It uses built-in reference containers to construct the following set of operators:
- The liver-tumor-segmentation operator performs AI inference against the NVIDIA Triton Inference server to generate liver and tumor segmentation volume images.
- The register-volume-images-for-rendering operator registers original and segmented volume images with the Clara Deploy Render Server for visualization.
The following is the pipeline definition, with comments describing each operator’s function as well as its input and output.
```yaml
api-version: 0.4.0
name: liver-tumor-nonDICOM-pipeline
operators:
# liver-tumor-segmentation operator
# Input: `/input` containing volume image data, MHD format, with a single volume.
# Output: `/output` containing the segmented volume image, MHD format.
#         `/publish` containing the original and segmented volume images, MHD format,
#         along with the rendering configuration file.
- name: liver-tumor-segmentation
  description: Segmentation of liver and tumor using a DL trained model.
  container:
    image: clara/ai-livertumor
    tag: latest
  requests:
    gpu: 1
  input:
  - path: /input
  output:
  - path: /output
    name: segmentation
  - path: /publish
    name: rendering
  services:
  - name: trtis
    # Triton Inference Server, required by this AI application.
    container:
      image: nvcr.io/nvidia/tritonserver
      tag: 20.07-v1-py3
      command: ["tritonserver", "--model-repository=$(NVIDIA_CLARA_SERVICE_DATA_PATH)/models"]
    # services::connections defines how the Triton service is expected to
    # be accessed. Clara Platform supports network ("http") and
    # volume ("file") connections.
    connections:
      http:
      # The name of the connection is used to populate an environment
      # variable inside the operator's container during execution.
      # The AI application inside the container reads this variable to
      # obtain the IP and port of Triton in order to connect to the service.
      - name: NVIDIA_CLARA_TRTISURI
        port: 8000
    # Some services need a specialized or minimal set of hardware. In this case,
    # the NVIDIA Triton Inference Server (formerly the TensorRT Inference Server,
    # or TRTIS) requires at least one GPU to function.
# register-volume-images-for-rendering operator
# Input: The published original and segmented volume images, MHD format, along with the
#        rendering configuration file from the liver-tumor-segmentation operator.
# Output: N/A. Input data is sent to the destination, namely `renderserver`,
#         the Render Server DataSet Service.
- name: register-volume-images-for-rendering
  description: Register volume images, MHD format, for rendering.
  container:
    image: clara/register-results
    tag: latest
    command: ["python", "register.py", "--agent", "renderserver"]
  input:
  - from: liver-tumor-segmentation
    name: rendering
    path: /input
```
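The NVIDIA_CLARA_TRTISURI connection is surfaced to the segmentation operator as an environment variable holding the host and port of the Triton service. A minimal sketch of how an application inside the operator container might pick it up (the variable name comes from the definition above; the local fallback value is an assumption for illustration, not part of the pipeline):

```python
import os

# NVIDIA_CLARA_TRTISURI is populated by Clara Platform with the "host:port"
# of the Triton service; the fallback here is only for local testing (assumed).
triton_uri = os.environ.get("NVIDIA_CLARA_TRTISURI", "localhost:8000")

# Base HTTP endpoint the AI application would use to reach Triton.
base_url = f"http://{triton_uri}"
print(base_url)
```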
Please refer to the How to Run a Reference Pipeline section to learn how to register a pipeline, configure the DICOM Adapter, and execute the pipeline.