Liver Segmentation Non-DICOM Pipeline
The Liver Segmentation Non-DICOM pipeline is one of the reference pipelines in the Clara Deploy SDK. This pipeline takes a volume image in MetaImage or NIfTI format (.mhd, .nii, or .nii.gz) containing the axial slices of an abdominal CT study. The volume image is processed directly by the Liver Segmentation AI model, which generates a label mask of the liver and of tumors within the liver, with the same dimensions as the input volume: the liver is labeled as 1, tumors within the liver are labeled as 2, and the background is labeled as 0. In its final step, the pipeline saves the original and the segmented volume to the Clara Deploy Render Server for visualization on the Clara Dashboard.
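The labeling convention above (0 = background, 1 = liver, 2 = tumor) can be sketched with a small pure-Python check. This is an illustration only: `count_labels` is a hypothetical helper, not part of the SDK, and a real consumer would load the MHD or NIfTI mask with an imaging library such as SimpleITK rather than use nested lists.

```python
from collections import Counter

# Label values produced by the segmentation operator (per this document)
BACKGROUND, LIVER, TUMOR = 0, 1, 2

def count_labels(mask):
    """Count voxels per label in a 3D mask given as nested lists
    (slices -> rows -> voxel values)."""
    counts = Counter()
    for axial_slice in mask:
        for row in axial_slice:
            counts.update(row)
    return counts

# Tiny 2x2x3 example volume standing in for a segmented CT study
mask = [
    [[0, 1, 1], [0, 1, 2]],
    [[0, 0, 1], [0, 2, 2]],
]
counts = count_labels(mask)
print(counts[LIVER], counts[TUMOR])  # liver vs. tumor voxel counts
```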
The Liver Segmentation Non-DICOM pipeline is defined in the Clara Deploy pipeline definition language. This pipeline utilizes built-in reference containers to construct the following set of operators:
The liver-tumor-segmentation operator performs AI inference against the NVIDIA TensorRT Inference Server (TRTIS) to generate liver and tumor segmentation volume images.
The register-volume-images-for-rendering operator registers original and segmented volume images with the Clara Deploy Render Server for visualization.
The following is the pipeline definition in detail, with comments describing each operator’s function as well as its input and output.
api-version: 0.3.0
name: liver-tumor-nondicom-pipeline
operators:
# liver-tumor-segmentation operator
# Input: `/input` containing volume image data, MHD format, with a single volume.
# Output: `/output` containing segmented volume image, MHD format.
#         `/publish` containing original and segmented volume images, MHD format,
#         along with rendering configuration file.
- name: liver-tumor-segmentation
  description: Segmentation of liver and tumor inferencing using DL trained model.
  container:
    image: clara/ai-livertumor
    tag: 0.3.0
  input:
  - path: /input
  output:
  - path: /output
    name: segmentation
  - path: /publish
    name: rendering
  services:
  - name: trtis
    # TensorRT Inference Server, required by this AI application.
    container:
      image: nvcr.io/nvidia/tensorrtserver
      tag: 19.08-py3
      command: ["trtserver", "--model-store=$(NVIDIA_CLARA_SERVICE_DATA_PATH)/models"]
    # services::connections defines how the TRTIS service is expected to
    # be accessed.
    connections:
      http:
      # The name of the connection is used to populate an environment
      # variable inside the operator's container during execution.
      # The AI application inside the container needs to read this variable to
      # know the IP and port of TRTIS in order to connect to the service.
      - name: NVIDIA_CLARA_TRTISURI
        port: 8000
# register-volume-images-for-rendering operator
# Input: Published original and segmented volume images, MHD format, along with
#        rendering configuration file from liver-tumor-segmentation operator.
# Output: N/A. Input data will be sent to the destination, namely `renderserver`,
#         for the Render Server DataSet Service.
- name: register-volume-images-for-rendering
  description: Register volume images, MHD format, for rendering.
  container:
    image: clara/register-results
    tag: 0.2.0
    command: ["python", "register.py", "--agent", "renderserver"]
  input:
  - from: liver-tumor-segmentation
    name: rendering
    path: /input
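The `NVIDIA_CLARA_TRTISURI` connection defined above can be illustrated with a minimal sketch of how the AI application inside the operator's container might build the inference server URL from the injected environment variable. This is an assumption-laden example: `trtis_endpoint` is a hypothetical helper, and the readiness path shown is from the TRTIS 19.08 HTTP (v1) API.

```python
import os

def trtis_endpoint(path="/api/health/ready"):
    """Build a TRTIS HTTP URL from the environment variable that Clara
    populates in the operator's container. NVIDIA_CLARA_TRTISURI holds the
    "<ip>:<port>" of the TRTIS service; the default path here is the
    readiness check from the TRTIS 19.08 HTTP API."""
    uri = os.environ["NVIDIA_CLARA_TRTISURI"]
    return "http://{}{}".format(uri, path)

# Simulate the value Clara would inject at execution time
os.environ["NVIDIA_CLARA_TRTISURI"] = "10.0.0.5:8000"
print(trtis_endpoint())
```

After confirming readiness, the application would issue its inference requests against the same host and port, which is why the pipeline definition makes the connection name (and thus the environment variable) explicit.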
Please refer to the parent section for instructions on how to register a pipeline and how to execute it with the Clara Deploy command-line tool.