12.4. Vnet Segmentation pipeline

Vnet Segmentation pipelines are defined using Clara's Pipeline Definition Language. Sample Vnet Segmentation pipelines are located in the clara-reference-pipelines folder of the SDK zip. All operators are deployed under the “clara” folder (tag 0.2.0) and are referenced by the same names in the pipeline definitions. Two Vnet segmentation pipelines are provided: the “ct-vnetseg.yaml” pipeline performs segmentation on a CT abdominal volume, while the “ct-recon-vnetseg.yaml” pipeline first performs reconstruction and then Vnet segmentation within the same pipeline.

A sample dataset in DICOM format is available for each pipeline within the SDK (under the test-data folder in the SDK zip). If you use the sample datasets, unzip them before use. CT_VOL_DCM_0.0.1.zip contains the data for the “ct-vnetseg.yaml” pipeline, and raw_dicom_abd_D2_r2_0.0.1.zip contains the data for the “ct-recon-vnetseg.yaml” pipeline.

The Vnet Segmentation pipeline definition (ct-vnetseg.yaml) consists of 4 operators (dicom-reader, ai-vnet, dicom-writer, register-dicom-results). The dicom-reader operator converts the input DICOM data into MHD format. The ai-vnet operator segments the data coming from dicom-reader. The dicom-writer operator converts the segmented volume mask into DICOM format. The register-dicom-results operator transfers the DICOM volume (from dicom-writer) to the configured PACS destination.

The Recon+Vnet Segmentation pipeline definition (ct-recon-vnetseg.yaml) consists of 5 operators (dicom-reader, recon-operator, ai-vnet, dicom-writer, register-dicom-results). The dicom-reader operator converts the input DICOM data into MHD format. The recon-operator reconstructs the data coming from dicom-reader. The ai-vnet operator segments the reconstructed data coming from recon-operator. The dicom-writer operator converts the segmented volume mask from ai-vnet into DICOM format. The register-dicom-results operator transfers the DICOM volume (from dicom-writer) to the configured PACS destination.
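For orientation, each operator described below is one entry in the operators list of a single pipeline definition file. A rough skeleton of ct-recon-vnetseg.yaml is sketched below; the api-version value and the pipeline name are illustrative assumptions, so use the values in the reference pipeline shipped with the SDK.

    api-version: 0.3.0          # assumed value for illustration; use the version in the shipped reference pipeline
    name: ct-recon-vnetseg      # illustrative pipeline name
    operators:                  # operators are listed in the order data flows through them
      - name: dicom-reader      # full definitions are shown later in this section
      - name: recon-operator
      - name: ai-vnet
      - name: dicom-writer
      - name: register-dicom-output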

All operators used in both pipelines are explained below in detail.

  • dicom-reader: This operator is used in both pipelines.

    Input DICOM data is mounted on the /input folder, and the output from this operator goes into the mounted /output folder. The dicom-reader operator reads the input DICOM images and converts them into MHD. The output of dicom-reader becomes the input of the next operator, as specified in the pipeline (recon-operator in ct-recon-vnetseg.yaml, ai-vnet in ct-vnetseg.yaml).

    Dicom-reader is the first container in the pipeline. For the first container, the “from” field is not required in the “input” definition. Dicom-Adapter picks the first operator in the pipeline and sends the data to its mounted /input folder.

    - name: dicom-reader
      description: Converts DICOM data into MHD.
      container:
        image: clara/dicom-reader   # dicom-reader container
        tag: latest
      input:                        # specify where the input is coming from for this container
        - path: /input              # incoming data is mounted on container's /input folder
      output:                       # specify the output for this container
        - path: /output

  • recon-operator: This operator is used only in the “ct-recon-vnetseg.yaml” pipeline.

    The recon-operator takes the result from dicom-reader, which is mounted on its /app/in folder. The recon-operator has 3 output folders: the ‘out’ folder receives the reconstructed volume, the ‘logs’ folder receives all logs from the recon-operator, and the ‘geom’ folder contains the geometry file created by the recon-operator. The recon-operator defines the reconstruction parameters under ‘variables’; all parameters are described in detail in the recon-operator definition. These variables are passed by the platform as environment variables to the recon-operator at runtime. Reconstruction is GPU accelerated, and the need for a GPU is specified with ‘requests’ in the recon-operator definition.

    - name: recon-operator
      description: CT reconstruction algorithm on 3D cone beam projections
      container:
        image: clara/recon-operator
        tag: latest
      variables:                  # All supported reconstruction parameters with values used for the sample dataset
        nvrtkalgo: fdk
        nvrtkhardware: gpu
        nvrtkhann: 0.95
        nvrtkdimension: 512,512,147
        nvrtkspacing: 0.668,0.668,3
        nvrtkorigin: -171,-171,0
        nvrtkindir: in
        nvrtkoutdir: out
        nvrtkgeomdir: geom
        nvrtklogs: logs
        nvrtkproj: 360            # projections in the simulated abdominal scan are 360
      requests:
        gpu: 1
        memory: 10240
      input:                      # specify the inputs coming to recon-operator under this field
        - from: dicom-reader      # input is coming from dicom-reader's output folder
          path: /app/in           # input mount folder for recon-operator
      output:                     # specify the output from recon-operator under this field
        - name: out               # folder name that will contain the output, this name will be used if this folder is an input to any other operator
          path: /app/out          # mounted path for out folder
        - name: logs              # folder name for logs
          path: /app/logs         # mounted folder for logs
        - name: geom              # folder name for geometry file(s) created from within recon-operator
          path: /app/geom         # mounted folder for geometry file(s)

  • ai-vnet: This operator is used in both pipelines. Ai-vnet’s definition in ct-recon-vnetseg.yaml is described below.

    The ai-vnet operator takes the result from the recon-operator, which is mounted on its /app/input folder. Ai-vnet’s output goes into the /app/output folder. This operator defines the segmentation parameters under ‘variables’; all parameters are described in detail in the Vnet operator definition. These variables are passed by the platform as environment variables to the ai-vnet operator at runtime.

    This operator uses the TRTIS service to perform inference. Models must be copied to the pre-configured location specified by the Clara platform as NVIDIA_CLARA_SERVICE_DATA_PATH.

    In order for operators to contact the TRTIS service, users must define an http connection in the service definition, in the form of a name and the port exposed by the service container. The name for the port is mounted as an environment variable in the operators that declare the need for that service, and the value of the environment variable is the URI of the service. In the example below, the environment variable NVIDIA_CLARA_TRTISURI is mounted in the ai-vnet operator. Its value is of the form <service IP>:8000, where the IP address is assigned after the service is deployed.

    - name: ai-vnet
      description: Segmentation algorithm on reconstructed volume
      container:
        image: clara/ai-vnet
        tag: latest
      requests:
        gpu: 1
        memory: 8196
      input:                      # specify the input for the container under this field
        - from: recon-operator    # input is coming from recon-operator
          name: out               # all input is in the folder called out from recon-operator's output
          path: /app/input        # mounted on folder /app/input
      output:
        - path: /app/output
          name: segmentation      # output is present in folder called segmentation
      variables:
        vnet_seg_indir: input
        vnet_seg_outdir: output
        vnet_seg_roi: 88,440,53,465,61,142   # roi for segmentation model to start processing
      services:                   # define the services used by the operator
        - name: trtis             # trtis inference service is used by ai-vnet
          container:
            image: nvcr.io/nvidia/tritonserver
            tag: 20.07-v1-py3
            # trtserver looks for models, and the models must be saved in a pre-configured folder.
            # Folder location is specified to the TRTIS server as below
            command: ["tritonserver", "--model-repository=$(NVIDIA_CLARA_SERVICE_DATA_PATH)/models"]
          connections:
            http:
              - name: NVIDIA_CLARA_TRTISURI
                port: 8000

Note

The input field for the ai-vnet operator differs in the ct-vnetseg.yaml pipeline: there, the input to ai-vnet comes from dicom-reader instead of recon-operator.
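As a rough sketch (the mount path below is an assumption carried over from the ct-recon-vnetseg.yaml definition above; check the shipped ct-vnetseg.yaml for the exact values), the ai-vnet input definition in ct-vnetseg.yaml would look like:

    input:                    # ct-vnetseg.yaml has no recon step, so ai-vnet reads dicom-reader's output directly
      - from: dicom-reader    # dicom-reader's single unnamed output folder is used, so no "name" field is needed
        path: /app/input      # assumed mount path, matching the ct-recon-vnetseg.yaml definition above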

  • dicom-writer: The segmented mask is the input to the dicom-writer operator and is mounted on the /input folder. Dicom-writer writes the results to the /output folder.

    - name: dicom-writer
      description: Converts MHD from Vnet into DICOM
      container:
        image: clara/dicom-writer
        tag: latest
      requests:
        memory: 4096
      input:                      # specify inputs coming to dicom-writer under this field
        - from: ai-vnet           # input to dicom-writer is coming from ai-vnet operator
          name: segmentation      # "segmentation" folder will be mounted from ai-vnet's output folder
          path: /input            # input folder is mounted to /input folder of dicom-writer container
        - path: /dicom
      output:                     # specify output of dicom-writer
        - path: /output
          name: dicom             # dicom-writer writes everything within the folder "dicom" under its output folder

  • register-dicom-results: Takes the output from the dicom-writer operator and sends the DICOM images to a pre-configured PACS destination.

    The ‘command’ field specifies the startup command for the register-results operator, along with several arguments. The register-results operator uses the DICOM Adapter configuration to register results. The “--agent” argument is used as a filter to query for results published by pipelines; in this pipeline, “ClaraSCU” is configured as the ae-title for the SCU services. The “--data” argument specifies the names of the destination PACS configurations. In the example below, “MYPACS” is one such configuration, and the argument can take multiple values if the results have to be shipped to multiple PACS destinations. Refer to the dicom-server-config.yaml shipped with the SDK (an illustrative destination entry is sketched after the operator definition below). For additional details, refer to the results-service operator and the Results Service documentation.

    - name: register-dicom-output
      description: Register reconstructed DICOM volume to external DICOM devices.
      container:
        image: clara/register-results
        tag: latest
        command: ["python", "register.py", "--agent", "ClaraSCU", "--data", "[\"MYPACS\"]"]
      input:                      # specify input for register-results
        - from: dicom-writer      # input comes from dicom-writer operator
          name: dicom             # input comes from the folder named "dicom" in dicom-writer's output folder
          path: /input
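    The destination name passed via “--data” must match an entry in the DICOM Adapter configuration, and the “--agent” value must match the configured SCU ae-title. The sketch below is illustrative only; the field names and the host/port/ae-title values are assumptions, and the authoritative format is the dicom-server-config.yaml shipped with the SDK.

      dicom:
        scu:
          ae-title: ClaraSCU           # matches the "--agent" argument above
          destinations:
            - name: MYPACS             # matches the "--data" argument above
              host-ip: 10.110.46.111   # IP address of the destination PACS (example value)
              port: 104                # DICOM port of the destination PACS (example value)
              ae-title: MYPACS_AET     # AE title of the destination PACS (example value)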

Refer to the How to run a Reference Pipeline section for details on creating a new pipeline, and to the Pipeline Definition Language section for details on the pipeline definition language.
