7.3. Clara Deploy SDK VNet Segmentation Operator¶
The VNet Segmentation operator within the Clara Deploy SDK performs segmentation and labeling of organs in a reconstructed CT abdominal volume. The operator uses the NVIDIA TensorRT Inference Server (TRTIS), which is hosted as a service on the Clara Deploy SDK. To use the TRTIS inference API, the operator depends on the TRTIS Python API client package, which is installed in the operator.
7.3.2. Data Input¶
The VNet-based Segmentation operator uses the MHD image format for its input and output files.
Reconstructed CT abdominal images are fed into the VNet Segmentation operator as an MHD volume.
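An MHD (MetaImage) file stores a plain-text header, typically paired with a separate raw data file. As a rough illustration of what the operator consumes, here is a minimal, hypothetical header parser; in practice a library such as SimpleITK would be used to load the volume, and this helper is not part of the SDK.

```python
# Minimal sketch of reading a MetaImage (.mhd) header, which is plain
# "Key = Value" text. Real code would use a library such as SimpleITK;
# this helper is illustrative only.
def parse_mhd_header(path):
    header = {}
    with open(path) as f:
        for line in f:
            if "=" not in line:
                continue  # skip malformed or blank lines
            key, value = line.split("=", 1)
            header[key.strip()] = value.strip()
    return header

# Example usage with typical MetaImage fields:
# hdr = parse_mhd_header("recon.mhd")
# dims = [int(v) for v in hdr["DimSize"].split()]
```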
7.3.3. Data Output¶
The operator outputs a segmented mask with labeled organs in MHD format.
The following parameters are supported:
ROI: Region of interest in terms of X, Y, Z pixel locations, specified as (x1,x2,y1,y2,z1,z2).
Pre Axis Codes: 3-character axis codes (e.g., LPS or RAS) used to transpose the 3D matrix after loading the image. The default value is 'PRS'.
Post Axis Codes: 3-character axis codes (e.g., LPS or RAS) used to transpose the 3D matrix before writing the output. The default is the original axis codes from the input image.
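The ROI string can be read as start/end index pairs per axis. Judging from the default value `0,-1,0,-1,0,-1` shown later in the Dockerfile, `-1` plausibly means "to the end of the axis"; this interpretation, and the helper below, are assumptions for illustration, not the operator's actual code.

```python
import numpy as np

def parse_roi(roi_string):
    """Parse an ROI string "x1,x2,y1,y2,z1,z2" into three slice objects.

    An end value of -1 is treated as "to the end of the axis", matching
    the default "0,-1,0,-1,0,-1". This semantics is an assumption;
    check the operator source for the exact behavior.
    """
    v = [int(x) for x in roi_string.split(",")]
    if len(v) != 6:
        raise ValueError("ROI must have 6 comma-separated values")
    return tuple(
        slice(v[i], None if v[i + 1] == -1 else v[i + 1])
        for i in (0, 2, 4)
    )

# Example: crop a volume to the full extent (the default ROI)
volume = np.zeros((64, 64, 32), dtype=np.float32)
cropped = volume[parse_roi("0,-1,0,-1,0,-1")]
```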
The VNet Segmentation operator depends on the TRTIS server for inference and on the TRTIS client for making inference calls to the server. The TRTIS server must be running for the operator to execute.
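A readiness check against the server could be sketched as a polling loop. The `/api/health/ready` path matches the legacy TRTIS (v1) HTTP API, but verify it against your server version; the helper itself is an assumption, not SDK code.

```python
import time
import urllib.request
import urllib.error

def wait_for_trtis(base_url, probe=None, retries=30, delay=2.0):
    """Poll a TRTIS-style HTTP health endpoint until it reports ready.

    The "/api/health/ready" path is the legacy TRTIS (v1) HTTP health
    endpoint; confirm it for your server version. `probe` can be
    injected for testing and must return True when the server is ready.
    """
    if probe is None:
        def probe():
            try:
                with urllib.request.urlopen(
                        base_url + "/api/health/ready", timeout=2) as r:
                    return r.status == 200
            except (urllib.error.URLError, OSError):
                return False
    for _ in range(retries):
        if probe():
            return True
        time.sleep(delay)
    return False
```

A launcher script would call this before starting the operator, and abort (or keep retrying) if the server never becomes ready.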
7.3.6. Directory Structure¶
This sample includes the following folders and files:
main.py: Entry point to the VNet Segmentation operator. This script sets up the inference context on the TRTIS server, prepares the input data and variables for inference execution, and uses the VNet application (app.py) to start the inference process.
Supported parameters, passed as environment variables, are defined in the Dockerfile definition below.
This script creates the Docker image.
The TRTIS client is installed while creating the Docker image.
Default environment variables are specified during Docker image creation. Environment variables can be updated at run time, either locally or within the Clara Platform. The following environment variables are supported:
ENV vnet_seg_infile recon.mhd             # input MHD filename
ENV vnet_seg_outfile recon.vnet.seg.mhd   # output MHD segmented mask
ENV vnet_seg_indir data                   # input data folder
ENV vnet_seg_outdir data                  # output data folder
ENV vnet_seg_roi 0,-1,0,-1,0,-1           # region of interest for segmentation
ENV vnet_seg_target_shape 144,144,144,1,1
ENV vnet_seg_pre_axcodes None
ENV vnet_seg_post_axcodes None
ENV vnet_seg_pre_interp_order 0
ENV vnet_seg_post_interp_order 0
ENV NVIDIA_CLARA_TRTISURI trtis:8000
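On the Python side, these variables could be read with fallbacks matching the Dockerfile defaults. The variable names come from the Dockerfile above; the helper structure and dictionary keys are assumptions for illustration, not the operator's actual code.

```python
import os

def read_operator_config():
    """Read the operator's configuration from environment variables,
    falling back to the Dockerfile defaults. The helper is a sketch;
    only the variable names and defaults come from the Dockerfile."""
    return {
        "infile": os.environ.get("vnet_seg_infile", "recon.mhd"),
        "outfile": os.environ.get("vnet_seg_outfile", "recon.vnet.seg.mhd"),
        "indir": os.environ.get("vnet_seg_indir", "data"),
        "outdir": os.environ.get("vnet_seg_outdir", "data"),
        "roi": os.environ.get("vnet_seg_roi", "0,-1,0,-1,0,-1"),
        "trtis_uri": os.environ.get("NVIDIA_CLARA_TRTISURI", "trtis:8000"),
    }
```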
This script executes the VNet Segmentation Docker container locally with all required arguments specified as environment variables. The Docker image must be created before running this script.
This script ensures that the TRTIS server is up and running before building and running the VNet Segmentation operator:
The VNet segmentation model is mounted on the TRTIS server (for use during inference).
The VNet Segmentation operator is executed.
7.3.7. Execution of VNet Segmentation Operator (Locally)¶
7.3.7.1. Run the Docker Image¶
Before executing the VNet segmentation operator locally, ensure that all environment variables are correctly set and the appropriate data folders are mounted. Update the script with the following:
Get the data: A sample dataset is included in the SDK zip under the test-data folder. The sample dataset for VNet is CT_VOL_DCM_0.0.1.zip. If using the sample dataset, unzip it before use.
Input mount folder: -v "folder path":/app/input
Output mount folder: -v "folder path":/app/output
If required, update environment variables from their defaults. Default values are specified above in the Dockerfile definition.
The default mount path and a few environment variables are set in run_vnet_docker.sh.
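The docker invocation performed by the run script can be sketched as assembling a command line from the mounts and environment variables. The image name, mount targets, and helper below are assumptions for illustration; the actual script is run_vnet_docker.sh.

```python
import shlex

def build_docker_run_cmd(image, input_dir, output_dir, env=None):
    """Build a `docker run` command with input/output mounts and
    environment variables, mirroring what run_vnet_docker.sh does.
    The /app/input and /app/output mount targets come from the doc;
    the image name passed in is an assumption."""
    cmd = ["docker", "run", "--rm",
           "-v", f"{input_dir}:/app/input",
           "-v", f"{output_dir}:/app/output"]
    for key, value in (env or {}).items():
        cmd += ["-e", f"{key}={value}"]
    cmd.append(image)
    return cmd

# Example: print the assembled command for inspection
print(" ".join(shlex.quote(c) for c in build_docker_run_cmd(
    "vnet-seg:latest", "/data/in", "/data/out",
    {"vnet_seg_roi": "0,-1,0,-1,0,-1"})))
```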
To build the VNet Segmentation Docker container:
To run the Docker image standalone:
# Run the Docker container using the script
./run_vnet_docker.sh
The output segmented and labeled volume (in MHD format) is saved in the output mount folder as specified in the script.
7.3.8. Execution of VNet Segmentation Operator within Clara¶
The following are the execution steps on Clara:
Complete the Clara installation and deployment procedure. Ensure that dicom-reader, dicom-writer, recon-operator, ai-vnet, and register-dicom-service are deployed.
Ensure all required services are up and running. Refer to the installation guide for the correct startup procedure.
Pipeline definition files: Get access to the "ct-recon-vnetseg.yaml" and "ct-vnetseg.yaml" pipeline definition files and update the variables if required. These files are located in the SDK zip under the clara-reference-pipelines folder.
The TRTIS service is pre-configured in the pipeline definition.
Get access to the DICOM projection dataset (raw_dicom_abd_D2_r2_0.0.1.zip) required for ct-recon-vnetseg.yaml and the DICOM CT volume (abdomen) dataset (CT_VOL_DCM_0.0.1.zip) required for ct-vnetseg.yaml. Both datasets are in the test-data folder in the SDK zip.
Steps for ct-vnetseg.yaml are illustrated below. The same steps apply to ct-recon-vnetseg.yaml (with a change in the input dataset).
Generate the pipeline ID using the following command:
clara create pipelines -p ct-vnetseg.yaml
Update the DICOM adapter configuration with the pipeline ID and the AE title used for this pipeline. If desired, create a new AE title and configure it.
Restart the DICOM adapter:
clara dicom stop
clara dicom start
Execute the storescu command to kick off the pipeline.
# Send the input DICOM data to the configured AE title to start the pipeline
storescu -v +sd +r -xb -aet "DCM4CHEE" -aec <AETITLE> <LOCAL_IP> <PORT> <INPUT_DICOM_DIRECTORY>
Open the configured PACS and verify the output; register-dicom-service should have pushed the segmented image.
Open http://localhost:8000 to visualize the operators running in the pipeline. Once all operators are marked green, the pipeline is complete. Related logs can be viewed on the webpage by clicking on individual operators.