Clara Holoscan Deploy 0.7.4

9.6. Clara Deploy SDK VNet Segmentation Operator

The Clara Deploy SDK VNet Segmentation operator performs segmentation and labeling of organs in a CT abdominal reconstructed volume. This application uses the NVIDIA TensorRT Inference Server (TRTIS), which is hosted as a service on the Clara Deploy SDK. To use the TRTIS inference API, the application requires the TRTIS Python API client package, which is installed in the operator.

The VNet Segmentation operator accepts image files in MHD format. CT abdominal reconstructed images are fed into the operator's segmentation algorithm as an MHD volume.

The operator outputs a segmented mask with labeled organs in MHD image format.
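Because a MetaImage (MHD) volume is a plain-text header plus raw voxel data, the input can be sanity-checked without an imaging toolkit. The file name below is a placeholder for your downloaded dataset:

# Expect ASCII keys such as NDims, DimSize, ElementSpacing, ElementType, and ElementDataFile
head -n 20 input/your_volume.mhd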

The following parameters are supported:

  • ROI: A region of interest in terms of X,Y,Z pixel locations (x1,x2,y1,y2,z1,z2).

  • Pre Axis Codes: Three-character axis codes (e.g. LPS or RAS) for transposing the 3D matrix after loading the image. The default value is PRS.

  • Post Axis Codes: Three-character axis codes (e.g. LPS or RAS) for transposing the 3D matrix before writing the output. The default value is the original axis codes from the input image.
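These parameters reach the operator through environment variables: the sample script in Step 3 passes the ROI as vnet_seg_roi, for example. A minimal illustration, reusing the values from that script:

# ROI in x1,x2,y1,y2,z1,z2 order, as consumed by the operator container
export vnet_seg_roi=88,440,53,465,61,142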

The VNet Segmentation operator depends on the TRTIS server for inference and on the TRTIS client for making inference calls to the server. The TRTIS server must be running for the operator to execute.
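Because the operator cannot run without the server, it helps to verify readiness first. The sample script in Step 3 polls the TRTIS v1 status endpoint, and the same check can be run by hand (localhost:8000 assumes a locally launched TRTIS):

# Prints a nonzero count once the server reports SERVER_READY
curl -s localhost:8000/api/status | grep -c SERVER_READY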

9.6.6.1. Prerequisites

  1. Ensure the Docker image of TRTIS has been imported into the local Docker repository by listing the local images with the following command:


    docker images

  2. Look for the TRTIS image name (e.g. nvcr.io/nvidia/tritonserver) with the correct tag for the release (e.g. 20.07-v1-py3); see the filter sketch after this list.

  3. Download both the input dataset and the trained model from the MODEL SCRIPTS section for this container on NGC, following the steps in the Setup section.
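A quick way to confirm the image is present locally; the repository name matches the TRTIS_IMAGE variable in the sample script in Step 3:

# Show only TensorRT Inference Server images in the local repository
docker images nvcr.io/nvidia/tritonserver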

9.6.6.2. Step 1

Change to your working directory (e.g. test_vnet).
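For example, assuming the sample directory name used throughout this section:

mkdir -p test_vnet
cd test_vnet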

9.6.6.3. Step 2

Create the following directories under your working directory if they do not already exist (a sketch follows this list):

  • input containing the input image file

  • output for the segmentation output

  • publish for publishing data for the Render Server

  • logs for the log files

  • models containing models copied from the v_net folder
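If needed, the directories can be created in one shot. The copy below assumes the v_net model folder downloaded from NGC sits one level above the working directory; adjust the source path to wherever you placed it:

mkdir -p input output publish logs models
# Source path is an assumption; point it at your downloaded v_net folder
cp -r ../v_net models/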

9.6.6.4. Step 3

In your working directory, create a shell script (e.g. run_vnet.sh, or another name if you prefer), copy the sample content below into it, and save it.


#!/bin/bash

# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.

# Define the name of the app (aka operator), assumed to be the same as the project folder name
APP_NAME="app_vnet"

# Define the TensorRT Inference Server Docker image, which will be used for testing
# Use either the local repo or the NVIDIA repo
TRTIS_IMAGE="nvcr.io/nvidia/tritonserver:20.07-v1-py3"

# Launch the container with the following environment variables
# to provide runtime information.
export NVIDIA_CLARA_TRTISURI="localhost:8000"

# Define the model name for use when launching TRTIS with only the specific model
MODEL_NAME="v_net"

# Create a Docker network so that containers can communicate on this network
NETWORK_NAME="container-demo"

# Create network
docker network create ${NETWORK_NAME}

# Run TRTIS (name: trtis), mapping ./models/${MODEL_NAME} to /models/${MODEL_NAME}
# (localhost:8000 will be used)
nvidia-docker run --name trtis --network ${NETWORK_NAME} -d --rm --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 \
    -p 8000:8000 \
    -v $(pwd)/models/${MODEL_NAME}:/models/${MODEL_NAME} ${TRTIS_IMAGE} \
    tritonserver --model-repository=/models

# Wait until TRTIS is ready
trtis_local_uri=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' trtis)
echo -n "Wait until TRTIS (${trtis_local_uri}) is ready..."
while [ $(curl -s ${trtis_local_uri}:8000/api/status | grep -c SERVER_READY) -eq 0 ]; do
    sleep 1
    echo -n "."
done
echo "done"

export NVIDIA_CLARA_TRTISURI="${trtis_local_uri}:8000"

# Run ${APP_NAME} container.
# Launch the app container with the following environment variables internally,
# to provide input/output path information.
docker run --name ${APP_NAME} --network ${NETWORK_NAME} -it --rm \
    -v $(pwd)/input:/input \
    -v $(pwd)/output:/output \
    -v $(pwd)/logs:/logs \
    -v $(pwd)/publish:/publish \
    -e NVIDIA_CLARA_TRTISURI \
    -e NVIDIA_CLARA_NOSYNCLOCK=TRUE \
    -e vnet_seg_indir=/input \
    -e vnet_seg_outdir=/output \
    -e vnet_seg_roi=88,440,53,465,61,142 \
    ${APP_NAME}

echo "${APP_NAME} is done."

# Stop TRTIS container
echo "Stopping TRTIS"
docker stop trtis > /dev/null

# Remove network
docker network remove ${NETWORK_NAME} > /dev/null


9.6.6.5. Step 4

Execute the script and wait for the application container to finish:


./run_vnet.sh
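If the script has not been marked executable yet, do so first:

chmod +x run_vnet.sh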


9.6.6.6. Step 5

Check for segmentation results in the output directory.
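The exact output file name depends on the input dataset, but the result should be an MHD label volume. A minimal check:

ls -l output/
# MetaImage headers are plain ASCII; expect keys such as DimSize and ElementType
head -n 20 output/*.mhd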

© Copyright 2018-2021, NVIDIA Corporation. All rights reserved. Last updated on Feb 1, 2023.