Clara Holoscan Deploy 0.7.4

Getting Started

The Clara Deploy SDK has the following system requirements:

  • Ubuntu Linux 16.04 or 18.04

  • NVIDIA CUDA Driver 410.48 or higher

    • Installing the CUDA Toolkit makes both CUDA and the NVIDIA display driver available

  • NVIDIA GPU of Pascal architecture or newer

  • Kubernetes

  • Docker

  • NVIDIA Docker

  • Helm

  • More than 30GB of available disk space
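
Before installing, you can sanity-check these requirements from a terminal (a minimal sketch, assuming the standard CLIs are on your PATH; on a fresh system, some of these are only present after install-prereqs.sh below has run):

nvidia-smi | head -n 3                          # driver version should be 410.48 or higher
docker --version                                # present after prerequisites are installed
kubectl version --short                         # present after prerequisites are installed
helm version --short                            # present after prerequisites are installed
df -h / | awk 'NR==2 {print $4 " available"}'   # confirm more than 30GB free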

The Clara Deploy SDK package contains the following directories and files:

  • clara-platform/ : Helm chart deployment configuration for the Clara Deploy SDK

    • Chart.yaml : Basic configuration for the Helm chart

    • values.yaml : Declares variables passed into the templates

    • files/ : Configuration files for components in the Clara Deploy SDK

      • dicom-server-config.yaml : Configuration for the DICOM Adapter

    • templates/ : Templates that create valid Kubernetes manifest files

      • deployment.yaml : The template for the deployment of the Clara Deploy SDK

      • config-map.yaml : The template for the configurable portion of the DICOM Adapter

      • _helpers.tpl : Template helpers, such as default fully qualified application names

      • NOTES.txt : Usage notes displayed in the console after successful deployment

      • server-account.yaml : Sets up a service account for cluster administration

      • service.yaml : A manifest for creating a service endpoint for the deployment

      • volume-claim.yaml : Specifies the persistent volumes to be used

      • volume.yaml : Specifies volumes necessary for deployment of the Clara Deploy SDK

  • clara-reference-app/ : Example Applications. For details, see the Application Development Guide Section.

  • clara-reference-workflow/ : Example Workflows. For details, see the Workflow Development Guide Section.

  • gems/ : Docker images needed for the Clara Deploy SDK

    • clara-ai-livertumor-0.1.9.tar : AI liver tumor Docker Image

    • clara-ai-vnet-0.1.9.tar : AI v-net Docker Image

    • clara-clara-dashboard-0.1.9.tar : Clara Dashboard Docker Image

    • clara-core-0.1.9.tar : Clara Core Docker Image

    • clara-dicom-reader-0.1.9.tar : DICOM Reader Docker Image

    • clara-dicomserver-0.1.9.tar : DICOM Adapter Docker Image

    • clara-dicom-writer-0.1.9.tar : DICOM Writer Docker Image

    • clara-renderserver_linux-x86-0.1.9.tar : RenderServer Docker Image

    • clara-trtis-0.1.9.tar : NVIDIA TRTIS Docker Image

  • html : Clara Deploy SDK Manual in HTML format

  • scripts/ : Deployment scripts and configuration files needed for deployment

    • azure.yaml : Configuration specific to Azure deployment

    • clara-helm-config.yaml : Configuration file for settings to overwrite for deployment

    • deploy.sh : Script that runs deployment to the local server

    • deployazure.sh : Script that runs deployment to Azure

    • helm-rbac.yaml : Role-based access control configuration for the Helm deployment

    • install-prereqs.sh : Script to install all prerequisites needed to run the Clara Deploy SDK

    • kube-flannel.yml : Contains the flannel configuration data for Kubernetes

    • kube-flannel-rbac.yml : Role-based access control configuration between flannel and Kubernetes

    • nvidia-device-plugin.yml : Configuration for allocating GPUs to nodes in Kubernetes

    • rbac-config.yaml : Role-based access control configuration file for deployment

    • uninstall-prereqs.sh : Script to uninstall all the prerequisites needed to run the Clara Deploy SDK

  • test-data/ : Test Datasets

    • ct-abdominal-study-1 : Contains the default input data for CT abdominal scan

    • ct-wb-viz : Contains the default dataset for the RenderServer

    • image-quality-testdata : Contains all the test datasets for validation

    • models : Models used by the workflows

  • README.pdf : Clara Deploy SDK Manual

  1. To install the required prerequisites (Kubernetes, Docker, NVIDIA Container Runtime for Docker, and Helm), run install-prereqs.sh with the following commands:

    cd scripts
    sudo ./install-prereqs.sh

  2. Restart the system.

    Note

    If your system does not have NVIDIA CUDA Toolkit installed, you are provided with a link to install it.

  3. Deploy the Clara Deploy SDK with the following command. The output should be similar to what is shown below.

    $ sudo ./scripts/deploy.sh
    2019-02-28 16:12:35 [INFO]: Installing Clara Deploy SDK v0.1.9...
    2019-02-28 16:12:35 [INFO]: Loading Clara Containers ...
    2019-02-28 16:14:37 [INFO]: Clara Containers loaded successfully
    2019-02-28 16:14:37 [INFO]: Starting Clara Core and DICOM Adapter...
    ...
    2019-02-28 16:14:38 [INFO]: Clara Deploy SDK v0.1.9 installed and started successfully.

    Note

    Obtain the application URL with the following commands:

    kubectl get svc --namespace default [POD_NAME]
    echo http://SERVICE_IP:104

    It may take a few minutes for the LoadBalancer IP to become available. You can monitor its status with the following command (a sketch that captures the IP into a shell variable follows it):

    kubectl get svc -w [POD_NAME]
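
    Once the LoadBalancer IP is assigned, one way to capture it into a shell variable (a minimal sketch, assuming a LoadBalancer-type service named as above):

    SERVICE_IP=$(kubectl get svc --namespace default [POD_NAME] -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    echo http://${SERVICE_IP}:104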

At this point, Clara is ready to support running a workflow. Please see the Workflow Development Guide Section for details on creating and running a workflow.

Helpful Commands

To get the name of the pods running on the system:

$ sudo kubectl get pods

The Clara pod will be the one with clara-platform in the name.
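
One way to capture that pod name into a shell variable for the commands below (a sketch; adjust the grep pattern if your Helm release name differs):

POD_NAME=$(sudo kubectl get pods | grep clara-platform | awk '{print $1}')
echo $POD_NAME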

To view the status of the service:

$ sudo kubectl get svc -w [POD_NAME]

where the POD_NAME is acquired from the final output of the deploy.sh script or the get pods command.

To view the pod status and details:

$ sudo kubectl get po [POD_NAME]
$ sudo kubectl describe po [POD_NAME]

where the POD_NAME is acquired from the final output of the deploy.sh script or the get pods command.

To view the container logs of the containers in the pod:

$ sudo kubectl logs [POD_NAME] dicom-server
$ sudo kubectl logs [POD_NAME] clara-core

where the POD_NAME is acquired from the final output of the deploy.sh script or the get pods command. Also, note that it is not unexpected to get no logs from the clara-core container, since it pushes its output to the dicom-server container.
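
To stream a container's log continuously while a job runs (a sketch using kubectl's standard -f flag):

$ sudo kubectl logs -f [POD_NAME] dicom-server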

To copy data to the containers:

$ sudo kubectl cp <path_to_local> [POD_NAME]:<path_in_container> -c dicom-server
$ sudo kubectl cp <path_to_local> [POD_NAME]:<path_in_container> -c clara-core

where the POD_NAME is acquired from the final output of the deploy.sh script or the get pods command.

To log into the containers:

$ sudo kubectl exec -it [POD_NAME] --container dicom-server -- /bin/sh
$ sudo kubectl exec -it [POD_NAME] --container clara-core -- /bin/sh

where the POD_NAME is acquired from the final output of the deploy.sh script or the get pods command.

To get the name of the deployment:

helm ls

To remove the deployment of the Clara Deploy SDK:

helm delete [HELM_DEPLOYMENT]

where HELM_DEPLOYMENT is everything in the POD_NAME up to, but not including, -clara-platform.
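
For example, one way to derive it from POD_NAME in the shell (a sketch using parameter expansion):

HELM_DEPLOYMENT=${POD_NAME%%-clara-platform*}
helm delete $HELM_DEPLOYMENT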

To redeploy Clara:

$ sudo helm install ./clara-platform -f ./scripts/clara-helm-config.yaml


To Upgrade/Restart Clara

Use the following steps to upgrade and restart Clara.

  1. Stop Clara with the following command:

    sudo helm delete clara --purge

  2. Monitor the pods running on the system with the following command until it indicates that Clara has terminated:

    watch -n 2 kubectl get pods

    The watch command runs kubectl get pods every 2 seconds. After Clara has terminated, exit the command by typing CTRL-C. (A scripted alternative to this wait appears after these steps.)

  3. To redeploy Clara, change to the sdk/deploy/scripts/ directory and run the following script:

    ./deploy.sh

  4. Run the following command from the clara directory:

    sudo helm install ./clara-platform -f ./scripts/clara-helm-config.yaml -n clara
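
As a scripted alternative to the manual watch in step 2, you can wait for termination non-interactively (a sketch; it assumes the Clara pod names contain clara-platform):

while sudo kubectl get pods 2>/dev/null | grep -q clara-platform; do sleep 2; done
echo "Clara pods terminated; safe to redeploy."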

Common Issues & Workarounds

  1. If issues happen with the deployment, the expected requirements may not be met. The first thing to try is the uninstall-prereqs.sh script followed by the install-prereqs.sh script; this can fix many configuration issues.

$ sudo ./scripts/uninstall-prereqs.sh
$ sudo ./scripts/install-prereqs.sh

Warning

IMPORTANT: uninstall-prereqs.sh removes Kubernetes, Docker, NVIDIA-Docker and all files and directories under /clara-io.

  2. CUDA Issues: If issues happen when updating the CUDA driver to the latest version, it may be necessary to completely uninstall the current version of the driver:

$ sudo rm /etc/apt/sources.list.d/cuda*
$ sudo apt remove nvidia-cuda-toolkit
$ sudo apt remove nvidia-*
$ sudo rm /etc/apt/preferences.d/nvidia
$ sudo apt update
$ sudo apt upgrade
$ sudo apt-cache policy cuda

After cleaning up the current driver, reboot the server and attempt to install the latest driver again.

Researchers and data scientists who might not have access to a powerful GPU server can still get started easily with the Clara Deploy SDK without needing to become Docker and Kubernetes experts. The tested environment on Azure is a VM that meets the hardware specification below.

Azure Test Hardware Configuration

We have tested Clara on the following Azure VM configuration.

  • Location : West US2

  • Operating System: Ubuntu 18.04

  • Size : Standard NC6s_v2 (6 vCPUs, 112 GB memory, 1 NVIDIA Tesla P100 GPU)

  • OS Disk Size : Premium SSD, 30GB (mounted on root)

  • Temporary storage : 736GB (mounted on /mnt)

  • Ports Open : SSH, HTTP, HTTPS

Azure VM Configuration

If Clara is being deployed on an Azure VM, set up a DICOM listener in another terminal on the VM:

sudo apt-get install -y dcmtk
sudo storescp -v -aet ORTHANC 1004

For the Azure VM, configure the settings of the DICOM Adapter components by editing the file /mnt/clara-sdk/clara-platform/files/dicom-server-config.yaml and replacing all occurrences of ORTHANC’s host-ip: 10.110.46.25 with host-ip: <Private IP address of the Azure VM>. The private IP address of the Azure VM can be retrieved with the following command (a scripted version of the replacement follows it):

sudo kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="InternalIP")].address }'
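
One way to script the replacement described above (a sketch; it assumes the config file path given in this section and that the node's InternalIP is the VM's private IP):

VM_IP=$(sudo kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="InternalIP")].address }')
sudo sed -i "s/10.110.46.25/${VM_IP}/g" /mnt/clara-sdk/clara-platform/files/dicom-server-config.yaml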

At this point, deploying on the VM is the same as the typical deployment steps. Please see the Deployment Section for further details.

Common Issues & Workarounds

Disk Space Too Low to Hold Clara Container Images

Due to Kubelet’s garbage collection feature, Kubelet (the node agent that runs on each node of a Kubernetes cluster) performs garbage collection for containers every minute and for images every five minutes. Once disk usage exceeds the high threshold (default: 85%), Kubelet removes images until usage falls to the low threshold (default: 80%).
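
To see how close the VM is to those thresholds, check current disk usage; the thresholds themselves correspond to standard kubelet flags (defaults shown; verify against your kubelet version):

df -h /
# Kubelet image garbage-collection flags (defaults):
#   --image-gc-high-threshold=85
#   --image-gc-low-threshold=80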

Note

Make sure that disk usage on the VM stays below 85% so that images required by the Clara Deploy SDK are not deleted locally.

If disk space is insufficient to hold Docker images, you may see error messages such as pull access denied for clara/XXXX, repository does not exist or may require 'docker login'.

You can change the default Docker directory to another location with sufficient space.

# Change the path of the default Docker directory in the docker.service file.
sudo sed -i 's#ExecStart=/usr/bin/dockerd -H fd://#ExecStart=/usr/bin/dockerd -g /{new-path}/docker -H fd://#g' /lib/systemd/system/docker.service
# Stop the Docker daemon.
sudo systemctl stop docker
# You can confirm that Docker has stopped if you see blank output.
ps aux | grep -i docker | grep -v grep
# Reload the daemon configuration.
sudo systemctl daemon-reload
# Change permissions of the {new-path} directory.
sudo chmod 777 /{new-path}
# Create a new Docker directory.
sudo mkdir /{new-path}/docker
# Move the contents of the default Docker directory to /{new-path}/docker.
sudo rsync -aqxP /var/lib/docker/ /{new-path}/docker
# Start the Docker service.
sudo systemctl start docker
# Verify that Docker is running on /{new-path}/docker.
ps aux | grep -i docker | grep -v grep
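
To confirm that Docker is now using the relocated directory (a quick check; the output should point at /{new-path}/docker):

sudo docker info | grep "Docker Root Dir"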

Then redeploy Clara, reloading the Docker images (see the To Upgrade/Restart Clara section above).

Researchers and data scientists who might not have access to a powerful GPU server can still get started easily with the Clara Deploy SDK without needing to become Docker and Kubernetes experts. The tested environment on GCP is a VM that meets the hardware specification below.

GCP Test Hardware Configuration

We have tested Clara on the following GCP VM configuration.

  • Location :

    • Region: us-central1 (Iowa)

    • Zone: us-central1-c

  • Operating System : Ubuntu 18.04 LTS

  • Machine type: 8vCPU, 32GB, 1 GPU (NVIDIA Tesla P4), Turn on display device

  • Disk Size: SSD 100GB

  • Ports Open : SSH, HTTP, HTTPS

GCP VM Configuration

If Clara is being deployed on a GCP VM, set up a DICOM listener in another terminal on the VM:

sudo apt-get install -y dcmtk
sudo storescp -v -aet ORTHANC 1004

For the GCP VM, configure the settings of the DICOM Adapter components by editing the file {clara-sdk-path}/clara-platform/files/dicom-server-config.yaml and replacing all occurrences of ORTHANC’s host-ip: 10.110.46.25 with host-ip: <Private IP address of the GCP VM>. The private IP address of the GCP VM can be retrieved with the following command:

sudo kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="InternalIP")].address }'

At this point, deploying on the VM is the same as the typical deployment steps. Please see the Deployment Section for further details.

Researchers and data scientists who might not have access to a powerful GPU server can still get started easily with the Clara Deploy SDK without needing to become Docker and Kubernetes experts. The tested environment on AWS is a VM that meets the hardware specification below.

AWS Test Hardware Configuration

We have tested Clara on the following AWS VM configuration.

  • Location : US East (Ohio)

  • Operating System : Ubuntu 18.04

  • Amazon machine image : Ubuntu Server 18.04 LTS (HVM), SSD Volume Type (64-bit)

  • Instance type : g3.4xlarge (16 vcpus, 122 GB memory, NVIDIA Tesla M60 GPU)

  • Storage: General Purpose SSD (100 GB)

  • Ports Open : SSH, HTTP, HTTPS

AWS VM Configuration

If Clara is being deployed on an AWS VM, set up a DICOM listener in another terminal on the VM:

sudo apt-get install -y dcmtk
sudo storescp -v -aet ORTHANC 1004

For the AWS VM, configure the settings of the DICOM Adapter components by editing the file {clara-sdk-path}/clara-platform/files/dicom-server-config.yaml and replacing all occurrences of ORTHANC’s host-ip: 10.110.46.25 with host-ip: <Private IP address of the AWS VM>. The private IP address of the AWS VM can be retrieved with the following command:

sudo kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="InternalIP")].address }'

At this point, deploying on the VM is the same as the typical deployment steps. Please see the Deployment Section for further details.

Once you have deployed the Clara Deploy SDK, use the procedures in this section to run the reference workflows as a demonstration and verification of correct deployment.

  • Make sure TensorRT Inference Server (TRTIS) is running correctly

  • Test the Render Server with provided data

  • Set up the demonstration, using command-line tools or Orthanc and OHIF Viewer

To Ensure TRTIS Is Running Correctly

Use the following steps to ensure that TRTIS is running correctly.

  1. Obtain the TRTIS IP address (the cluster IP address of clara-clara-platform) with the following command:

    kubectl get services clara-clara-platform

  2. Check the status of TRTIS with the following command (a combined sketch follows these steps):

    curl http://<TRTIS_IP_address>:8000/api/status
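
The two steps can be combined (a sketch; it assumes the service publishes its cluster IP in .spec.clusterIP):

TRTIS_IP=$(kubectl get services clara-clara-platform -o jsonpath='{.spec.clusterIP}')
curl http://${TRTIS_IP}:8000/api/status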

To Test the Render Server with Provided Test Data

Use the following steps to test the Render Server.

  1. Copy the test data with the following command:

    sudo cp -r sdk/deploy/test-data/ct-wb-viz /clara-io/datasets/ExposureRenderer

  2. Connect to the web application at http://localhost:8080.

  3. On the Render Server tab (Clara, Render Server), click the ct_wb_viz folder in the data column.

  4. Set the selected preset to default_tf_wb.json.

  5. Manipulate the data:

    1. Position the cursor on the rendering space and roll the mouse wheel backward to zoom in on the rendered volume.

    2. Position the cursor on the rendering space and roll the mouse wheel forward to zoom out from the rendered volume.

    3. Left-click and drag to the left to rotate the rendered volume clockwise.

    4. Left-click and drag to the right to rotate the rendered volume counter-clockwise.

  6. Show one or more labels of the body:

    1. Select the Volume tab.

    2. Select the Visibility tab.

  7. Uncheck some of the labels and observe the changes that result.

To Set Up a Full Demonstration

  1. Copy the reference models to the Clara working directory with the following command:

    sudo cp -r <full path>/clara/sdk/deploy/test-data/models/* /clara-io/models/

  2. Publish the reference workflows with the following commands:

    cd clara-reference-workflow
    sudo ./clara-wf publish_chart 1db65f99-c9b7-4329-ab9c-d519e0557638 "CT Organ seg" /clara-io/clara-core/workflows/
    sudo ./clara-wf publish fd3ee8bf-b9f3-4808-bd60-243f870ff9bd "LiverSeg" /clara-io/clara-core/workflows/

  3. Change the IP address for ORTHANC in clara-platform/files/dicom-server-config.yaml by setting host-ip to your current machine's IP address in the <orthanc_ip> locations shown below:

    dicom:
      scp:
        port: 104
        ae-titles:
          - ae-title: OrganSeg
            processors:
              - "Nvidia.Clara.Dicom.Processors.JobProcessor, Nvidia.Clara.DicomAdapter"
          - ae-title: LiverSeg
            processors:
              - "Nvidia.Clara.Dicom.Processors.JobProcessor, Nvidia.Clara.DicomAdapter"
        max-associations: 2
        verification:
          enabled: true
          transfer-syntaxes:
            - "1.2.840.10008.1.2"   # Implicit VR Little Endian
            - "1.2.840.10008.1.2.1" # Explicit VR Little Endian
            - "1.2.840.10008.1.2.2" # Explicit VR Big Endian
        log-dimse-datasets: false
        reject-unknown-sources: true
        sources:
          - host-ip: <orthanc_ip>
            ae-title: ORTHANC
      scu:
        ae-title: ClaraSCU
        max-associations: 2
        destinations:
          - name: MYPACS
            host-ip: <orthanc_ip>
            port: 1004
            ae-title: ORTHANC

  4. Change the source ae-title to ORTHANC.

  5. Change the port to 4242.

To Restart Clara

Use the following steps to restart Clara.

  1. Stop Clara with the following command:

    sudo helm delete clara --purge

  2. Monitor the pods running on the system with the following command until it indicates that Clara has terminated:

    watch -n 2 kubectl get pods

    The watch command runs kubectl get pods every 2 seconds. After Clara has terminated, exit the command by typing CTRL-C.

  3. Run the following command from the clara directory:

    sudo helm install ./clara-platform -f ./scripts/clara-helm-config.yaml -n clara

To Run the Demonstration

Use the procedures in this section to send and receive DICOM data, either via the command line or with OHIF viewer and Orthanc.

To Run the Demonstration with Command Line Tools

  1. Install the dcmtk package with the following command:

    sudo apt-get install dcmtk

  2. Open a new terminal window where you can run a process to receive DICOM data.

  3. In the new terminal window, create a DICOM destination directory and change to the new directory with the following commands:

    mkdir <DICOM destination folder>
    cd <DICOM destination folder>

  4. Set up a DICOM listener with AE title ORTHANC on port 4242 with the following command:

    sudo storescp -v --fork -aet ORTHANC 4242

  5. Send DICOM data to trigger one of the workflows with one of the following commands:

    storescu -v +sd +r -xb -aet "DCM4CHEE" -aec "OrganSeg" <ip_address> 104 <folder_with_your_DICOM_images>/DICOM_anon/
    storescu -v +sd +r -xb -aet "DCM4CHEE" -aec "LiverSeg" <ip_address> 104 <folder_with_your_DICOM_images>/DICOM_anon/

    Where <ip_address> is the IP address of your machine.

  6. Inspect the output in <DICOM destination folder>.

  7. Before running the other workflow, delete any data in <DICOM destination folder>.

  8. Send DICOM data to trigger the other workflow with the other command in step 5.

To Run the Demonstration with Orthanc and OHIF Viewer

  1. Install and run Orthanc in a Docker container.

  2. Print a JSON configuration with the following command:

    docker run --rm --entrypoint=cat jodogne/orthanc /etc/orthanc/orthanc.json > <yourLocalFolder4orthanc>/orthanc.json

  3. Edit orthanc.json to add the two lines below to the DicomModalities section, after the commented clearcanvas example:

    // "clearcanvas" : [ "CLEARCANVAS", "192.168.1.1", 104, "ClearCanvas" ],
    "clara-liver" : [ "LiverSeg", "yourIPaddress", 104 ],
    "clara-ctseg" : [ "OrganSeg", "yourIPaddress", 104 ]

  4. Start Orthanc with the following command:

    docker run -p 4242:4242 -p 8042:8042 --rm --name orthanc -v <yourLocalFolder4orthanc>/orthanc.json:/etc/orthanc/orthanc.json -v <yourLocalFolder4orthanc>/orthanc-db:/var/lib/orthanc/db jodogne/orthanc-plugins /etc/orthanc --verbose

  5. Load http://localhost:8042 in a web browser.

  6. Upload a couple of DICOM studies of abdomen CT scans.

  7. In the patient study series, select Send to DICOM modality, then select clara-ctseg or clara-liver.

The commands above are from http://book.orthanc-server.com/users/docker.html. See that site for more information.

Clara Deploy SDK includes a set of medical images that can be used for testing purposes. These medical images can be found in the test-data directory.

Several containers are bundled together to create the Clara Deploy SDK.

AI

The AI container provides multi-organ segmentation on abdominal CT with dense v-networks (https://github.com/NifTK/NiftyNetModelZoo/blob/master/dense_vnet_abdominal_ct_model_zoo.md). This container requires TRTIS (https://ngc.nvidia.com/catalog/containers/nvidia%2Ftensorrtserver). Details about the implementation can be found in the following table:

Host                  Container   Note
5002                  5002        Port
/nv-clara/ai/input/   /ai/in/

Core

The Clara Core container manages the workflow and spins up the containers the workflow needs.

DICOM Reader

The DICOM Reader container is the first container in a workflow. It reads DICOM series from the DICOM Adapter and outputs MHD files, one for each series.

Host                  Container              Note
50051                 50051                  Port
/dicom-reader/input   /dicom-reader/output

DICOM Adapter

The DICOM Adapter container handles the DICOM input, which typically comes from a PACS server.

DICOM Writer

The DICOM Writer container is the last container in a workflow. It reads MHD files from the previous container(s) and outputs DICOM files; one or more DICOM files are created for each MHD file.

Host                  Container              Note
50051                 50051                  Port
/dicom-writer/input   /dicom-writer/output
/dicom-reader/input

TRTIS (TensorRT Inference Server)

The TRTIS container is required by the AI container; it runs an inference server.

Host             Container        Note
8000/8001/8002   8000/8001/8002   Ports
/models                           Prebuilt model folder (dense v-net)

New Features 0.1.7

  • The Render Server is now enabled in the dashboard.

New Features 0.1.6

  • The installation of pre-requisites no longer deletes the current Docker configuration.

  • Deployment of Clara Deploy SDK via Helm Charts and Kubernetes.

  • WorkFlow Client API provides integration for containers that need to be part of a workflow. The WorkFlow Client API supports:

    • Publish Study

    • Send to TRTIS

  • DICOM Adapter provides an integration point for a service such as a PACS server. It reads and writes DICOM data and pushes it to Clara Core to get a workflow started.

  • Clara Core provides handling of the workflow. Clara Core supports running a single workflow at a time; new workflows require a new deployment. Clara Core supports:

    • TRTIS as a service

  • A reference application is available to describe how to run in Clara Deploy SDK.

Deprecated

  • We no longer support running workflows with docker-compose.

  • We no longer support the Inference Client API.

The following are known issues in this release of the Clara Deploy SDK:

Core

  • One Workflow at a Time

    • Clara Core supports running one workflow at a time.

    • Workaround: Perform a separate deployment for each workflow.

Clara Dashboard

  • The displayed job ID might differ from the associated payload folder on the server.

AI

  • Currently only abdominal segmentation and liver segmentation are supported.

    • In cases where the reconstructed volume covers more regions than just the abdomen, settings can be used to specify the location of the abdomen. See the readme file for settings descriptions.

    • Cases that contain regions smaller than abdomen are not tested.

    • Cases that are not an abdominal scan may contain incorrect or incomplete results. The current model is trained for abdominal scans.

  • The AI model in the container is not trained across a wide range of patient scans and may produce incorrect organ labels.

  • The input volume to the AI container is first downsampled to a fixed size, and labels are generated on the downsampled volume. A nearest-neighbor approach is used to upsample the segmented mask to match the original reconstructed volume dimensions, which produces pronounced staircase artifacts in the segmented masks.

  • The AI segmentation container outputs masks in MetaHeader (MHD) format only.

We are interested in your feedback and any bugs you may find while using the Clara Deploy SDK.


© Copyright 2018-2019, NVIDIA Corporation. All rights reserved. Last updated on Feb 1, 2023.