2. Installation

This page outlines how to install or update Clara Deploy SDK on your system.

2.1. System Requirements

As of version 0.5.0, the Clara Deploy SDK has been tested with the following system requirements:

  • Ubuntu Linux 18.04 LTS

  • We use the NVIDIA Triton Inference Server 1.5.0 (container image tag 19.08). Release 1.5.0 is based on CUDA 10.2, which requires NVIDIA driver release 440.33. However, if you are running on Tesla GPUs (Tesla V100, Tesla P4, Tesla P40, or Tesla P100), you may use NVIDIA driver release 384.111+ or 410. The support matrix for the Triton (formerly TensorRT) Inference Server is available at https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html.

    • Installing the CUDA Toolkit makes both CUDA and the NVIDIA display driver available

    • Due to AWS kernel update 5.3.0-1017-aws, nvidia-smi may fail with the following error:

      NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver

      This issue affects AWS, Azure, and GCP; it is recommended to install the NVIDIA driver using the CUDA 10.2 deb package.

  • NVIDIA GPU from the Pascal, Volta, or Turing family (Pascal or newer)

  • Kubernetes 1.15.4

  • Docker 19.03.1

  • NVIDIA Docker 2.2.0

  • Docker configured with nvidia as the default runtime (a prerequisite of the NVIDIA device plugin for Kubernetes)

  • Helm 2.15.2

  • At least 30GB of available disk space
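Before running the bootstrap script, a quick pre-flight check can confirm which of these prerequisites are already present. The snippet below is an illustrative sketch, not part of the SDK; it only checks that the relevant commands exist on the PATH and reports free disk space:

```shell
#!/bin/bash
# Pre-flight check (illustrative only): verify that the prerequisite
# commands are available and report disk usage on the root filesystem.
for cmd in docker kubectl helm nvidia-smi; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: NOT FOUND"
  fi
done
# Clara Deploy SDK needs at least 30GB of available disk space:
df -h /
```

Missing commands are installed by the bootstrap script described below; nvidia-smi is provided by the NVIDIA driver.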

We recommend using the bootstrap script to install the dependencies for Clara Deploy SDK.

The Clara installation workflow is outlined in the diagram below:

[Figure: Clara installation workflow]

2.2. Steps to Install

  1. Download the bootstrap.zip file from NGC.

    1. Log in to NGC.
    2. Select the appropriate org and team.
    3. Navigate to Model Scripts.
    4. Find and select Clara Bootstrap.
    5. Go to the File Browser section and download the latest version of the bootstrap.zip file.
    6. Unzip the bootstrap.zip file:
    # Unzip
    unzip bootstrap.zip -d bootstrap
  2. Skip this step if you are performing a new install of Clara Deploy SDK. Before upgrading a previous install of Clara Deploy SDK, run the uninstall_prereq.sh script:

    # Run the script
    sudo ./uninstall_prereq.sh
    1. Use the following command to check that all Clara pods have been terminated:
    kubectl get pods


    We do not recommend manually deleting Clara Deploy components; the uninstall_prereq.sh script will delete all necessary binaries and stop all Clara Deploy containers.
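Rather than re-running kubectl get pods by hand, you can wait for the pods to terminate with a small polling helper. This is a hypothetical convenience function, not part of the uninstall script; in practice the listing command would be kubectl get pods --no-headers:

```shell
# Hypothetical helper: poll a listing command until it prints nothing,
# e.g.  wait_for_empty "kubectl get pods --no-headers" 60
wait_for_empty() {
  local lister="$1" tries="${2:-30}" i
  for i in $(seq "$tries"); do
    # Stop as soon as the lister's output is empty.
    [ -z "$($lister 2>/dev/null)" ] && return 0
    sleep 1
  done
  return 1
}
```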

  3. Install all prerequisites required to run Clara Deploy SDK using the bootstrap.sh script:

    # Run the script
    sudo ./bootstrap.sh

    This will install the following required prerequisites:

    • Docker
    • NVIDIA Container Runtime for Docker
    • Kubernetes (including kubectl)
    • Helm


    If your system does not have NVIDIA CUDA Toolkit installed, you will be provided with a link to install it.
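To check in advance whether the CUDA Toolkit is installed, you can look for the nvcc compiler. A minimal check (the "not found" message below is our own wording, not output from the bootstrap script):

```shell
# Report the installed CUDA Toolkit version, if any.
if command -v nvcc >/dev/null 2>&1; then
  nvcc --version
else
  echo "CUDA Toolkit not found"
fi
```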

  4. Install Clara CLI.

    1. Log in to NGC.
    2. Select the appropriate org and team.
    3. Navigate to Model Scripts.
    4. Find and select Clara CLI.
    5. Go to the File Browser section and download the latest version of the cli.zip file.
    6. Extract the binaries into /usr/bin/ using the following command. You should see output similar to that shown below:
    sudo unzip cli.zip -d /usr/bin/ && sudo chmod 755 /usr/bin/clara*
    Archive:  cli.zip
    inflating: /usr/bin/clara
    inflating: /usr/bin/clara-dicom
    inflating: /usr/bin/clara-monitor
    inflating: /usr/bin/clara-platform
    inflating: /usr/bin/clara-pull
    inflating: /usr/bin/clara-render
    7. Verify that the Clara CLI has been successfully installed:
    clara version
    8. Generate the clara completion script for Bash:
    sudo bash -c 'clara completion bash > /etc/bash_completion.d/clara-completion.bash' && exec bash
  5. Configure Clara CLI to use your NGC credentials:

    clara config --key NGC_API_KEY [--orgteam NGC_ORG/NGC_TEAM] [-y|--yes]

    If --orgteam is not specified, it defaults to nvidia/clara, which points to the publicly available Clara NGC artifacts.
  6. Pull and deploy Clara Platform with the following steps:

    1. Pull the latest Clara Platform helm charts:
    clara pull platform
    2. Start the Clara Platform:
    clara platform start
  7. Pull and deploy Clara Deploy services.

    1. Pull the latest Clara Deploy services helm charts:
    clara pull dicom
    clara pull render
    clara pull monitor
    clara pull console
    2. Start the Clara Deploy services:
    clara dicom start
    clara render start
    clara monitor start
    clara console start


The console is not included in the current version of Clara Deploy, so the "console" commands will safely fail.

2.3. Verify Installation

The installation steps above automatically start the following components:

  • The Clara Platform
  • The DICOM Adapter
  • The Render Server
  • The Monitor Server

To verify that the installation is successful, run the following command:

helm ls

The following helm charts should be returned:

  • clara
  • clara-dicom-adapter
  • clara-monitor-server
  • clara-render-server
  • clara-console

To verify that the helm charts are up and running, run the following command:

kubectl get pods

The command should return pods with the following prefixes:

  • clara-clara-platformapiserver-
  • clara-dicom-adapter-
  • clara-monitor-server-fluentd-elasticsearch-
  • clara-monitor-server-grafana-
  • clara-monitor-server-monitor-server-
  • clara-render-server-clara-renderer-
  • clara-resultsservice-
  • clara-ui-
  • clara-console-
  • clara-console-mongodb-
  • clara-workflow-controller-
  • elasticsearch-master-0
  • elasticsearch-master-1

They should all be in the Running state.
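The Running-state check can also be scripted. The helper below is a sketch (not part of the SDK) that reads the output of kubectl get pods --no-headers and reports any pod whose STATUS column is not Running:

```shell
# Sketch: print pods that are not in the Running state; exit code 1 if
# any are found. Assumes the default "kubectl get pods" column layout,
# where the third column is STATUS.
check_pods() {
  awk '$3 != "Running" { print; bad = 1 } END { exit bad }'
}
```

For example: kubectl get pods --no-headers | check_pods && echo "all pods Running"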

2.4. Installation on a Cloud Service Provider (CSP)

Researchers and data scientists who don’t have access to a GPU server can still get started with Clara Deploy SDK without needing to become a Docker and Kubernetes expert. Clara Deploy SDK has been validated on the following services:

  • Amazon Web Services (AWS)
  • Microsoft Azure Cloud Services (Azure)
  • Google Cloud Platform Services (GCP)

The following subsections describe the configuration for each CSP. Once the VM is provisioned according to the documentation, you can follow the Steps to Install section above to install Clara Deploy SDK.

2.4.1. AWS Virtual Machine Configuration

The AWS VM configuration used for testing can be found below:

  • Location: US East (Ohio)
  • Operating System: Ubuntu 18.04
  • Amazon machine image: Ubuntu Server 18.04 LTS (HVM), SSD Volume Type (64-bit)
  • Instance type: p3.8xlarge (32 vCPUs, 244 GB memory, 4 NVIDIA Tesla V100 GPUs)
  • Storage: General Purpose SSD (100 GB)
  • Ports Open: SSH, HTTP, HTTPS

2.4.2. Azure Virtual Machine Configuration

The Azure VM configuration used for testing can be found below:

  • Location: West US 2
  • Operating System: Ubuntu 18.04
  • Size: Standard NC6s_v2 (6 vCPUs, 112 GB memory, 1 NVIDIA Tesla P100 GPU)
  • OS Disk Size: Premium SSD, 300 GB (mounted on root)
  • Ports Open: SSH, HTTP, HTTPS

2.4.3. GCP Virtual Machine Configuration

The GCP VM configuration used for testing can be found below:

  • Location:
    • Region: us-central1 (Iowa)
    • Zone: us-central1-c
  • Operating System: Ubuntu 18.04 LTS
  • Machine type: 8 vCPUs, 32 GB memory, 1 GPU (NVIDIA Tesla P4), display device enabled
  • Disk Size: SSD, 100 GB
  • Ports Open: SSH, HTTP, HTTPS