2. Installation

This page outlines how to install or update Clara Deploy SDK on your system.

  • As of version 0.8.0, Clara requires the following dependencies:
    1. Operating System: Ubuntu 18.04 or 20.04
    2. NVIDIA Docker: 2.2.0
    3. Docker: docker-ce (for Ubuntu 18.04) or docker.io (for Ubuntu 20.04)
    4. Kubernetes: 1.19
    5. Helm: 3.4
  • Because Clara does not provide authentication-based security, it should be deployed in a secure environment that restricts access to Clara to approved users and services.

All of the necessary package dependencies can be installed via our Ansible installer.

  • CUDA version 11.1 or later
  • NVIDIA Driver version 450.80.02 or later
  • For a complete list of supported drivers, see the CUDA Application Compatibility topic. For more information, see CUDA Compatibility and Upgrades.
    • Installing the CUDA Toolkit makes both CUDA and the NVIDIA display driver available
  • An NVIDIA GPU from the Pascal generation or newer (Pascal, Volta, Turing, or Ampere)
  • At least 30 GB of available disk space
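
Before running the installer, you can optionally confirm the GPU model, driver version, and existing tool versions on each host. The commands below are a quick sanity check and assume the corresponding tools are already on the PATH:

nvidia-smi --query-gpu=name,driver_version --format=csv
nvcc --version        # reports the installed CUDA Toolkit version, if present
docker --version
kubectl version --client
helm version --short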

Clara is installed with a set of Ansible playbooks, which also makes it possible to deploy across multiple hosts at the same time.

2.2.1. Steps to Install

  1. Download the ansible.zip file from NGC:

    1. Log in to NGC.
    2. Select the appropriate org and team.
    3. Navigate to Resources.
    4. Find and select Clara Deploy Ansible Installation.
    5. Go to the File Browser section and download the latest version of the ansible.zip file.
    6. Unzip the ansible.zip file:

    unzip ansible.zip -d ansible


  2. Configure your Ansible host details in the file playbooks/clara_hosts. See Ansible’s inventory documentation for more information on how to work with inventory files.

    Note

    You may update /etc/ansible/hosts instead of this file; in that case, leave off the -i flag from the commands below.
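
    For reference, a minimal INI-style inventory might look like the sketch below. The group name, host names, and addresses are placeholders; use the group name(s) that the playbooks expect:

    [clara]
    clara-node-01 ansible_host=192.0.2.10 ansible_user=ubuntu
    clara-node-02 ansible_host=192.0.2.11 ansible_user=ubuntu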

  3. Install Clara Prerequisites. To install the Clara prerequisites (Docker, NVIDIA Driver, NVIDIA Docker, OpenShift Python libraries, Kubernetes, and Helm), run:


    ansible-playbook -K install-clara-prereqs.yml -i clara_hosts

  4. Install Clara Components. To install the basic Clara components, run:


    ansible-playbook -K install-clara.yml -i clara_hosts

    By default, only platform and dicom are installed; however, you can override the default clara_components variable as follows:


    ansible-playbook -K install-clara.yml -i clara_hosts -e 'clara_components=["platform", "dicom", "render"]'

    or by updating the default value of clara_components in ansible/playbooks/roles/clara-components/default/main.yml.
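
    As a reference sketch, the clara_components default in that file is a standard Ansible list variable; an entry overriding it to also include the Render Server might look like the following (the exact contents of the shipped file may differ):

    clara_components:
      - platform
      - dicom
      - render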

  5. Install Clara CLI.

    1. Log in to NGC.
    2. Navigate to Resources either in Catalog (public user) or in Private Repository (remember to select your appropriate org and team).
    3. Find and select Clara Deploy CLI.
    4. Go to the File Browser section and download the latest version of the cli.zip file.
    5. Extract the binaries into /usr/bin/ using the following command. You should see output similar to the following:

    sudo unzip cli.zip -d /usr/bin/ && sudo chmod 755 /usr/bin/clara*
    Archive:  cli.zip
      inflating: /usr/bin/clara
      inflating: /usr/bin/clara-dicom
      inflating: /usr/bin/clara-platform
      inflating: /usr/bin/clara-pull
      inflating: /usr/bin/clara-render

    6. Verify that the clara CLI has been successfully installed:

    clara version

    7. Generate the clara completion script for Bash:

    sudo bash -c 'clara completion bash > /etc/bash_completion.d/clara-completion.bash' && exec bash

  6. Configure Clara CLI to use your NGC credentials:

    If --orgteam is not specified, it defaults to 'nvidia/clara', which points to the publicly available Clara NGC artifacts.


    clara config --key NGC_API_KEY [--orgteam NGC_ORG/NGC_TEAM] [-y|--yes]

    Note

    The user must specify an --orgteam parameter if the Clara component (e.g. platform) resides in a private org/team within NGC. Failing to specify the private org/team will result in Kubernetes failing to pull the container image for the desired Clara component.
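
    For example, to configure the CLI for a private org and team (the org/team name below is a placeholder), you could run:

    clara config --key <NGC_API_KEY> --orgteam myorg/myteam --yes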

  7. Pull and deploy Clara Platform with the following steps:

    1. Pull the latest Clara Platform helm charts:

    clara pull platform

    2. Start the Clara Deploy SDK:

    clara platform start

  8. Pull and deploy Clara Deploy services.

    1. Pull the latest Clara Deploy services helm charts:

    clara pull dicom
    clara pull render
    clara pull console

    2. Start the Clara Deploy services:

    clara dicom start
    clara render start
    clara console start

Note

The console is not included in the current version of Clara Deploy, so the console commands above will fail safely.


The commands above automatically start the following components:

  • The Clara Platform
  • The DICOM Adapter
  • The Render Server

To verify that the installation is successful, run the following command:


helm ls

The following helm charts should be returned:

  • clara
  • clara-dicom-adapter
  • clara-render-server
  • clara-console

To verify that the helm charts are up and running, run the following command:


kubectl get pods

The command should return pods with the following prefixes:

  • clara-clara-platformapiserver-
  • clara-dicom-adapter-
  • clara-render-server-clara-renderer-
  • clara-resultsservice-
  • clara-ui-
  • clara-console-
  • clara-console-mongodb-
  • clara-workflow-controller-

All of them should be in the Running state.
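
To quickly spot any pod that is not yet Running, you can filter the kubectl output; an empty result means all pods are up (pods may take a few minutes to start after deployment):

kubectl get pods --no-headers | grep -v Running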

2.4. Installation on a Cloud Service Provider

Researchers and data scientists who do not have access to a GPU server can still get started with the Clara Deploy SDK without needing to become Docker and Kubernetes experts. Clara Deploy SDK has been validated on the following services:

  • Amazon Web Services (AWS)
  • Microsoft Azure Cloud Services (Azure)
  • Google Cloud Platform Services (GCP)

The following subsections describe the configuration for each CSP. Once the VM is provisioned according to the documentation, you can follow the Steps to Install section above to install Clara Deploy SDK.

2.4.1. AWS Virtual Machine Configuration

The AWS VM configuration used for testing can be found below:

  • Location : US East (Ohio)
  • Operating System : Ubuntu 18.04
  • Amazon machine image : Ubuntu Server 18.04 LTS (HVM), SSD Volume Type (64-bit)
  • Instance type : p3.8xlarge (32 vCPUs, 244 GB memory, 4 NVIDIA V100 GPUs)
  • Storage: General Purpose SSD (100 GB)
  • Ports Open : SSH, HTTP, HTTPS

2.4.2. Azure Virtual Machine Configuration

The Azure VM configuration used for testing can be found below:

  • Location : West US2
  • Operating System: Ubuntu 18.04
  • Size : Standard NC6s_v2 (6 vCPUs, 112 GB memory, 1 NVIDIA Tesla P100 GPU)
  • OS Disk Size : Premium SSD, 300GB (mounted on root)
  • Ports Open : SSH, HTTP, HTTPS

2.4.3. GCP Virtual Machine Configuration

The GCP VM configuration used for testing can be found below:

  • Location :
    • Region: us-central1 (Iowa)
    • Zone: us-central1-c
  • Operating System : Ubuntu 18.04 LTS
  • Machine type: 8 vCPUs, 32 GB memory, 1 NVIDIA Tesla P4 GPU, with the display device turned on
  • Disk Size: SSD 100GB
  • Ports Open : SSH, HTTP, HTTPS

2.4.4. ESXi Installation

To run on an ESXi host machine, ensure the following settings are enabled to allow PCIe passthrough:

  • In the BIOS settings on the ESXi host, enable SR-IOV and Intel Virtualization Technology for Directed I/O (VT-d) or AMD I/O Virtualization Technology (IOMMU)
  • In the ESXi console, when creating a new VM:
    • Under Customize Settings > Virtual Hardware, after selecting the desired amount of system memory, click the toggle to view all memory-related settings
      • Click the checkbox for Reserve all guest memory (All locked)
    • Under Customize Settings > VM Options, select Advanced, and then click Edit configuration.
    • Select EFI under boot options
    • To enable GPU Passthrough, while still under the Configuration Parameters of the VM Options tab, add the following two key-value pairs:
      • key: pciPassthru.64bitMMIOSizeGB, value: "128"
      • key: pciPassthru.use64bitMMIO, value: "TRUE"
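
If you edit the VM's .vmx configuration file directly rather than using the ESXi console, the same two entries appear as the following lines (shown here as a reference sketch):

pciPassthru.use64bitMMIO = "TRUE"
pciPassthru.64bitMMIOSizeGB = "128"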

2.5. Installation Behind a Firewall

This section describes how to install Clara on a system protected by a firewall. It assumes that there are two systems: a staging system with internet access, and a production system guarded by a firewall. It also assumes that Docker is installed on the staging system. Follow the steps below to make the container images needed by the Clara Platform available on the production system.

2.5.1. Log in to the staging system


ssh <staging-system-username>@<staging-system-ip>


2.5.2. Log in to the DGX Container Registry


$ docker login nvcr.io
Username: $oauthtoken
Password: apikey

Type “$oauthtoken” exactly as shown for the Username. This is a special username that enables API key authentication. In place of apikey, paste in the API Key text that you obtained from the DGX website.

2.5.3. Download container images on staging system


docker pull nvcr.io/nvidia/clara/platformapiserver:<clara-version>
docker pull nvcr.io/nvidia/clara/model-sync-daemon:<clara-version>
docker pull nvcr.io/nvidia/clara/podmanager:<clara-version>
docker pull nvcr.io/nvidia/clara/resultsservice:<clara-version>
docker pull nvcr.io/nvidia/clara/nodemonitor:<clara-version>
docker pull nvcr.io/nvidia/tensorrtserver:20.02-py3
docker pull argoproj/argoui:v2.2.1
docker pull argoproj/workflow-controller:v2.2.1
docker pull fluent/fluentd-kubernetes-daemonset:v1.11.0-debian-elasticsearch7-1.0
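
To avoid repeating the Clara version by hand, the same pulls can be scripted. This is an optional convenience sketch that assumes CLARA_VERSION is set to your Clara release tag and that the image list matches the one above:

export CLARA_VERSION=<clara-version>
for image in \
    nvcr.io/nvidia/clara/platformapiserver:${CLARA_VERSION} \
    nvcr.io/nvidia/clara/model-sync-daemon:${CLARA_VERSION} \
    nvcr.io/nvidia/clara/podmanager:${CLARA_VERSION} \
    nvcr.io/nvidia/clara/resultsservice:${CLARA_VERSION} \
    nvcr.io/nvidia/clara/nodemonitor:${CLARA_VERSION} \
    nvcr.io/nvidia/tensorrtserver:20.02-py3 \
    argoproj/argoui:v2.2.1 \
    argoproj/workflow-controller:v2.2.1 \
    fluent/fluentd-kubernetes-daemonset:v1.11.0-debian-elasticsearch7-1.0; do
  docker pull "${image}"
done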


2.5.4. Generate a tar.gz for container images on staging system


docker image save -o clara-images.tar \
    nvcr.io/nvidia/clara/platformapiserver:<clara-version> \
    nvcr.io/nvidia/clara/model-sync-daemon:<clara-version> \
    nvcr.io/nvidia/clara/podmanager:<clara-version> \
    nvcr.io/nvidia/clara/resultsservice:<clara-version> \
    nvcr.io/nvidia/clara/nodemonitor:<clara-version> \
    nvcr.io/nvidia/tensorrtserver:20.02-py3 \
    argoproj/argoui:v2.2.1 \
    argoproj/workflow-controller:v2.2.1 \
    fluent/fluentd-kubernetes-daemonset:v1.11.0-debian-elasticsearch7-1.0
gzip clara-images.tar

In addition to the images listed above, you can also save any other images in use by Clara. These images generally have the following prefixes:


nvcr.io
ngc.nvidia.com
nvidia.github.io

You can get a list of any such images using the docker images command. For example:


docker images | grep "nvcr.io."
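
If you want the matching images as repository:tag names that can be passed directly to docker image save, you can use docker's --format option, for example:

docker images --format '{{.Repository}}:{{.Tag}}' | grep '^nvcr.io'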


2.5.5. Log in to the production system


ssh <production-system-username>@<production-system-ip>


2.5.6. Transfer Images to production system


scp <staging-system-username>@<staging-system-ip>:/<path-to-images-archive>/clara-images.tar.gz .


2.5.7. Load Images on production system


docker load -i clara-images.tar.gz

After this step, the minimum set of images needed to run Clara is available on the production system, and the Clara Platform can now be installed there.
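
You can confirm that the Clara images were imported on the production system, for example:

docker images | grep "nvcr.io/nvidia/clara"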
