3. Clara Administration

3.1. Configuration

3.1.1. Clara Deploy SDK Platform

Configuration of the Clara Deploy SDK is done via the values.yaml file in its Helm chart, located at $HOME/.clara/charts/clara/values.yaml.

Some of the values defined in values.yaml are constants used throughout the Helm chart and are not user-configurable.

The following are user-configurable values:

# Configuration for the payload service in the Clara Deploy SDK server.
payloadService:

  # The path on the node running Clara Deploy SDK where payloads will be stored.
  # Every payload that is created by Clara Deploy SDK will exist as a directory inside
  # this path (e.g. "/clara/payloads/c780b2cb-d26b-4151-8e7a-b4fbfb876f69").
  hostVolumePath: "/clara/payloads"

  # The disk capacity to reserve for the payload volume.
  capacity: 10Gi

# Configuration for the common volume that is mounted (read-only) to all pipeline
# service containers.
#
# The path that is used to mount this volume in each of the deployed pipeline
# services is provided to those service containers via the
# NVIDIA_CLARA_SERVICE_DATA_PATH environment variable. For example, a TensorRT
# Inference Service may be defined in the pipeline using the following command:
#
#   --models=$NVIDIA_CLARA_SERVICE_DATA_PATH/models
#
# to specify that the models repository used by the service will be stored
# inside the 'models' subdirectory of this common volume. In the case of the
# default hostVolumePath of "/clara/common", the path provided for the models
# repository of the TRTIS container would then correspond to the host path of
# "/clara/common/models".
#
# Since this volume is mounted read-only by services, it is expected that the
# path to this volume already exists on the host and is populated with any
# required data for the services that use it (such as models used by a TRTIS
# service). If it does not exist at deployment time, it will be created and
# will be empty.
commonServiceVolume:

  # The path on the node running Clara Deploy SDK where the common volume will
  # be stored.
  hostVolumePath: "/clara/common"

  # The disk capacity to reserve for the common volume.
  capacity: 10Gi

# Configuration for the service volume used by the Clara Deploy SDK server.
#
# Each service that is deployed by Clara Deploy SDK using a 'volume' connection
# is provided with a persistent volume to which each of the volume connections
# are mounted. For example, this volume connection:
#
#   volume:
#   - name: VOLUME_PATH
#     path: /var/www
#
# will provide a volume mount inside the service container that corresponds to
# the following host path (using the default hostVolumePath):
#
#   /clara/service-volumes/{serviceId}/var/www
#
# where {serviceId} is the unique ID generated by Clara Deploy SDK that was used
# to deploy the service.
serviceVolume:

  # The path on the node running Clara Deploy SDK where service volumes will be stored.
  # Every service that is deployed by Clara Deploy SDK will have a volume mounted in this
  # path (e.g. "/clara/service-volumes/c780b2cb-d26b-4151-8e7a-b4fbfb876f69").
  hostVolumePath: "/clara/service-volumes"

  # The disk capacity to reserve for the service volume.
  capacity: 10Gi
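To make the path construction for service volumes concrete, the host path backing a volume connection can be assembled as follows. This is a sketch: the service ID and connection path are the example values from the comments above, not values from a real deployment.

```shell
# Assemble the host path that backs a service's volume connection, using the
# default hostVolumePath. The service ID is an example, not a generated one.
HOST_VOLUME_PATH="/clara/service-volumes"
SERVICE_ID="c780b2cb-d26b-4151-8e7a-b4fbfb876f69"
CONNECTION_PATH="/var/www"

MOUNT_BACKING_PATH="${HOST_VOLUME_PATH}/${SERVICE_ID}${CONNECTION_PATH}"
echo "${MOUNT_BACKING_PATH}"
# → /clara/service-volumes/c780b2cb-d26b-4151-8e7a-b4fbfb876f69/var/www
```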

3.1.2. Render Server

Configuration of the Render Server is done via the values.yaml file in the Render Server Helm chart, located at $HOME/.clara/charts/clara-renderer/values.yaml.

Some of the values defined in values.yaml are constants used throughout the Helm chart and are not user-configurable.

The following are user-configurable values:

# Configuration for the render service in the Clara Deploy SDK server.
rsds:
  # The disk capacity reserved for the datasets.
  storage: 10Gi
  # The path on the node running Clara Deploy SDK where the datasets will be stored.
  # Each dataset is stored in a subfolder named after the triggered job, in the
  # form jobname-stagename.
  hostDatasetPath: "/clara-io/datasets"
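As a sketch, the dataset location for a given job stage is then derived as shown below; the job and stage names here are hypothetical placeholders.

```shell
# Derive the dataset path for a triggered job stage under the default
# hostDatasetPath. The job and stage names are hypothetical.
HOST_DATASET_PATH="/clara-io/datasets"
JOB_NAME="liver-seg-job"   # hypothetical
STAGE_NAME="render"        # hypothetical

DATASET_PATH="${HOST_DATASET_PATH}/${JOB_NAME}-${STAGE_NAME}"
echo "${DATASET_PATH}"
# → /clara-io/datasets/liver-seg-job-render
```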

3.2. Start/Stop/Restart the Platform

To remove the deployment of the Clara Deploy SDK:

clara dicom stop
clara results stop
clara stop

To redeploy Clara:

clara start
clara results start
clara dicom start

3.3. Network Configuration

The containers expose ports so that users and external systems can communicate with the services. The main components with exposed ports are:

Component        Port
DICOM Adapter    104
Dashboard UI     8000
Render Server    8080

These ports are not configurable, so make sure that nothing else on the host is using them.
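Since the ports are fixed, it can be worth checking that they are free before deploying. A minimal sketch follows; the choice of ss is an assumption, and netstat or lsof would work equally well.

```shell
# Check whether anything is already listening on the fixed Clara ports.
# 104: DICOM Adapter, 8000: Dashboard UI, 8080: Render Server.
for port in 104 8000 8080; do
  if ss -tln 2>/dev/null | grep -q ":${port}[[:space:]]"; then
    echo "Port ${port} is already in use"
  else
    echo "Port ${port} appears free"
  fi
done
```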

3.4. AI models for TensorRT Inference Server

For AI inference applications, trained models must be made available to the NVIDIA TensorRT Inference Server (TRTIS). For details on which models are supported, how to configure a model, and how to lay out the TRTIS model repository, refer to the official NVIDIA documentation at the following link: https://docs.nvidia.com/deeplearning/sdk/tensorrt-inference-server-guide/docs/.

Each model’s folder must be copied to the Clara Deploy host under the path /clara/common/models.
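The layout expected under /clara/common/models follows the TRTIS model-repository convention: one directory per model, containing a config.pbtxt and numbered version subdirectories. The sketch below builds that structure in a temporary directory purely for illustration; the model name and file are hypothetical, and on a real Clara host the same structure would live directly under /clara/common/models.

```shell
# Build a minimal example of the TRTIS model-repository layout in a
# temporary directory (for illustration only).
repo="$(mktemp -d)"
mkdir -p "${repo}/segmentation_model/1"         # hypothetical model, version 1
: > "${repo}/segmentation_model/config.pbtxt"   # model configuration
: > "${repo}/segmentation_model/1/model.plan"   # serialized model file
find "${repo}" -mindepth 1 | sort
```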

3.5. Key Commands

To get the list of the deployments:

helm ls

To get the name of the pods running on the system:

kubectl get pods

Hint: The Clara pods are the ones with clara in their names.

To view the details of a pod:

kubectl get po [POD_NAME]
kubectl describe po [POD_NAME]

where the POD_NAME is acquired from the kubectl get pods command.

To view the container logs of the containers in the pod:

kubectl logs [POD_NAME] [CONTAINER_NAME]

where the POD_NAME is acquired from the kubectl get pods command and the CONTAINER_NAME can be retrieved from the kubectl describe pods [POD_NAME] command. If the pod contains only a single container, the CONTAINER_NAME can be omitted.
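As a shortcut, the container names can also be pulled directly with a jsonpath query instead of scanning the describe output; kubectl's -o jsonpath output format is standard, though this particular query is offered as a sketch.

```shell
# Print just the container names of a pod, space-separated.
kubectl get po [POD_NAME] -o jsonpath='{.spec.containers[*].name}'
```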

To log into the containers:

kubectl exec -it [POD_NAME] --container [CONTAINER_NAME] -- /bin/sh

where the POD_NAME is acquired from the kubectl get pods command and CONTAINER_NAME can be retrieved from the kubectl describe pods [POD_NAME] command.