Clara Holoscan Deploy 0.8.1 EA

4. Clara Administration

4.1.1. Clara Deploy SDK Platform

Configuration of the Clara Deploy SDK is done via the values.yaml file in the Clara Deploy SDK Helm chart, located at $HOME/.clara/charts/clara/values.yaml.

Some of the values defined in values.yaml are constants used throughout the Helm chart and are not user-configurable.

The following are user-configurable values:


platformapiserver:
  # Default timeout in seconds for pipeline-jobs (when not supplied via the pipeline definition).
  pipelineTimeoutDefault: 600
  # Maximum timeout in seconds for any pipeline-job to have.
  pipelineTimeoutMaximum: 1800
  # Number of seconds given to a timed-out pod once termination has started.
  pipelineTimeoutGrace: 15
  # Maximum storage to use on the Payload Volume. Setting near or above 85 may cause system instability.
  maxStorageUsagePercentage: 80

# Configuration for the NVIDIA TensorRT Inference Server
inferenceServerConfig:
  maxInstances: 1
  modelLimits: 8

# Configuration for the Job Logs Archive
jobLogsArchive:
  hostVolumePath: "/clara/job-logs-archive/"
  capacity: 10Gi

# Configuration for Model Analyzer's Metrics storage
metricsService:
  hostVolumePath: "/clara/repository/metrics"
  capacity: 250Mi

# Configuration for the Model Storage Service
modelService:
  hostVolumePath: "/clara/repository/models"
  capacity: 10Gi

# Configuration for the payload service in the Clara Deploy SDK server.
payloadService:
  # The path on the node running Clara Deploy SDK where payloads will be stored.
  # Every payload that is created by Clara Deploy SDK will exist as a directory inside
  # this path (e.g. "/clara/payloads/c780b2cbd26b41518e7ab4fbfb876f69").
  hostVolumePath: "/clara/payloads"
  # The disk capacity to reserve for the payload volume.
  capacity: 10Gi

# Configuration for the pod cleaner service.
podCleaner:
  # Time between cleaner runs
  podCleanerFrequencyInMinutes: 30
  # Time since creation before pods can be deleted, if succeeded or faulted
  # Above 60 minutes, because pipeline jobs are allowed to run for one hour
  podCleanerBufferInMinutes: 1
  # Pod Cleaner can be disabled by setting this flag to false.
  podCleanerEnable: true

# Configuration for the payload cleaner daemon.
# This will clean up Persistent Volumes and Claims, as well as residual data.
payloadCleaner:
  # Time between cleaner runs
  payloadCleanerFrequencyInMinutes: 10
  # Time since creation before payloads can be deleted, if succeeded or faulted
  # Above 60 minutes, because pipeline jobs are allowed to run for one hour
  payloadCleanerBufferInMinutes: 60
  # Payload Cleaner can be disabled by setting this flag to false.
  payloadCleanerEnable: true

# Configuration for the common volume that is mounted (read-only) to all pipeline
# service containers.
#
# The path that is used to mount this volume in each of the deployed pipeline
# services is provided to those service containers via the
# NVIDIA_CLARA_SERVICE_DATA_PATH environment variable. For example, a TensorRT
# Inference Service may be defined in the pipeline using the following command:
#
#   --models=$NVIDIA_CLARA_SERVICE_DATA_PATH/models
#
# to specify that the models repository used by the service will be stored
# inside the 'models' subdirectory of this common volume. In the case of the
# default hostVolumePath of "/clara/common", the path provided for the models
# repository of the TRTIS container would then correspond to the host path of
# "/clara/common/models".
#
# Since this volume is mounted read-only by services, it is expected that the
# path to this volume already exists on the host and is populated with any
# required data for the services that use it (such as models used by a TRTIS
# service). If it does not exist at deployment time, it will be created and
# will be empty.
commonServiceVolume:
  # The path on the node running Clara Deploy SDK where the common volume will
  # be stored.
  hostVolumePath: "/clara/common"
  # The disk capacity to reserve for the common volume.
  capacity: 10Gi

# Configuration for the service volume used by the Clara Deploy SDK server.
#
# Each service that is deployed by Clara Deploy SDK using a 'volume' connection
# is provided with a persistent volume to which each of the volume connections
# are mounted. For example, this volume connection:
#
#   volume:
#   - name: VOLUME_PATH
#     path: /var/www
#
# will provide a volume mount inside the service container that corresponds to
# the host path of (using the default hostVolumePath)
#
#   /clara/service-volumes/{serviceId}/var/www
#
# where {serviceId} is the unique ID generated by Clara Deploy SDK that was used
# to deploy the service.
serviceVolume:
  # The path on the node running Clara Deploy SDK where service volumes will be stored.
  # Every service that is deployed by Clara Deploy SDK will have a volume mounted in this
  # path (e.g. "/clara/service-volumes/c780b2cbd26b41518e7ab4fbfb876f69").
  hostVolumePath: "/clara/service-volumes"
  # The disk capacity to reserve for the service volume.
  capacity: 10Gi

####################################################
#     Configuration Values for Log Collection      #
####################################################
clara-log-collector:
  enabled: true
  # Use this path to set where aggregated logs will be stored
  # default: "/clara/log-archive"
  # claraLogPath:
  elasticSearch:
    enabled: false
    # When Elastic Search is enabled for log publishing these configurations must be set.
    # If Elastic Search is secured, provide the username and create a secret using
    # kubectl create secret to store the password, then provide the secret name to this configuration.
    #host: localhost
    #port: 9200
    #scheme: http
    #user: es-user
    #passwordSecret: es-password-secret
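The comments in the listing above imply a few consistency rules between these values, for example that the default pipeline timeout should not exceed the maximum, and that storage usage near or above 85% risks instability. The sketch below encodes that reading of the comments as a simple pre-deployment check; it is not an official validator, and the rules are our own interpretation.

```python
# Sketch: sanity-check a few platform settings against the constraints
# described in the values.yaml comments above. The rules encoded here are
# our reading of those comments, not part of the product.

def check_platform_values(values: dict) -> list:
    """Return a list of human-readable warnings for suspicious settings."""
    warnings = []
    api = values.get("platformapiserver", {})

    # A job's default timeout should not exceed the allowed maximum.
    if api.get("pipelineTimeoutDefault", 600) > api.get("pipelineTimeoutMaximum", 1800):
        warnings.append("pipelineTimeoutDefault exceeds pipelineTimeoutMaximum")

    # The comments warn that values near or above 85 may destabilize the system.
    if api.get("maxStorageUsagePercentage", 80) >= 85:
        warnings.append("maxStorageUsagePercentage at or above 85 may cause instability")

    return warnings

# Example: a configuration that violates both constraints.
bad = {"platformapiserver": {"pipelineTimeoutDefault": 3600,
                             "pipelineTimeoutMaximum": 1800,
                             "maxStorageUsagePercentage": 90}}
print(check_platform_values(bad))
```

Running such a check after editing values.yaml but before redeploying can catch typos early, when they are cheapest to fix.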

4.1.2. Render Server

Configuration of the Render Server is done via the values.yaml file in the Render Server Helm chart, located at $HOME/.clara/charts/clara-renderer/values.yaml.

Some of the values defined in values.yaml are constants used throughout the Helm chart and are not user-configurable.

The following are user-configurable values:


# Configuration for the render service in the Clara Deploy SDK server.
rsds:
  # The disk capacity reserved for the datasets.
  storage: 10Gi
  # The path on the node running Clara Deploy SDK where the datasets will be stored.
  # The subfolder path will have the name jobname-stagename of the triggered job.
  hostDatasetPath: "/clara-io/datasets"

4.1.3. Management Console

Configuration of the Clara Management Console is done via the values.yaml file in the Clara Console Helm chart, located at $HOME/.clara/charts/clara-ux/values.yaml.

Some of the values defined in values.yaml are constants used throughout the Helm chart and are not user-configurable.

The following are user-configurable values:


# Configuration for storage (hostPath) for MongoDB
mongodb:
  persistence:
    hostPath: /clara/ux

To remove the Clara Deploy SDK deployment:


clara dicom stop
clara render stop
clara platform stop
clara console stop

To redeploy Clara:


clara platform start
clara dicom start
clara render start
clara console start

To restart Clara:


clara platform restart
clara dicom restart
clara render restart
clara console restart

Note

Restarting the platform also requires a restart of the DICOM Adapter and the Render Server.

4.2.1. Restarting the Operating System

When restarted, a default operating-system deployment may re-enable swap automatically, causing commands like "helm ls" and "kubectl get pods" to fail. In this case, you will need to disable swap before the underlying services can be accessed:


sudo -i
swapoff -a
exit

After these commands are executed, the Clara services should start, though they may take a few minutes.
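To confirm that swap is actually off before waiting on the services, you can inspect /proc/meminfo on the host. The following is a minimal sketch for Linux hosts; it only parses the SwapTotal field and is equivalent to eyeballing the output of "free" or "swapon --show":

```python
# Sketch: report whether swap is enabled by reading /proc/meminfo (Linux).
# Kubernetes (and therefore Clara) requires swap to be disabled.

def swap_total_kb(meminfo_text: str) -> int:
    """Return the SwapTotal value (in kB) from /proc/meminfo content."""
    for line in meminfo_text.splitlines():
        if line.startswith("SwapTotal:"):
            # Line format: "SwapTotal:       2097148 kB"
            return int(line.split()[1])
    return 0

# Read the live value on a Linux host:
with open("/proc/meminfo") as f:
    total = swap_total_kb(f.read())
print("swap enabled" if total > 0 else "swap disabled")
```

If the script reports "swap enabled", re-run the swapoff sequence above before checking the pods.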

The containers expose ports so that users and services can exchange data. The main components with exposed ports are:

Component            Port(s)
-------------------  ------------
DICOM Adapter        104
Dashboard UI         8000
Render Server        2050, 8080
Management Console   32002, 32003
Clara API            30031

These ports are not configurable, so be sure that nothing else is using these ports.
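Since the ports are fixed, it can be useful to verify they are free before deployment. The sketch below (our own helper, not part of Clara) tries to bind each port locally; run it before Clara is started, since afterwards these ports are expected to be in use. Note that binding port 104 requires root privileges, so run the check as root for a meaningful result on that port.

```python
import socket

# Sketch: check which of the fixed Clara ports are already in use on this
# host, by attempting to bind each one. A port that cannot be bound is
# reported as busy. Binding ports below 1024 (e.g. 104) requires root.
CLARA_PORTS = [104, 8000, 2050, 8080, 32002, 32003, 30031]

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Try to bind the TCP port; True if the bind succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

if __name__ == "__main__":
    for port in CLARA_PORTS:
        print(f"port {port}: {'free' if port_is_free(port) else 'IN USE'}")
```

Any port reported as in use must be freed (or the conflicting service stopped) before deploying Clara.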

For AI inference applications, trained models must be made available to the NVIDIA TensorRT Inference Server. For details on which models are supported, how to configure a model, and the TRTIS model repository, please refer to the official NVIDIA documentation at the following link: https://docs.nvidia.com/deeplearning/sdk/tensorrt-inference-server-guide/docs/.

Each model’s folder must be copied to the Clara Deploy host at the path /clara/common/models.
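After copying, each subdirectory of /clara/common/models should be one model folder in TRTIS model-repository layout, which at minimum includes a config.pbtxt describing the model. The sketch below (our own helper) checks only that layout; it does not validate the contents of each config.pbtxt:

```python
from pathlib import Path

# Sketch: list the model folders under a TRTIS model repository and flag
# any that are missing a config.pbtxt. This checks directory layout only,
# not the validity of each model's configuration.

def scan_model_repository(repo: str) -> dict:
    """Map each model folder name to True if it contains a config.pbtxt."""
    return {d.name: (d / "config.pbtxt").is_file()
            for d in sorted(Path(repo).iterdir()) if d.is_dir()}

repo = Path("/clara/common/models")
if repo.is_dir():
    for name, ok in scan_model_repository(repo).items():
        print(f"{name}: {'ok' if ok else 'missing config.pbtxt'}")
```

A folder flagged as missing config.pbtxt will not be loadable by the inference server until the configuration is supplied.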

To list the deployments:


helm ls

To get the names of the pods running on the system:


kubectl get pods

Hint: The Clara pods will be the ones with clara in their name.

To view the details of a pod:


kubectl get po [POD_NAME]
kubectl describe po [POD_NAME]

where the POD_NAME is acquired from the kubectl get pods command.

To view the container logs of the containers in the pod:


kubectl logs [POD_NAME] [CONTAINER_NAME]

where the POD_NAME is acquired from the kubectl get pods command and the CONTAINER_NAME can be retrieved from the kubectl describe pods [POD_NAME] command. If the pod contains only a single container, the POD_NAME alone is sufficient and the CONTAINER_NAME can be omitted.
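When collecting logs from many pods in a script, the same invocation can be assembled programmatically. A minimal sketch (build_logs_cmd is our own helper name; it uses kubectl's -c flag, which is equivalent to naming the container positionally as above, and omits the container for single-container pods):

```python
# Sketch: assemble a `kubectl logs` invocation for a pod, optionally
# scoped to one container. For single-container pods the container
# argument can be omitted entirely.

def build_logs_cmd(pod: str, container: str = None) -> list:
    cmd = ["kubectl", "logs", pod]
    if container:
        cmd += ["-c", container]
    return cmd

print(build_logs_cmd("clara-platform-abc123"))
# Against a live cluster, run e.g.:
#   subprocess.run(build_logs_cmd(pod, container), check=True)
```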

To log into the containers:


kubectl exec -it [POD_NAME] --container [CONTAINER_NAME] -- /bin/sh

where the POD_NAME is acquired from the kubectl get pods command and CONTAINER_NAME can be retrieved from the kubectl describe pods [POD_NAME] command.

© Copyright 2018-2020, NVIDIA Corporation. All rights reserved. Last updated on Feb 1, 2023.