Clara Pipeline Generation Guide (internal document; not kept up to date)

Execute the Clara reference pipeline locally with the following command:


cd `git rev-parse --show-toplevel`/deploy/clara-reference-workflow
./clara-wf test    # default: ai-only test

Execute other pipelines locally with the following commands:


./clara-wf test 0fcdcdea-89c6-456f-b20a-bc7379c54ef2    # ct-ai, not verified yet
./clara-wf test 1db65f99-c9b7-4329-ab9c-d519e0557638    # organ-seg
./clara-wf test fd3ee8bf-b9f3-4808-bd60-243f870ff9bd    # liver-seg, not verified yet
./clara-wf test a78b1e7a-0325-4d42-b90f-207eb576b14f    # ct-partial
./clara-wf test 1995f10e-ee14-4d67-b307-452051637dbb    # dicom-reader-only
./clara-wf test 8371285c-8ea8-41cd-958e-44e5664c6ec3    # dicom-reader-writer
./clara-wf test 613e3fce-0d96-495a-a760-f43e68deefd8    # recon-only
./clara-wf test 21dd9b92-f307-4e68-ba31-fe8b1449ceea    # recon-ai
./clara-wf test 50d31ed4-eeac-47e6-b584-c562f22a0091    # ai-only
./clara-wf test eb66f705-13ea-402c-8baa-143fb3fd9cd5    # dicom-writer-only

The first time you execute a pipeline, you need an nvcr.io API key. Obtain one at the following website: https://ngc.nvidia.com/configuration/api-key
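As the session output below shows, clara-wf stores the key as a Kubernetes secret named nvcr.io. A roughly equivalent manual command, assuming the secret is a standard docker-registry secret and the usual NGC convention of $oauthtoken as the registry user name, would be:

# Assumption: standard docker-registry secret with the NGC '$oauthtoken' user
kubectl create secret docker-registry nvcr.io \
    --docker-server=nvcr.io \
    --docker-username='$oauthtoken' \
    --docker-password="<your-ngc-api-key>"    # placeholder; use your real key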

Set the NGC_API_KEY environment variable if you want to provide the API key without user interaction.
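A minimal sketch (the key value is a placeholder):

# Provide the key non-interactively before the first run
export NGC_API_KEY="<your-ngc-api-key>"
./clara-wf test

If NGC_API_KEY is not set, clara-wf prompts for the key interactively, as in the session below: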


gbae@gigony:/ssd/gitlab/clara/devenv/sdk/deploy/clara-reference-workflow$ ./clara-wf test
Error from server (NotFound): secrets "nvcr.io" not found
Get NGC API key from https://ngc.nvidia.com/configuration/api-key
Then, enter the NGC API key: secret/nvcr.io created
TRTIS is not deployed. Installing TRTIS...
NAME:   trtis
LAST DEPLOYED: Tue Feb 19 10:45:54 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME   TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)                     AGE
trtis  ClusterIP  10.99.58.169  <none>       8000/TCP,8001/TCP,8002/TCP  12s

==> v1/Deployment
NAME   DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
trtis  1        1        1           1          12s

==> v1/Pod(related)
NAME                    READY  STATUS   RESTARTS  AGE
trtis-5586fb977f-b5swh  1/1    Running  0         12s

NOTES:

  1. Get the application URL with the following commands:


export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=trtis,app.kubernetes.io/instance=trtis" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:8000
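To confirm that TRTIS is reachable after port-forwarding, you can query the server status endpoint through the forwarded port (assuming the TRTIS v1 HTTP API):

# Run the port-forward command above first; /api/status is the TRTIS v1 status endpoint
curl http://127.0.0.1:8080/api/status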

The remaining clara-wf test output is similar to the following:


TRTIS Pod Name : trtis-5586fb977f-b5swh
NVIDIA_CLARA_TRTISURI : 10.99.58.169:8000
Sending build context to Docker daemon  446.2MB
Step 1/20 : FROM ubuntu:16.04
 ---> 7e87e2b3bf7a
Step 2/20 : ENV PYVER=3.5
 ---> Using cache
 ---> 9450c12cd9d9
Step 3/20 : ENV LC_ALL=C.UTF-8
 ---> Using cache
 ---> 544d9f54d099
Step 4/20 : ENV LANG=C.UTF-8
 ---> Using cache
 ---> 0fba046f8f9c
Step 5/20 : ENV USER=root
 ---> Using cache
 ---> e2bc9b37a014
Step 6/20 : ENV HOME=/root
 ---> Using cache
 ---> ef5a9e22dfa1
Step 7/20 : RUN apt-get update && apt-get install -y --no-install-recommends python3.5 curl libcurl3 rsync ca-certificates && curl -O https://bootstrap.pypa.io/get-pip.py && rm -rf /var/lib/apt/lists/*
 ---> Using cache
 ---> e63e95fd3adf
Step 8/20 : RUN rm -f /usr/bin/python && rm -f /usr/bin/python`echo $PYVER | cut -c1-1` && ln -s /usr/bin/python$PYVER /usr/bin/python && ln -s /usr/bin/python$PYVER /usr/bin/python`echo $PYVER | cut -c1-1`
 ---> Using cache
 ---> 4b43cbdb82a1
Step 9/20 : RUN python$PYVER get-pip.py && rm get-pip.py
 ---> Using cache
 ---> 81d740a157b0
Step 10/20 : WORKDIR /app
 ---> Using cache
 ---> fde61fa86219
Step 11/20 : RUN mkdir -p input output
 ---> Using cache
 ---> 92dd23ee1f7a
Step 12/20 : COPY ./sdk_dist/*.whl ./sdk_dist/
 ---> Using cache
 ---> af363d6802b2
Step 13/20 : COPY ./trtis_client/*.whl ./trtis_client/
 ---> Using cache
 ---> fce8222e35d2
Step 14/20 : COPY ./app_vnet ./app_vnet
 ---> Using cache
 ---> a022c4ffd9f5
Step 15/20 : COPY ./Pipfile ./
 ---> Using cache
 ---> 198e66a82e3c
Step 16/20 : RUN pip install --upgrade setuptools pipenv
 ---> Using cache
 ---> 56e71ad5dec4
Step 17/20 : RUN pipenv install --python 3.5 ./trtis_client/*.whl ./sdk_dist/*.whl --skip-lock && pipenv run pip install -r ./app_vnet/requirements.txt
 ---> Using cache
 ---> 4c327317cc6f
Step 18/20 : RUN rm -rf $(pipenv --venv)/lib/python3.5/site-packages/future/backports/test/*.pem
 ---> Using cache
 ---> 9bb79383e05e
Step 19/20 : RUN chmod -R 777 /root
 ---> Using cache
 ---> c665700a8b38
Step 20/20 : ENTRYPOINT ["pipenv", "run", "python", "app_vnet/main.py"]
 ---> Using cache
 ---> a52298d0027d
Successfully built a52298d0027d
Successfully tagged ai-vnet:latest
NAME:   test-wf
LAST DEPLOYED: Mon Feb 25 00:34:58 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolumeClaim
NAME                                                                 STATUS  VOLUME                                                         CAPACITY  ACCESS MODES  STORAGECLASS                                            AGE
pv-clara-payload-volume-claim-00000000-0000-0000-0000-000000000000   Bound   pv-clara-payload-volume-00000000-0000-0000-0000-000000000000  20Gi      RWO           pv-clara-payload-00000000-0000-0000-0000-000000000000   2s

==> v1/Job
NAME                    COMPLETIONS  DURATION  AGE
test-wf-clara-workflow  0/1          2s        2s

==> v1/Pod(related)
NAME                          READY  STATUS             RESTARTS  AGE
test-wf-clara-workflow-z5tds  0/1    ContainerCreating  0         2s

==> v1/PersistentVolume
NAME                                                           CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS  CLAIM                                                                        STORAGECLASS                                            REASON  AGE
pv-clara-payload-volume-00000000-0000-0000-0000-000000000000   20Gi      RWO           Retain          Bound   default/pv-clara-payload-volume-claim-00000000-0000-0000-0000-000000000000  pv-clara-payload-00000000-0000-0000-0000-000000000000          2s

NOTES:
Job 'Test Job' (00000000-0000-0000-0000-000000000000) executed!

release "test-wf" deleted
release "trtis" deleted

ls -al /ssd/gitlab/clara/devenv/sdk/deploy/clara-reference-workflow/test-data/00000000-0000-0000-0000-000000000000/ai-vnet/output
total 37644
drwxrwxr-x 2 root root     4096 Feb 25 00:35 .
drwxrwxr-x 4 gbae gbae     4096 Feb 25 00:35 ..
-rw-r--r-- 1 root root      324 Feb 25 00:35 recon.seg.mhd
-rw-r--r-- 1 root root 38535168 Feb 25 00:35 recon.seg.raw

The Clara reference pipeline has the following folder structure:


.
├── charts
│   ├── clara-workflow
│   │   ├── charts
│   │   ├── Chart.yaml
│   │   ├── _stage_values.yaml
│   │   ├── templates
│   │   │   ├── deployment.yaml
│   │   │   ├── _helpers.tpl
│   │   │   ├── NOTES.txt
│   │   │   ├── stages
│   │   │   │   ├── ai-vnet
│   │   │   │   │   ├── container.yaml
│   │   │   │   │   └── volume.yaml
│   │   │   │   └── template
│   │   │   │       ├── _container.yaml
│   │   │   │       └── _volume.yaml
│   │   │   ├── tests
│   │   │   │   └── test-connection.yaml
│   │   │   ├── volume-claim.yaml
│   │   │   └── volume.yaml
│   │   └── values.yaml
│   └── trtis
│       ├── charts
│       ├── Chart.yaml
│       ├── templates
│       │   ├── deployment.yaml
│       │   ├── _helpers.tpl
│       │   ├── NOTES.txt
│       │   ├── service.yaml
│       │   └── tests
│       │       └── test-connection.yaml
│       └── values.yaml
├── clara-wf
├── README.md
└── test-data
    └── 00000000-0000-0000-0000-000000000000
        └── ai-vnet
            └── input
                ├── recon.mhd
                └── recon.raw

Clara includes the clara-wf CLI tool to manage Helm charts for stages. Running clara-wf without arguments returns the following help information:


$ ./clara-wf

./clara-wf [command] [arguments...]

Command List
    list_stages | list | ls                      : List available stages
    create_stage | create | cs [stage name]      : Create stage
    delete_stage | delete | ds [stage name]      : Delete stage
    test_workflow | test [workflow id] [job id]  : Test workflow
                                                   default args: ai-only 00000000-0000-0000-0000-000000000000
    install_trtis                                : Deploy TRTIS service
    uninstall_trtis                              : Delete TRTIS service
    set_ngc_api_key | setkey                     : Reset NGC API key

Currently, only the ai-vnet stage is available, with the 'ai-only' pipeline as the default.

Add the dicom-reader stage with the following commands (and expected output):


$ ./clara-wf ls
ai-vnet
$ ./clara-wf cs dicom-reader
Stage 'dicom-reader' is created at: /ssd/gitlab/clara/devenv/sdk/deploy/clara-reference-workflow/charts/clara-workflow/templates/stages/dicom-reader
Default values for Stage 'dicom-reader' are added to: /ssd/gitlab/clara/devenv/sdk/deploy/clara-reference-workflow/charts/clara-workflow/values.yaml

Verify the new stage with the following commands (and expected output):


$ ./clara-wf ls
ai-vnet
dicom-reader
$ ls charts/clara-workflow/templates/stages/dicom-reader
container.yaml  volume.yaml

When ./clara-wf cs dicom-reader is executed, the charts/clara-workflow/templates/stages/template directory is copied to charts/clara-workflow/templates/stages/dicom-reader, and the prefix _ is removed from file names.
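Conceptually, the create_stage command amounts to something like the following (a simplified sketch, not the actual clara-wf implementation; the real command also appends default stage values to values.yaml, as the output above shows):

# Sketch of './clara-wf cs dicom-reader': copy the stage template and
# strip the leading '_' from each copied file name
stage_name="dicom-reader"
src="charts/clara-workflow/templates/stages/template"
dst="charts/clara-workflow/templates/stages/${stage_name}"

cp -r "${src}" "${dst}"
for f in "${dst}"/_*; do
    mv "${f}" "${dst}/$(basename "${f}" | cut -c2-)"
done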

container.yaml

The file container.yaml in charts/clara-workflow/templates/stages/dicom-reader specifies the dicom-reader Clara container (stage).

Most settings can be used as they are, but modify this file to make container-specific settings if needed (see the container v1 core specification for more information).


{{- define "stages.dicom-reader.container" }}
{{- $stageName := "dicom-reader" }}
{{- $stageIndex := 0 }}
{{- $activeStages := (index .workflows .workflow.id) }}
{{- range $idx, $elem := $activeStages.stages }}
{{- if (eq $elem $stageName) }}{{- $stageIndex = $idx }}{{- end }}
{{- end }}
{{- $args := (index $activeStages.args $stageIndex) }}
{{- $waitName := (index $activeStages.waitLocks $stageIndex) }}
{{- $stage := (index .stages $stageName) -}}
image: "{{ $stage.image.repository }}:{{ $stage.image.tag }}"
imagePullPolicy: IfNotPresent
{{- if $stage.command }}
command: {{ $stage.command }}
{{- end }}
{{- if $args }}
args: {{ $args }}
{{- end }}
env:
- name: NVIDIA_CLARA_STAGENAME
  value: "{{ $stageName }}"
- name: NVIDIA_CLARA_RUNSIMPLE
  value: "{{ .runSimple }}"
- name: NVIDIA_CLARA_TRTISURI
  value: "{{ .trtisUri }}"
- name: NVIDIA_CLARA_APPDIR
  value: "{{ $stage.appDir }}"
- name: NVIDIA_CLARA_INPUTLOCK
  value: "{{ $stage.inputLock }}"
- name: NVIDIA_CLARA_INPUTS
  value: "{{ $stage.inputs }}"
- name: NVIDIA_CLARA_JOBID
  value: "{{ .job.id }}"
- name: NVIDIA_CLARA_JOBNAME
  value: "{{ .job.name }}"
- name: NVIDIA_CLARA_LOGNAME
  value: "{{ $stage.logName }}"
- name: NVIDIA_CLARA_LOCKDIR
  value: "{{ $stage.lockDir }}"
- name: NVIDIA_CLARA_LOCKNAME
  value: "{{ $stage.lockName }}"
- name: NVIDIA_CLARA_OUTPUTS
  value: "{{ $stage.outputs }}"
- name: NVIDIA_CLARA_TIMEOUT
  value: "{{ $stage.timeout }}"
- name: NVIDIA_CLARA_WAITNAME
  value: "{{ $waitName }}"
{{- end }}

volume.yaml

The file volume.yaml specifies volume mounts that are specific to the stage.

See the Kubernetes Volumes documentation for more information.


{{- define "stages.dicom-reader.volume" }}
{{- $stageName := "dicom-reader" }}
{{- $stageIndex := 0 }}
{{- $activeStages := (index .workflows .workflow.id) }}
{{- range $idx, $elem := $activeStages.stages }}
{{- if (eq $elem $stageName) }}{{- $stageIndex = $idx }}{{- end }}
{{- end }}
{{- $activeStagesArr := (index .workflows .workflow.id).stages }}
{{- $activeStagesLen := (len $activeStagesArr) }}
{{- $firstStageName := (index $activeStagesArr 0) }}
{{- $lastStageName := (index $activeStagesArr (add $activeStagesLen -1)) }}
{{- $stage := (index .stages $stageName) -}}
# Input folder
- name: clara-payload-volume
  mountPath: "{{ $stage.appDir }}/{{ $stage.mount.in.name }}"
{{- if (and .dicomAdapter.input (eq $firstStageName $stageName)) }}
  subPath: "{{ .dicomAdapter.input }}"
{{- else }}
  subPath: "{{ index $activeStages.ioFolders $stageIndex }}"
{{- end }}
# Output folder
- name: clara-payload-volume
  mountPath: "{{ $stage.appDir }}/{{ $stage.mount.out.name }}"
{{- if (and .dicomAdapter.output (eq $lastStageName $stageName)) }}
  subPath: "{{ .dicomAdapter.output }}"
{{- else }}
  subPath: "{{ index $activeStages.ioFolders (add $stageIndex 1) }}"
{{- end }}
# Locks folder (shared by containers)
- name: clara-payload-volume
  mountPath: "{{ $stage.appDir }}/locks"
  subPath: "locks"
# Payload folder (shared by containers, for direct access to the payloads of other containers)
- name: clara-payload-volume
  mountPath: "payloads"
{{- end }}

values.yaml

The files container.yaml and volume.yaml use Helm templating to fill in the necessary values from values.yaml in the charts/clara-workflow/ folder.

Concrete pipeline and stage values in values.yaml can be used in container.yaml and volume.yaml through the $stage object (e.g., mountPath: "{{ $stage.appDir }}/{{ $stage.mount.out.name }}").
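For example, with the dicom-reader stage values shown below (appDir: "/app", mount.out.name: "output"), that template line renders as:

mountPath: "{{ $stage.appDir }}/{{ $stage.mount.out.name }}"    # template line in volume.yaml
mountPath: "/app/output"                                        # rendered result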


# Pipeline parameters
## Value used here should be one of the keys in the 'workflows' object
workflow:
  id: "ai-only"        # Need to specify manually/programmatically
  payloads:
    hostPath: ""       # Need to specify manually/programmatically
    storage: 20Gi

# Job parameters
job:
  id: "00000000-0000-0000-0000-000000000000"  # Need to specify manually/programmatically
  name: "Test Job"                            # Need to specify manually/programmatically

# DICOM Adapter parameters
## Set both values (input & output) to the empty string if you don't want to use DICOM Adapter
dicomAdapter:
  input: "dicom-server"  # First stage's input host path would be "/clara-io/clara-core/payloads/{{job.id}}/dicom-server"
  output: "output"       # Last stage's output host path would be "/clara-io/clara-core/payloads/{{job.id}}/output"

# Miscellaneous parameters
runSimple: "FALSE"          # Need to specify manually/programmatically
trtisUri: "localhost:8000"  # Need to specify manually/programmatically

# Pipeline definitions
## - Keys that work with DICOM Adapter should match the pipeline IDs used in DICOM Adapter
## - An item of the 'args' array should be an array of strings if the arguments exist (if not, specify the empty string)
workflows:
  ct:
    stages: ["dicom-reader", "recon", "ai-vnet", "dicom-writer"]
    waitLocks: ["", "dicom-reader.lock", "recon.lock", "ai-vnet.lock"]
    args: ["", "", "", ""]
  ai:
    stages: ["dicom-reader", "ai-livertumor", "dicom-writer"]
    waitLocks: ["", "dicom-reader.lock", "ai-livertumor.lock"]
    args: ["", "", ""]
  dicom-reader-only:
    stages: ["dicom-reader"]
    waitLocks: [""]
    args: [""]
  ai-only:
    stages: ["ai-vnet"]
    waitLocks: [""]
    args: [""]
  recon-only:
    stages: ["recon"]
    waitLocks: [""]
    args: [""]
  dicom-writer-only:
    stages: ["dicom-writer"]
    waitLocks: [""]
    args: [""]

# Stage definitions
stages:
  ...
  ##BEGIN_dicom-reader##
  dicom-reader:
    image:
      repository: dicom-reader
      tag: latest
    mount:
      in:
        name: "input"
      out:
        name: "output"
    stageName: "dicom-reader"
    appDir: "/app"
    inputLock: "/app/locks/input.lock"
    inputs: "input/input.mhd"
    logName: "/app/logs/dicom-reader.log"
    outputs: "output/output.mhd"
    lockDir: "/app/locks"
    lockName: "dicom-reader.lock"
    timeout: "300"
  ##END_dicom-reader##
  ...

The default stage values shown above come from charts/clara-workflow/_stage_values.yaml. The default workflow/stage values can be overridden when the chart is installed by Helm.

An excerpt from the clara-wf script (the test_workflow function) shows how the values are overridden:


local workflow_id="ai-only"
local job_id="00000000-0000-0000-0000-000000000000"

[ -n "$1" ] && workflow_id="$1"
[ -n "$2" ] && job_id="$2"

# Build Clara container images locally (need to update whenever new stages are added)
## ai-vnet
docker build -t ai-vnet -f ${TOP}/Applications/Operators/ai/app_vnet/Dockerfile ${TOP}/Applications/Operators/ai

# Deploy the Helm chart for the reference pipeline (the 'ai-only' pipeline is used below)
## - Do not set 'runSimple="TRUE"' when interacting with Clara Core
## - Set 'workflow.id' to other values (such as 'ct') to execute a different workflow
## - Set 'workflow.payloads.hostPath' to the host payloads path ({{workflow.payloads.hostPath}}/{{job.id}} would be the payload folder for a job)
## - Set 'job.id' in UUID form (otherwise, execution will fail)
## - Set 'dicomAdapter.input' to the empty string if the first stage's input is not from DICOM Adapter output
## - Set 'dicomAdapter.output' to the empty string if you don't want to push the last stage's output to DICOM Adapter
## - Use the '--dry-run --debug' options for debugging
## - Refer to https://helm.sh/docs/using_helm/#the-format-and-limitations-of-set for the `--set` and `--set-string` options
helm install --name test-wf --namespace "${CLARA_NAMESPACE}" \
    --set-string runSimple="TRUE" \
    --set-string workflow.id="${workflow_id}" \
    --set-string workflow.payloads.hostPath="${SCRIPT_DIR}/test-data" \
    --set-string job.id="${job_id}" \
    --set-string job.name="Test Job" \
    --set-string trtisUri="${NVIDIA_CLARA_TRTISURI}" \
    --set-string dicomAdapter.input="" \
    --set-string dicomAdapter.output="" \
    --wait \
    ${SCRIPT_DIR}/charts/clara-workflow

As the command above shows, variables such as workflow.payloads.hostPath must be specified manually or programmatically so that they carry the correct information available at runtime.

Once correct values are specified in container.yaml and volume.yaml for dicom-reader, modify the script above to also build the container image for dicom-reader (as sketched below) and prepare test data.
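For example, a build step mirroring the ai-vnet one above might look like this (the dicom-reader source path and Dockerfile location are hypothetical):

## dicom-reader (hypothetical path, following the ai-vnet pattern above)
docker build -t dicom-reader \
    -f ${TOP}/Applications/Operators/dicom-reader/Dockerfile \
    ${TOP}/Applications/Operators/dicom-reader

The test-data folder then needs an input payload for the new stage, for example: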


gbae@gigony:/ssd/gitlab/clara/devenv/sdk/deploy/clara-reference-workflow/test-data$ tree
.
├── 00000000-0000-0000-0000-000000000000
│   └── ai-vnet
│       └── input
│           ├── recon.mhd
│           └── recon.raw
└── 11111111-1111-1111-1111-111111111111
    └── dicom-reader
        └── input
            └── patient-id
                └── study-uid
                    └── series-uid
                        └── SOP-uid.dcm

Finally, execute ./clara-wf test "dicom-reader-only" "11111111-1111-1111-1111-111111111111" to test the new stage.

The procedures above add the dicom-reader stage to the reference workflow. Once all stages needed for the CT or AI pipeline are added and each stage is tested with the clara-wf test command, Clara Core is able to use the pipeline Helm chart, and the helm install command can be executed programmatically.

© Copyright 2018-2019, NVIDIA Corporation. All rights reserved. Last updated on Feb 1, 2023.