NVIDIA Clara is an open, scalable computing platform that enables development of medical imaging applications for hybrid (embedded, on-premise, or cloud) computing environments to create intelligent instruments and automated healthcare pipelines.
The Clara Deploy SDK provides a platform to deploy medical imaging pipelines for CT, MRI, and ultrasound data. These pipelines leverage Docker-based containers and Kubernetes to virtualize medical image pipelines by connecting to PACS, and to scale medical instrument applications for any instrument.
This documentation provides information on getting started with the Clara Deploy SDK. It describes an example application, pipelines generation, Clara containers, and also provides release note information. Developers who are deploying or developing on Clara should have at least a basic understanding of the related technologies, including Docker, Kubernetes, Helm and TensorRT Inference Server (TRTIS).
The Clara Deploy SDK is a collection of containers that work together to provide end-to-end medical image processing pipelines. The overall ecosystem can run on different cloud providers or on local hardware with GPUs based on the Pascal architecture or newer. The Clara Deploy SDK can be broken up into the Core Platform, Core Services, Integrations, and Applications, as seen in the diagram below:
The Clara Deploy SDK is run via a collection of components that are deployed via Helm charts as pictured in the diagram below:
1.2.1. Clara Core Platform¶
The Clara Core component runs as the central part of the Clara Deploy SDK and controls all aspects of Clara payloads, pipelines, jobs, and results. It performs the following tasks:
Accepts and executes pipelines as jobs.
Deploys pipeline containers when needed via Helm, and removes them when those resources are needed elsewhere.
Is the source of all system state truth.
1.2.2. DICOM Adapter¶
The DICOM Adapter is the integration point between the hospital PACS and the Clara Deploy SDK. In a typical Clara Deploy SDK deployment, it is the first data interface to Clara. The DICOM Adapter receives DICOM data, puts the data in a payload, and triggers a pipeline. When the pipeline has produced a result, the DICOM Adapter moves the result to a PACS.
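To illustrate the shape of this integration, the sketch below shows how a DICOM receiver might be configured to map an incoming Application Entity (AE) title to a pipeline. This is a hypothetical illustration only: the field names, AE titles, and port shown here are assumptions for this example, not the actual DICOM Adapter configuration schema.

```yaml
# Hypothetical sketch of a DICOM receiver configuration.
# All keys and values below are illustrative assumptions,
# not the real DICOM Adapter schema.
dicom-listener:
  port: 104                 # standard DICOM port (assumed default)
  ae-titles:
    - ae-title: CLARA-CT    # AE title the hospital PACS sends to (example value)
      pipeline: ct-example-pipeline   # pipeline triggered on receipt (example name)
destinations:
  - name: hospital-pacs     # where results are sent back (example value)
    host: pacs.example.org
    port: 104
```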
1.2.3. Results Service¶
A Clara service that tracks all results generated by all pipelines. It bridges results between the pipelines and the services that deliver those results to external devices.
1.2.4. Clara Pipeline¶
A Clara pipeline is a collection of containers that are configured to work together to execute a medical image processing task. Clara publishes an API that enables any container to be added to a pipeline in the Clara Deploy SDK. These containers are Docker containers based on nvidia-docker, with applications enhanced to support the Clara Container Pipeline Driver.
1.2.5. TensorRT Inference Server¶
The TensorRT Inference Server (TRTIS) is an inferencing solution optimized for NVIDIA GPUs. It provides an inference service via an HTTP or gRPC endpoint.
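Because the inference service is exposed over HTTP, a client can check its availability before submitting requests. The sketch below assumes a server reachable on a default local host and port with an `/api/status` endpoint; the host, port, and endpoint path are assumptions for illustration, not values stated in this document.

```python
# Minimal sketch: probing an inference server's HTTP status endpoint.
# Host, port, and the "/api/status" path are assumed example values.
from urllib.request import urlopen


def server_status_url(host: str = "localhost", port: int = 8000) -> str:
    """Build the status URL for the server's HTTP endpoint."""
    return f"http://{host}:{port}/api/status"


def is_server_live(host: str = "localhost", port: int = 8000) -> bool:
    """Return True if the server answers its status endpoint with HTTP 200."""
    try:
        with urlopen(server_status_url(host, port), timeout=2) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, unreachable host, or timeout: treat as not live.
        return False
```

A client would typically gate inference requests on such a liveness check, retrying with backoff while the server container starts up.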
1.2.6. Render Server¶
Render Server provides visualization of medical data.
1.2.7. Clara I/O Model¶
The Clara I/O model is designed to follow standards that can be executed and scaled by Kubernetes. Payloads and results are separated so that payloads are preserved for restarting jobs that fail or are interrupted by higher-priority work, and so that inputs can be reused in multiple pipelines.
Payloads (inputs) are presented as read-only volumes.
Jobs are the execution of a pipeline on a payload.
Results (outputs, scratch space) are read-write volumes, paired with the jobs that created them in one-to-one relationships.
Datasets (reusable inputs) are read-only volumes.
Ports allow for application I/O.
Pipeline configurations are Helm charts.
Runtime container configuration is passed via a YAML file.
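Putting the points above together, a pipeline definition names its stages and wires each stage's read-only inputs to read-write outputs. The YAML below is a hypothetical sketch of what such a runtime configuration might look like; the key names, operator names, and image names are illustrative assumptions, not the actual Clara pipeline schema.

```yaml
# Hypothetical pipeline configuration sketch.
# Key names, operator names, and images are illustrative assumptions.
name: ct-example-pipeline
operators:
  - name: dicom-reader            # example stage: converts DICOM input
    image: example/dicom-reader   # example image name
    input:
      - path: /input              # payload mounted read-only
    output:
      - path: /converted          # result volume, read-write
  - name: inference
    image: example/ct-inference
    input:
      - from: dicom-reader        # consumes the previous stage's output
        path: /converted
    output:
      - path: /output
```

Each stage's output volume is paired one-to-one with the job that created it, while the original payload remains read-only and reusable by other pipelines.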