Clara Train SDK v4.1 is a domain-optimized developer application framework based on MONAI, the Medical Open Network for AI, that helps accelerate deep learning training and inference for medical imaging use cases. It includes APIs for AI-Assisted Annotation, which make any medical viewer AI-capable, and a training framework with pre-trained models that enables AI development with techniques such as Transfer Learning, Federated Learning, and AutoML.
Clara Train is built on top of the open-source MONAI framework, with MMARs (Medical Model ARchives) organizing model artifacts and Bring your own components (BYOC) allowing components from MONAI to be used directly in Clara Train v4.1 MMARs.
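Because Clara Train builds on MONAI, a BYOC component is typically just a MONAI or PyTorch class. The minimal Python sketch below (not an official MMAR configuration) composes the same kinds of MONAI transforms, network, and loss that an MMAR training config would reference by class name; the dictionary keys and network hyperparameters are illustrative only.

```python
# Minimal sketch: composing MONAI components directly in Python, the same
# classes an MMAR's training configuration would reference by name.
# Keys and hyperparameters below are illustrative, not from a shipped MMAR.
from monai.transforms import (
    Compose, LoadImaged, EnsureChannelFirstd, ScaleIntensityd, EnsureTyped,
)
from monai.networks.nets import UNet
from monai.losses import DiceLoss

# Pre-transforms applied to each {"image": ..., "label": ...} sample.
pre_transforms = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    ScaleIntensityd(keys=["image"]),
    EnsureTyped(keys=["image", "label"]),
])

# Network and loss, instantiated just as a train config would declare them.
model = UNet(
    spatial_dims=3, in_channels=1, out_channels=2,
    channels=(16, 32, 64, 128), strides=(2, 2, 2),
)
loss = DiceLoss(to_onehot_y=True, softmax=True)
```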
If you are writing a paper and would like to reference Clara Train, here is an example citation:
NVIDIA Clara Imaging. https://developer.nvidia.com/clara-medical-imaging (2022).
- Clara Training Framework
- Pretrained Models
- AI-Assisted Annotation
- NVIDIA FLARE for Federated Learning
What’s new
The Clara Train 4.1 release is based on the NVIDIA PyTorch container release 21.10, with support for NVIDIA Ampere GPUs. Here is a list of the changes and additions in this version:
- The back end has been updated to MONAI v0.8, and the configurations have been made more flexible to support future changes in MONAI. See Upgrading from previous versions of Clara Train for details on converting artifacts from earlier versions of Clara Train.
- The MMAR API allows MMARs to be created from Python code.
- Bring Your Own Workflow (BYOW) expands Bring your own components (BYOC) to allow custom trainers, so you can implement your own training workflow.
- Federated learning is implemented with the open-source project NVIDIA FLARE, now updated to version 2.0, with flexible configurations that support custom workflows and bring distributed computing to applications outside Clara as well (see the sketch below).
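Below is a minimal sketch of a custom client-side task handler, assuming the NVIDIA FLARE 2.x `Executor` interface; the task name and payload key are illustrative, and a real federated trainer would run local training inside `execute`.

```python
# Minimal sketch of a custom NVIDIA FLARE 2.x client executor.
# The "hello" task name and "answer" payload key are illustrative only.
from nvflare.apis.executor import Executor
from nvflare.apis.fl_constant import ReturnCode
from nvflare.apis.fl_context import FLContext
from nvflare.apis.shareable import Shareable, make_reply
from nvflare.apis.signal import Signal


class HelloExecutor(Executor):
    def execute(self, task_name: str, shareable: Shareable,
                fl_ctx: FLContext, abort_signal: Signal) -> Shareable:
        # Only handle the task this executor is registered for.
        if task_name != "hello":
            return make_reply(ReturnCode.TASK_UNKNOWN)
        # A real trainer would run a round of local training here,
        # then return updated weights and metrics to the server.
        result = Shareable()
        result["answer"] = "hello from this client"
        return result
```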
Highlights
Links to some key features in Clara Train:
An example digital pathology MMAR is now available, including optimized data loading with cuCIM, which tiles large datasets on demand and processes them through a CUDA-enabled pipeline. It includes a trained fully convolutional classification network that works with whole-slide images. Combined with other features in Clara, this can deliver up to a 10x training speedup compared to other pathology pipelines.
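As a rough illustration of the on-demand tiling idea, the sketch below reads a single tile from a whole-slide image with cuCIM; the file path, tile location, size, and level are placeholder values.

```python
# Minimal sketch: reading one tile of a whole-slide image on demand with cuCIM.
# The file path, location, size, and level are placeholders.
import numpy as np
from cucim import CuImage

slide = CuImage("example_slide.tif")      # placeholder WSI path
print(slide.resolutions)                  # available pyramid levels and sizes

# Fetch a single 256x256 tile at level 0 instead of loading the whole slide.
tile = slide.read_region(location=(10000, 10000), size=(256, 256), level=0)
tile = np.asarray(tile)                   # H x W x C uint8 array
print(tile.shape, tile.dtype)
```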
For Jupyter Notebooks with detailed examples, see Notebooks for Clara Train SDK.
For greater customization, you can Bring your own components (BYOC) in addition to all of the open-source components already available in MONAI and PyTorch.
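For example, a custom dictionary-based transform can be dropped into a pipeline next to built-in MONAI transforms; the class name, keys, and window values below are illustrative.

```python
# Minimal BYOC-style sketch: a custom MONAI dictionary transform composed with
# built-in transforms. Class name, keys, and window values are illustrative.
import numpy as np
from monai.transforms import Compose, MapTransform, ScaleIntensityd


class ClipHUd(MapTransform):
    """Clip CT intensities to a fixed Hounsfield-unit window."""

    def __init__(self, keys, minv=-1000.0, maxv=1000.0):
        super().__init__(keys)
        self.minv, self.maxv = minv, maxv

    def __call__(self, data):
        d = dict(data)
        for key in self.keys:
            d[key] = np.clip(d[key], self.minv, self.maxv)
        return d


# The custom transform composes with built-in MONAI transforms like any other.
pipeline = Compose([ClipHUd(keys=["image"]), ScaleIntensityd(keys=["image"])])
sample = {"image": np.random.uniform(-2000, 2000, size=(1, 64, 64)).astype(np.float32)}
out = pipeline(sample)
print(out["image"].min(), out["image"].max())
```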
DeepGrow in AIAA can help with the cold-start problem when building annotation models for new organs or objects of interest.
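The sketch below is a conceptual illustration only (not the AIAA client API) of how DeepGrow-style models consume clicks: foreground and background points are rendered as guidance channels and stacked with the image before inference. Shapes, click coordinates, and the Gaussian sigma are illustrative.

```python
# Conceptual sketch (not the AIAA API): turning user clicks into guidance
# channels that a DeepGrow-style network takes alongside the image.
import numpy as np


def clicks_to_guidance(shape, clicks, sigma=3.0):
    """Render click coordinates as a smooth Gaussian heatmap channel."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    guidance = np.zeros(shape, dtype=np.float32)
    for (y, x) in clicks:
        bump = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))
        guidance = np.maximum(guidance, bump)
    return guidance


image = np.random.rand(128, 128).astype(np.float32)          # placeholder 2D slice
fg = clicks_to_guidance(image.shape, [(60, 64)])              # foreground click(s)
bg = clicks_to_guidance(image.shape, [(10, 10), (120, 120)])  # background click(s)

# Network input: image plus foreground/background guidance channels.
net_input = np.stack([image, fg, bg], axis=0)
print(net_input.shape)  # (3, 128, 128)
```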