Clara Train SDK is a domain-optimized developer application framework that includes APIs for AI-Assisted Annotation, making any medical viewer AI-capable. Version 4.0 adds a MONAI-based training framework with pre-trained models so you can start AI development with techniques such as Transfer Learning, Federated Learning, and AutoML.
Because it uses the open-source MONAI framework, Clara Train is now PyTorch-based, whereas versions of Clara Train before v4.0 were based on TensorFlow. The Medical Model Archive (MMAR) concept for organizing model artifacts and Bring your own components (BYOC) both carry over, and components from MONAI are directly usable in Clara Train v4.0 MMARs.
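To illustrate how MONAI components plug into an MMAR, here is a sketch of what a transform chain in a training configuration might look like. The exact file layout and key names vary between MMARs, and the specific transforms and arguments shown are illustrative only:

```json
{
  "pre_transforms": [
    {
      "name": "LoadImaged",
      "args": {"keys": ["image", "label"]}
    },
    {
      "name": "ScaleIntensityd",
      "args": {"keys": ["image"]}
    }
  ]
}
```

Each entry names a component (here, MONAI dictionary transforms) and the arguments passed to its constructor, so swapping in a different MONAI or custom transform is a config change rather than a code change.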
What’s new
The Clara Train 4.0 release is based on NVIDIA’s container for PyTorch release 21.02 with support for NVIDIA Ampere GPUs. Here is a list of changes and additions in this version as well as links to key features:
The back end has been replaced with MONAI and uses the PyTorch Ignite training loop. Please see Converting from Clara 3.1 to Clara 4.0 for details on converting artifacts from previous versions of Clara Train.
AutoML has been updated to work with the latest version of Clara Train. Federated learning still offers easy server and client deployment through an administration client as before, but on the back end it has been split into a separate project so that it can also be used by applications other than Clara.
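To make the federated learning idea concrete, here is a minimal, self-contained sketch of federated averaging (FedAvg), the aggregation step at the heart of such systems. This is a conceptual illustration only, not the Clara Train or standalone federated learning project's API; all names are hypothetical:

```python
# Conceptual sketch of federated averaging (FedAvg): the server averages
# per-layer weights reported by several clients. Illustration only; not
# the Clara Train federated learning API.

def federated_average(client_weights):
    """Average per-layer weights from several clients.

    client_weights: list of dicts mapping layer name -> list of floats.
    Returns a dict with the element-wise mean for each layer.
    """
    n = len(client_weights)
    averaged = {}
    for layer in client_weights[0]:
        values = [w[layer] for w in client_weights]
        averaged[layer] = [sum(col) / n for col in zip(*values)]
    return averaged

# Two clients report weights for one layer; the server averages them.
clients = [
    {"fc.weight": [0.0, 2.0]},
    {"fc.weight": [2.0, 4.0]},
]
print(federated_average(clients))  # {'fc.weight': [1.0, 3.0]}
```

The key property this sketch captures is that only weights (not training data) leave each client, which is why the approach suits privacy-sensitive medical imaging workflows.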
For Jupyter Notebooks with detailed examples, see Notebooks for Clara Train SDK.
You can use MMARs to set up training configurations with JSON.
For greater customization, you can Bring your own components (BYOC) in addition to all of the open-source components already available in MONAI and PyTorch.
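As a sketch of what a BYOC component might look like, here is a small callable class following MONAI's dictionary-transform convention (`__call__` takes and returns a dict keyed by field name). The class name, arguments, and the import path shown in the comment are illustrative assumptions, not part of the SDK:

```python
# Minimal "bring your own component" sketch: a custom dictionary-style
# transform in the spirit of MONAI's `keys`-based transforms.
# All names here are hypothetical examples.

class ClipIntensityd:
    """Clip the values stored under each of `keys` to [minv, maxv]."""

    def __init__(self, keys, minv=0.0, maxv=1.0):
        self.keys = keys
        self.minv = minv
        self.maxv = maxv

    def __call__(self, data):
        data = dict(data)  # avoid mutating the caller's dict
        for key in self.keys:
            data[key] = [min(max(v, self.minv), self.maxv) for v in data[key]]
        return data

# A hypothetical MMAR config could then reference this class by its
# import path, e.g. {"path": "my_transforms.ClipIntensityd", "args": ...}.
sample = {"image": [-0.5, 0.3, 2.0]}
print(ClipIntensityd(keys=["image"])(sample))  # {'image': [0.0, 0.3, 1.0]}
```

Because the component is resolved from configuration by its import path, it can be dropped into a pipeline alongside built-in MONAI transforms without modifying the training code.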
For AIAA:
AIAA has been updated to use the Triton API.
DeepGrow is introduced in AIAA to help with the cold-start problem for any organ or object of interest.