Clara provides a training framework that accelerates deep learning training and inference for medical imaging use cases. It allows researchers and developers to implement new models quickly through a high-level, intuitive API. This guide describes installation, essential concepts, and tutorials to help you get started with Clara.
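Clara Train 4.0 is built on MONAI, so its high-level API exposes MONAI components directly (see the API reference entries below). The following is a minimal, illustrative sketch, not an excerpt from a Clara MMAR workflow; it assumes a recent MONAI and PyTorch installation and shows a segmentation network, loss, and optimizer wired into a single training step:

```python
# Minimal sketch (not from the Clara docs), assuming a recent MONAI and
# PyTorch install. It shows the MONAI-style building blocks that Clara
# Train 4.0 exposes: a network, a loss, an optimizer, one training step.
import torch
from monai.losses import DiceLoss
from monai.networks.nets import UNet

# A small 3D U-Net; the channels/strides here are arbitrary example values.
net = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64),
    strides=(2, 2),
)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

# Synthetic stand-ins for a preprocessed image volume and label mask.
image = torch.rand(1, 1, 64, 64, 64)
label = torch.randint(0, 2, (1, 1, 64, 64, 64))

optimizer.zero_grad()
loss = loss_fn(net(image), label)  # forward pass + Dice loss
loss.backward()
optimizer.step()
print(f"one-step loss: {loss.item():.4f}")
```

In practice, Clara Train assembles these components declaratively through an MMAR's JSON configuration rather than hand-written loops; the sketch only illustrates the underlying API surface.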
User Guide
- Overview
- Converting from Clara 3.1 to Clara 4.0
- Essential concepts
- Installation
- Getting started with Clara
- Medical Model Archive (MMAR)
- Bring your own components (BYOC)
- AutoML
- API reference
  - MONAI Applications
  - MONAI Transforms
  - MONAI Loss functions
  - MONAI Network architectures
  - MONAI Metrics
  - MONAI Optimizers
  - MONAI Data
  - MONAI Engines
  - MONAI Inference methods
  - MONAI Event handlers
  - MONAI Visualizations
  - MONAI Utilities
- Clara Train FAQ
  - 1. Why should I use Clara Train?
  - 2. Does the order of handlers matter?
  - 3. Is determinism supported?
  - 4. How can I run the MMAR on a very small part of my dataset to quickly verify it?
  - 5. How can I enable sampling for a classification task to balance the dataset?
  - 6. How can I adjust the LR with the “ReduceLROnPlateau” scheduler based on validation metrics?
  - 7. How can I set different LRs for different network layers?
  - 8. How can I enable TF32 for Ampere GPUs?
  - 9. How can I enable automatic mixed precision (AMP)?
  - 10. How does AMP affect performance?
  - 11. What can I do if AMP doesn’t show any difference in the model memory footprint?
  - 12. How can I save and load checkpoints of the optimizer and lr_scheduler?
  - 13. Can I use the LoadImageD transform to load a DICOM image or series?
  - 14. How can I invert all the spatial transforms on a model prediction and compute metrics or save to NIfTI or PNG files?
  - 15. How can I set “torch.backends.cudnn.benchmark=True” to accelerate training?
  - 16. How can I set “find_unused_parameters=True” for distributed training if the network has return values that are not included in the loss computation?
  - 17. How can I enable SyncBN for multi-GPU training?
  - 18. Can I set several models, several losses, and several optimizers in a single MMAR?
  - 19. Where does CheckpointSaver go for saving a final model?
  - 20. How can I save “config_train.json” into the checkpoint during training and load the model config from the checkpoint during validation?
  - 21. How can I apply “EarlyStop” logic based on the loss value or validation metrics during model training?
  - 22. How can I compute metrics on every class of the model output?
  - 23. How can handlers be configured to execute only on one rank in distributed data parallel?
  - 24. How can I register new events and trigger them in my own custom components?
  - 25. How can I load a model and fine-tune it for transfer learning on a dataset with a different number of classes?
  - 26. How can I save validation metric details into reports?
- Appendix