Overview
Welcome to the Train Medical Imaging Models Using Base Command Lab on NVIDIA LaunchPad!
The goal of this lab is to use the Medical Open Network for AI (MONAI), specifically MONAI Core, to train a deep neural network for 3D brain tumor segmentation. This lab focuses on Task 1 (tumor sub-region segmentation) of the Brain Tumor Segmentation (BraTS) 2021 challenge.
The MONAI framework is the open-source foundation created by Project MONAI: a freely available, community-supported, PyTorch-based framework for labeling, training, deploying, and optimizing AI workflows in healthcare. MONAI Core gives developers and researchers a PyTorch-driven library of domain-optimized capabilities for building medical imaging training workflows. Performance features such as MONAI Core's AutoML, Smart Caching, and GPU-accelerated I/O and transforms take training from days to hours, and from hours to minutes, helping users accelerate AI into clinical production.
This tutorial will cover the following:
Workspace and dataset creation on NGC
BraTS 2021 Dataset Download from the Synapse Platform
ML Visualization and Optimization using wandb.ai, an NVIDIA partner
Running jobs on single and multiple GPUs on Base Command Platform
Maximizing GPU utilization and accelerating the BraTS training and validation process
For more information on MONAI, visit https://monai.io/
See more MONAI tutorials here: https://github.com/Project-MONAI/tutorials
See more on NVIDIA’s work on the 2021 BraTS challenge here: https://developer.nvidia.com/blog/nvidia-data-scientists-take-top-spots-in-miccai-2021-brain-tumor-segmentation-challenge/
Gliomas are the most common malignant tumors of the central nervous system. They are composed of three nested, heterogeneous sub-regions with variable histologic and genomic phenotypes: the enhancing tumor (ET), the tumor core (TC), and the whole tumor (WT). These regions can be delineated from the four aligned input MRI modalities (T1, T1Gd, T2, T2-FLAIR).
Figure taken from the BraTS IEEE TMI paper
This figure shows:
The whole tumor (yellow) visible in T2-FLAIR (Fig. A).
The tumor core (red) visible in T2 (Fig. B).
The enhancing tumor structures (light blue) visible in T1Gd, surrounding the cystic/necrotic components of the core (green) (Fig. C).
The segmentations are combined to generate the final labels of the tumor sub-regions (Fig. D): edema (yellow), non-enhancing solid core (red), necrotic/cystic core (green), enhancing core (blue).
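The combination of raw labels into nested sub-regions can be sketched in a few lines of NumPy. This assumes the standard BraTS label convention (0 = background, 1 = necrotic/non-enhancing core, 2 = peritumoral edema, 4 = enhancing tumor), under which the whole tumor is the union of all tumor labels, the tumor core excludes the edema, and the enhancing tumor is label 4 alone.

```python
import numpy as np

# Toy 1D label array standing in for a 3D segmentation volume.
# BraTS convention (an assumption here): 0 = background,
# 1 = necrotic/non-enhancing core, 2 = edema, 4 = enhancing tumor.
labels = np.array([0, 1, 2, 4, 1, 0, 4, 2])

# The three nested evaluation regions are unions of the raw labels:
whole_tumor = np.isin(labels, [1, 2, 4])  # WT: all tumor voxels
tumor_core = np.isin(labels, [1, 4])      # TC: WT minus the edema
enhancing = labels == 4                   # ET: enhancing voxels only

print(whole_tumor.sum(), tumor_core.sum(), enhancing.sum())  # 6 4 2
```

The same binary masks are what a BraTS segmentation network is typically trained and evaluated against, one channel per region.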
The MRI images are then run through a neural network to generate the labels shown in the figure above.
Current neuroimaging assessment is largely qualitative, or quantitative with low accuracy, and patient diagnosis remains poor. It is still difficult to judge which segmentation strategies work best, how different algorithms compare, and how effective automated algorithms are relative to human experts. The International Brain Tumor Segmentation Challenge aims to resolve these questions by assessing state-of-the-art machine learning methods for brain tumor image analysis in multi-parametric MRI (mpMRI) scans. The challenge is now in its tenth year.
Dataset
For each dataset a user elects to use, the user is responsible for checking if the dataset license is fit for their intended purpose.
The dataset comes from http://braintumorsegmentation.org/
Source: BraTS 2021 dataset from https://synapse.org/
Challenge: Complex and heterogeneously-located targets
Target: Glioma segmentation (necrotic/active tumour and oedema)
Modality: Multimodal, multisite MRI data (T1, T1Gd, T2, T2-FLAIR)