MONAI Toolkit
MONAI Version 2.0

System Requirements

This section outlines the necessary prerequisites and recommended hardware for using MONAI Toolkit effectively.

Before installing the MONAI Toolkit Container, ensure the host system has the following software installed and meets the requirements below:

MONAI Toolkit 2.0 is based on the NVIDIA PyTorch container release 24.03.02. Release 24.03 is based on CUDA 12.4.0.41, which requires NVIDIA Driver release 545 or later. However, if you are running on a data center GPU (for example, T4), you can use NVIDIA driver release 470.57 (or later R470), 525.85 (or later R525), 535.86 (or later R535), or 545.23 (or later R545).

For a full list of supported software and specific versions that come packaged with the frameworks based on the container image, see the Framework Containers Support Matrix and the NVIDIA Container Toolkit Documentation.

For optimal performance and compatibility with MONAI Toolkit 2.0, we recommend the following software versions:

  • Docker Engine: 20.10 or later

  • NVIDIA GPU Drivers: 545.23 or later

  • NVIDIA Container Toolkit: 1.13.0 or later
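As a quick sanity check, a small POSIX shell script along these lines can compare installed versions against the recommendations above. This is a sketch, not an official tool: the version-extraction commands (`nvidia-smi --query-gpu=...`, `docker version --format ...`, `nvidia-ctk --version`) are standard, but their output formats can vary by distribution and release, so adjust the parsing as needed.

```shell
#!/bin/sh
# Sketch: compare installed versions against the MONAI Toolkit 2.0
# recommendations listed on this page. Each check is skipped if the
# corresponding tool is not on PATH.

MIN_DRIVER=545.23
MIN_DOCKER=20.10
MIN_NCT=1.13.0

# ver_ge A B — succeed if dot-separated version A >= version B
ver_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# check NAME CURRENT MINIMUM — print OK/FAIL for one component
check() {
  if ver_ge "$2" "$3"; then
    echo "OK   $1: $2 (>= $3)"
  else
    echo "FAIL $1: $2 (< $3)"
  fi
}

command -v nvidia-smi >/dev/null 2>&1 &&
  check "NVIDIA driver" \
    "$(nvidia-smi --query-gpu=driver_version --format=csv,noheader | head -n1)" \
    "$MIN_DRIVER"

command -v docker >/dev/null 2>&1 &&
  check "Docker Engine" \
    "$(docker version --format '{{.Server.Version}}' 2>/dev/null)" \
    "$MIN_DOCKER"

command -v nvidia-ctk >/dev/null 2>&1 &&
  check "NVIDIA Container Toolkit" \
    "$(nvidia-ctk --version | grep -o '[0-9][0-9.]*' | head -n1)" \
    "$MIN_NCT"
```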

Note

It’s important to regularly check for and install the latest versions of these software components, as they frequently receive updates that can improve performance and security.

The hardware requirements for MONAI Toolkit depend on the specific models and datasets you’ll be working with. Here are the general recommendations:

MONAI Toolkit has been validated on V100 and A100 GPUs, and is generally expected to work on other NVIDIA data center GPUs such as the H100, A40, A30, and A10.

The following system configuration is recommended to achieve reasonable training and inference performance with MONAI Toolkit and the models it provides:

  • At least 12 GB of GPU memory, and up to 32 GB depending on the network or model you use

  • At least 32 GB of system RAM; more is better

  • At least 50 GB of SSD disk space
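The RAM and disk recommendations can be checked from the standard library alone; GPU memory is omitted here because querying it requires a vendor tool such as nvidia-smi. This is a sketch for POSIX systems (the `os.sysconf` keys used are not available on Windows), and the 32 GB / 50 GB thresholds are simply the recommendations from the list above.

```python
# Sketch: report host RAM and free disk space against the minimums
# recommended on this page (32 GB system RAM, 50 GB SSD disk space).
# POSIX-only: os.sysconf("SC_PHYS_PAGES") is unavailable on Windows.
import os
import shutil

MIN_RAM_GB = 32
MIN_DISK_GB = 50

def ram_gb():
    """Total physical memory in GB (POSIX systems)."""
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024**3

def free_disk_gb(path="/"):
    """Free space in GB on the filesystem holding `path`."""
    return shutil.disk_usage(path).free / 1024**3

if __name__ == "__main__":
    print(f"RAM:  {ram_gb():.1f} GB (recommended >= {MIN_RAM_GB} GB)")
    print(f"Disk: {free_disk_gb():.1f} GB free (recommended >= {MIN_DISK_GB} GB)")
```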

Note

Performance can significantly improve with better hardware. For instance, more GPU RAM can allow for larger batch sizes during training.

For a complete list of supported hardware, please see the NVIDIA AI Enterprise Product Support Matrix.

Warning

Known limitations: Some very large models or datasets may require more than 32 GB of GPU RAM. If you encounter out-of-memory errors, consider using a GPU with more memory, or techniques such as gradient accumulation to train with smaller batch sizes.
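Gradient accumulation simulates a large batch by summing gradients over several small micro-batches and applying one parameter update per group. The following is a framework-agnostic sketch using a toy 1-D least-squares model with hand-written gradients; in a deep-learning framework the same pattern is "backward every step, optimizer step every N steps".

```python
# Sketch of gradient accumulation: sum per-sample gradients over
# `accum_steps` micro-batches, then apply a single averaged update.
# Toy model y = w * x with squared-error loss, gradients by hand.

def grad(w, x, y):
    """d/dw of the squared error (w*x - y)**2 for one sample."""
    return 2 * (w * x - y) * x

def train(data, accum_steps=4, lr=0.01, epochs=200):
    w = 0.0
    for _ in range(epochs):
        acc, n = 0.0, 0
        for x, y in data:
            acc += grad(w, x, y)          # accumulate, no update yet
            n += 1
            if n == accum_steps:          # effective batch = accum_steps
                w -= lr * acc / accum_steps
                acc, n = 0.0, 0
        if n:                             # flush a partial final batch
            w -= lr * acc / n
    return w

# Data generated from y = 3x, so training should recover w close to 3.
data = [(x, 3.0 * x) for x in range(1, 9)]
w = train(data)
```

The memory benefit comes from only ever materializing a micro-batch at a time while the update statistics match a batch of `accum_steps` samples.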

Last updated on Aug 8, 2024.