Install NeMo Framework#

The NeMo Framework can be installed in the following ways, depending on your needs:

  • Container Runtime (Docker/Enroot). NeMo Framework supports Large Language Models (LLMs), Multimodal Models (MMs), Automatic Speech Recognition (ASR), and Text-to-Speech (TTS) modalities within a single consolidated container. This is the recommended method for the LLM and MM domains.

  • Conda/Pip: If you are using an NVIDIA PyTorch container as the base, this is the recommended method for all domains.

Below are the installation instructions specific to each domain:

NeMo containers are released alongside NeMo version updates. You can find additional information about released containers on the NeMo releases page.

Use a Pre-built Container (Recommended Method)

This is suitable for most users who want to use a stable version of NeMo. To get access to the container, follow these steps:

  1. Log in, or create a free account here: NVIDIA GPU Cloud (NGC).

  2. Once logged in, you can view all container releases here: NVIDIA NGC NeMo Framework.

  3. Set up your NGC credentials. For SLURM clusters, refer to Set Up the SLURM Cluster Credential at the bottom of this tab for detailed steps.

  4. In your terminal, run the following code:

docker pull nvcr.io/nvidia/nemo:25.04

Please use the latest tag in the form of “yy.mm.(patch)”.

If you are interested in the latest experimental features, get the container with the “dev” tag.

Build a Container from the Latest GitHub Branch (Alternate Method)

To build a NeMo container using a Dockerfile from a NeMo GitHub branch, clone the branch and run the following code:

DOCKER_BUILDKIT=1 docker build -f Dockerfile -t nemo:latest .

If you choose to work with the main branch, we recommend using NVIDIA’s PyTorch container version 23.10-py3 as the base, and then installing from source on GitHub.

docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g \
                    -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 \
                    --ulimit stack=67108864 --device=/dev/snd nvcr.io/nvidia/pytorch:23.10-py3
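Once inside the running container, NeMo can be installed from the mounted source checkout. A minimal sketch, assuming the repository was mounted at /NeMo as in the command above:

```shell
# Inside the running container: install NeMo from the mounted source tree.
# The quotes keep the extras specifier from being treated as a shell glob.
cd /NeMo
pip install -e ".[all]"   # editable install with all domain extras
```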

Set Up the SLURM Cluster Credential

  1. Authenticate with the nvcr.io registry.

    Note: Generating a new key resets access for the NGC CLI.

  2. Set the credentials for the container registry.
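A typical way to complete both steps is to run docker login against nvcr.io, using the literal username $oauthtoken and your NGC API key as the password. Single-quote $oauthtoken so the shell does not expand it as a variable:

```shell
# Log in to the NGC container registry.
# '$oauthtoken' is a literal username, not a shell variable -- keep the single quotes.
docker login nvcr.io --username '$oauthtoken'
# When prompted for a password, paste your NGC API key.
```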

NeMo containers can also be used with Enroot, which is often preferred on HPC and SLURM clusters.

Use a Pre-built Container with Enroot

  1. Log in, or create a free account here: NVIDIA GPU Cloud (NGC).

  2. Once logged in, you can view all container releases here: NVIDIA NGC NeMo Framework.

  3. Set up your NGC credentials. For SLURM clusters, refer to Set Up the SLURM Cluster Credential at the bottom of this tab for detailed steps.

  4. Pull and convert the container image for Enroot:

enroot import docker://nvcr.io/nvidia/nemo:25.04

  5. Create and start an Enroot container:

enroot create -n nemo nvcr.io+nvidia+nemo+25.04.sqsh
enroot start nemo

Please use the latest tag in the form of “yy.mm.(patch)”.

Set Up the SLURM Cluster Credential

  1. Authenticate with the nvcr.io registry.

    Note: Generating a new key resets access for the NGC CLI.

  2. Create a credential file:

    • Do this on a cluster login node.

    • Create ~/.config/enroot/.credentials with:

      machine [ARTIFACTORY-URL] login [USERNAME] password [ENCRYPTED-PASSWORD]
      machine [GITLAB-URL] login [USERNAME] password [TOKEN]
      machine [NGC-REGISTRY-URL] login $oauthtoken password [NGC-API-KEY]
      machine [AUTH-URL] login $oauthtoken password [NGC-API-KEY]
      
  3. Set permissions:

    chmod 0600 ~/.config/enroot/.credentials
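You can confirm the file is private to your user with a quick check (`stat -c` assumes GNU coreutils, which is typical on Linux clusters):

```shell
# Print the octal mode of the credentials file; expect 600.
stat -c '%a' ~/.config/enroot/.credentials
```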
    

Important

We strongly recommend that you start with a base NVIDIA PyTorch container: nvcr.io/nvidia/pytorch:24.07-py3, and install NeMo and its dependencies inside it. If you do so, run the container and skip directly to step 3 for the pip instructions.

  1. Create a fresh Conda environment:

conda create --name nemo python==3.10.12
conda activate nemo

  2. Install PyTorch using their configurator:

conda install pytorch==2.2.0 torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia

The command to install PyTorch may vary depending on your system. Use the configurator linked above to find the right command for your system.
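After installation, a quick sanity check prints the PyTorch version and whether CUDA is visible (the second value depends on your GPU and driver setup):

```shell
# Verify the PyTorch install and CUDA visibility.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```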

Then, install NeMo via pip or from source. We do not provide NeMo on Conda-forge or any other Conda channel.

  3. (Option 1) Install NeMo via pip.

To install the nemo_toolkit, use the following installation method:

Install all Domains

apt-get update && apt-get install -y libsndfile1 ffmpeg
pip install Cython packaging
pip install nemo_toolkit['all']

Install a Specific Domain

If you only need Speech AI functionality, you can install the asr and/or tts components individually. You must first install the base nemo_toolkit dependencies as follows:

apt-get update && apt-get install -y libsndfile1 ffmpeg
pip install Cython packaging

Then, you can run the following domain-specific commands:

pip install nemo_toolkit['asr'] # For ASR

pip install nemo_toolkit['tts'] # For TTS

To install the LLM domain, use the command pip install nemo_toolkit['all'].

  4. (Option 2) Install NeMo and Megatron Core from source.

export BRANCH="main"

apt-get update && apt-get install -y libsndfile1 ffmpeg
pip install Cython packaging
pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]

Depending on your shell, you may need to quote the specifier as “nemo_toolkit[all]” in the above command so it is not interpreted as a glob pattern.

  5. Install additional dependencies (only for LLM, Multimodal domains):

If you work with the LLM and Multimodal domains, there are three additional dependencies: NVIDIA Megatron Core, NVIDIA Apex and NVIDIA Transformer Engine.

Follow the latest instructions at GitHub: Install LLM and Multimodal dependencies.
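As one illustrative sketch (version pins and build flags change often, so treat the GitHub instructions above as authoritative): Megatron Core is published on PyPI, while Apex and Transformer Engine are typically built from or pulled against their GitHub/PyPI sources:

```shell
# Illustrative only -- follow the linked GitHub instructions for current pins.
pip install megatron-core                     # Megatron Core from PyPI

# Apex: build with C++ and CUDA extensions from source.
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-build-isolation \
    --config-settings "--build-option=--cpp_ext" \
    --config-settings "--build-option=--cuda_ext" ./
cd ..

# Transformer Engine (PyTorch build).
pip install "transformer_engine[pytorch]"
```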

Note

Apex and Transformer Engine are optional for LLM collections, but recommended for optimal performance. However, they are required dependencies for MM collections.

Note

Currently, RMSNorm requires Apex to be installed. Support for using RMSNorm without Apex is planned for a future release.