Get Started

Install (All Modalities)

This guide covers installing NeMo Curator with support for all modalities and verifying your installation is working correctly. For a single-modality install or a 30-minute walkthrough, start with one of the modality quickstarts instead.

Before You Start

System Requirements

For comprehensive system requirements and production deployment specifications, refer to Production Deployment Requirements.

Quick Start Requirements:

  • OS: Ubuntu 24.04/22.04/20.04 (recommended)
  • Python: 3.10, 3.11, or 3.12
  • Memory: 16GB+ RAM for basic text processing
  • GPU (optional): NVIDIA GPU with 16GB+ VRAM for acceleration
  • CUDA 12 (required for audio_cuda12, video_cuda12, image_cuda12, and text_cuda12 extras)

Python 3.10 support will be removed in NeMo Curator 26.06. 26.04 is the last release to support Python 3.10. If you are setting up a new environment, install a newer supported Python version (3.11+) so you do not need to upgrade when moving to 26.06. See the 26.04 release notes for details.
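If you script your environment setup, a small guard can fail fast on an interpreter that is about to lose support. This is an illustrative sketch (the function name is mine; the `(3, 11)` floor mirrors the deprecation note above):

```python
import sys

# Minimum (major, minor) to target for new environments, per the deprecation note above.
MIN_VERSION = (3, 11)

def python_version_ok(version_info=None, minimum=MIN_VERSION):
    """Return True when the interpreter meets the minimum (major, minor) version."""
    version_info = sys.version_info if version_info is None else version_info
    return tuple(version_info[:2]) >= minimum

ok = python_version_ok()
print("Python version OK for upcoming releases:", ok)
```

If the check fails, create the virtual environment with a newer interpreter (for example, `uv venv --python 3.11`).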

Development vs Production

Use Case             Requirements                                 See
Local Development    Minimum specs listed above                   Continue below
Production Clusters  Detailed hardware, network, storage specs    Deployment Requirements
Multi-node Setup     Advanced infrastructure planning             Deployment Options

Installation Methods

Choose one of the following installation methods based on your needs:

Docker is the recommended installation method for video and audio workflows. The NeMo Curator container includes FFmpeg (with NVENC support) pre-configured, avoiding manual dependency setup. Refer to the Container Installation tab below.

Install NeMo Curator from the Python Package Index using uv for proper dependency resolution.

  1. Install uv:

    $ curl -LsSf https://astral.sh/uv/0.8.22/install.sh | sh
    $ source $HOME/.local/bin/env
  2. Create and activate a virtual environment:

    $ uv venv
    $ source .venv/bin/activate
  3. Install NeMo Curator:

    $ uv pip install torch wheel_stub psutil setuptools setuptools_scm
    $ echo "transformers==4.55.2" > override.txt
    $ uv pip install --no-build-isolation "nemo-curator[all]" --override override.txt

Install FFmpeg and Encoders (Required for Video)

Curator’s video pipelines rely on FFmpeg for decoding and encoding. If you plan to encode clips (using --transcode-encoder h264_nvenc or --transcode-encoder libvpx-vp9), install FFmpeg with NVENC and libvpx-vp9 support. The maintained install script bundles both.

Use the maintained script in the repository to build and install FFmpeg with NVIDIA NVENC and libvpx-vp9 support. The script enables --enable-cuda-nvcc, --enable-libnpp, and --enable-libvpx.

$ curl -fsSL https://raw.githubusercontent.com/NVIDIA-NeMo/Curator/main/docker/common/install_ffmpeg.sh -o install_ffmpeg.sh
$ chmod +x install_ffmpeg.sh
$ sudo bash install_ffmpeg.sh

FFmpeg build requires CUDA toolkit (nvcc): If you encounter ERROR: failed checking for nvcc during FFmpeg installation, ensure that the CUDA toolkit is installed and nvcc is available on your PATH. You can verify with nvcc --version. If using the NeMo Curator container, FFmpeg is pre-installed with NVENC support.

Processing H.264/HEVC/AV1 inputs? You might still need a software decoder — even with NVENC/NVDEC.

Curator’s pipeline runs ffprobe for metadata extraction inside CPU-only Ray actors (VideoReader and ClipWriter). Those actors don’t have GPU visibility, so the bundled h264_cuvid / hevc_cuvid / av1_cuvid decoders can’t be opened from there. Without a software decoder, ffprobe exits non-zero and your h264/hevc/av1 inputs are silently skipped (you’ll see a SoftwareCodecMissingError in the logs).

Recommended fix: run the bundled installer inside the container — no image rebuild needed:

$ bash /opt/Curator/docker/common/install_h264_support.sh

See Software H.264/HEVC/AV1 Codec Support below for the full picture (other paths, license notes, opt-in libopenh264 encoder).

Software H.264/HEVC/AV1 Codec Support (Advanced)

Curator’s default FFmpeg build deliberately ships NVDEC-only decoders for h264, hevc, and av1, and excludes software H.264 encoders (libopenh264, libx264, libx265). This keeps the codec footprint tight and routes every H.264/HEVC/AV1 decode through the GPU.

You may need to add software codec support in two cases:

  • H.264 inputs in CPU-only pipeline stages. VideoReader and ClipWriter invoke ffprobe from CPU-only Ray actors that can’t see the GPU; they need a software h264/hevc/av1 decoder to extract metadata. Without it you’ll get a SoftwareCodecMissingError pointing back here.
  • H.264 software encoding (for example, on GPUs without an NVENC encoder block such as A100 or H100, when VP9 isn’t acceptable).

The repository ships a runtime opt-in script that recompiles FFmpeg with software h264/hevc/av1 decoders enabled, optionally including the libopenh264 encoder. It runs inside an existing container — no image rebuild required.

# Inside the container — adds h264/hevc/av1 software decoders only (LGPLv3):
$ bash /opt/Curator/docker/common/install_h264_support.sh

# Same, plus the libopenh264 software H.264 encoder, so --transcode-encoder=libopenh264 works:
$ bash /opt/Curator/docker/common/install_h264_support.sh --with-libopenh264

The build takes ~5–10 minutes, replaces /usr/local/bin/{ffmpeg,ffprobe} in place, and pins to the same FFmpeg tag as the image build. Script source: docker/common/install_h264_support.sh.

License notice: the default mode adds only FFmpeg-internal decoders (LGPL). With --with-libopenh264 the binary additionally links Cisco’s OpenH264 (BSD-2-Clause + Cisco-distributed binary license — see https://www.openh264.org/BINARY_LICENSE.txt). You are responsible for any license obligations the resulting binaries impose on your distribution.

Option 2: Use the System FFmpeg

If you’re not using the Curator container, most Linux distributions ship FFmpeg with libx264 (and sometimes libopenh264) preinstalled:

$ sudo apt-get install -y ffmpeg
$ ffmpeg -hide_banner -encoders | grep -E "libx264|libopenh264"

Make sure the ffmpeg on your PATH is the one you want — it must shadow Curator’s bundled build.
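If you prefer to check for the encoders programmatically rather than eyeballing grep output, you can parse the listing that `ffmpeg -hide_banner -encoders` prints. This is an illustrative sketch, not part of Curator: the function name and the sample text are assumptions, based on the listing format FFmpeg uses (a capability-flag column followed by the encoder name).

```python
import re

def encoder_names(ffmpeg_encoders_output: str) -> set[str]:
    """Extract encoder names from `ffmpeg -hide_banner -encoders` output.

    Listing lines look like ' V....D libx264  ...': a six-character capability
    column, whitespace, then the encoder name.
    """
    names = set()
    for line in ffmpeg_encoders_output.splitlines():
        m = re.match(r"\s*[VASFXBD.]{6,7}\s+([A-Za-z0-9_\-]+)", line)
        if m:
            names.add(m.group(1))
    return names

# Sample output fragment (illustrative); in practice feed this from
# subprocess.run(["ffmpeg", "-hide_banner", "-encoders"], ...).stdout
sample = """\
 Encoders:
 V....D libx264              libx264 H.264 / AVC / MPEG-4 AVC
 V....D libopenh264          OpenH264 H.264 / AVC / MPEG-4 AVC
 A....D aac                  AAC (Advanced Audio Coding)
"""
found = encoder_names(sample)
print("libx264" in found, "libopenh264" in found)
```

The same pattern works for `ffmpeg -hide_banner -decoders` if you want to confirm a software h264/hevc/av1 decoder is present before running CPU-only pipeline stages.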

Option 3: Edit install_ffmpeg.sh and Rebuild the Image

For users distributing customized images, edit docker/common/install_ffmpeg.sh before building the container:

  • For software h264/hevc/av1 decoders: append h264,hevc,av1 to the --enable-decoder=... list.
  • For libopenh264 encoder: add libopenh264-dev to the apt list, libopenh264 to --enable-encoder=..., and --enable-libopenh264 to the configure flags.
  • For libx264 encoder: add libx264-dev to the apt list and --enable-libx264 --enable-gpl to the configure flags. Note that --enable-gpl makes the resulting FFmpeg binary GPL-licensed.

Then rebuild your image.

Use the Encoder in ClipTranscodingStage

libopenh264 is accepted by ClipTranscodingStage out of the box. At setup time, the stage probes the local FFmpeg build and raises a clear error pointing back to this section if the encoder is not actually compiled in. Once your FFmpeg build includes it, just pass:

$ python video_split_clip_example.py ... --transcode-encoder libopenh264

For other custom encoders not in SUPPORTED_ENCODERS (for example, libx264), edit nemo_curator/stages/video/clipping/clip_extraction_stages.py to extend the tuple, and add the encoder name to the --transcode-encoder argparse choices list in tutorials/video/getting-started/video_split_clip_example.py:

SUPPORTED_ENCODERS = ("h264_nvenc", "libvpx-vp9", "libopenh264", "libx264")  # add yours
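To illustrate the second half of that edit, here is a hedged sketch of the standard argparse pattern the tutorial script's choices list follows (the argument name comes from the docs above; the surrounding script details are assumptions, not the actual tutorial code):

```python
import argparse

# Extended tuple, mirroring the SUPPORTED_ENCODERS edit described above.
SUPPORTED_ENCODERS = ("h264_nvenc", "libvpx-vp9", "libopenh264", "libx264")

parser = argparse.ArgumentParser()
parser.add_argument(
    "--transcode-encoder",
    choices=SUPPORTED_ENCODERS,  # unknown encoder names are rejected at parse time
    default="h264_nvenc",
)
args = parser.parse_args(["--transcode-encoder", "libx264"])
print(args.transcode_encoder)
```

Keeping the argparse `choices` list and the `SUPPORTED_ENCODERS` tuple in sync means an unsupported encoder fails with a clear parse-time error instead of deep inside the pipeline.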

Caveats

  • Default options for these encoders are not tuned. ClipTranscodingStage only sets quality presets for h264_nvenc and libvpx-vp9. Other encoders run with FFmpeg defaults, which may produce different quality/size trade-offs than you expect — see Configure encoders for how to pass an explicit bitrate.
  • The NeMo Curator team does not test custom encoder configurations. Issues filed against custom encoder builds may be closed.

Package Extras

NeMo Curator provides several installation extras to install only the components you need:

Extra             Installation Command                                              Description
text_cpu          uv pip install nemo-curator[text_cpu]                             CPU-only text processing and filtering
text_cuda12       uv pip install nemo-curator[text_cuda12]                          GPU-accelerated text processing with RAPIDS
audio_cpu         uv pip install nemo-curator[audio_cpu]                            CPU-only audio curation with NeMo Toolkit ASR
audio_cuda12      uv pip install nemo-curator[audio_cuda12]                         GPU-accelerated audio curation; with uv, requires the transformers==4.55.2 override
image_cpu         uv pip install nemo-curator[image_cpu]                            CPU-only image processing
image_cuda12      uv pip install nemo-curator[image_cuda12]                         GPU-accelerated image processing with NVIDIA DALI
video_cpu         uv pip install nemo-curator[video_cpu]                            CPU-only video processing
video_cuda12      uv pip install --no-build-isolation nemo-curator[video_cuda12]    GPU-accelerated video processing with CUDA libraries; requires FFmpeg and additional build dependencies with uv
inference_server  uv pip install nemo-curator[inference_server]                     Ray Serve + vLLM for serving LLMs alongside curation pipelines
sdg_cpu           uv pip install nemo-curator[sdg_cpu]                              CPU-only synthetic data generation with Data Designer
sdg_cuda12        uv pip install nemo-curator[sdg_cuda12]                           GPU-accelerated synthetic data generation with local inference server support

Development Dependencies: For development tools (pre-commit, ruff, pytest), use uv sync --group dev --group linting --group test instead of pip extras. Development dependencies are managed as dependency groups, not optional dependencies.

pip is not supported for installing all extras together. Some optional dependencies have conflicting transitive version requirements (for example, nemo-toolkit[asr] and vllm require incompatible versions of transformers). NeMo Curator uses uv dependency overrides to resolve these conflicts, which pip does not support. If you must use pip, install only one modality extra at a time (for example, pip install nemo-curator[text_cpu]). For multi-modality installations, use uv or the NeMo Curator container.


Installation Verification

After installation, verify that NeMo Curator is working correctly:

1. Basic Import Test

# Test basic imports
import nemo_curator
print(f"NeMo Curator version: {nemo_curator.__version__}")

# Test core modules
from nemo_curator.pipeline import Pipeline
from nemo_curator.tasks import DocumentBatch
print("✓ Core modules imported successfully")

2. GPU Availability Check

If you installed GPU support, verify GPU access:

# Check GPU availability
try:
    import torch

    if torch.cuda.is_available():
        print(f"✓ GPU available: {torch.cuda.get_device_name(0)}")
        print(f"✓ GPU memory: {torch.cuda.get_device_properties(0).total_memory / 1e9:.1f} GB")
    else:
        print("⚠ No GPU detected")

    # Check cuDF for GPU deduplication
    import cudf
    print("✓ cuDF available for GPU-accelerated deduplication")
except ImportError as e:
    print(f"⚠ Some GPU modules not available: {e}")

3. Run a Quickstart Tutorial

Try a modality-specific quickstart to see NeMo Curator in action: