Get Started with Image Curation

This guide provides step-by-step instructions for setting up NeMo Curator’s image curation capabilities. Follow these instructions to prepare your environment and execute your first image curation pipeline.

Prerequisites

Ensure your environment meets the following prerequisites for NeMo Curator image curation modules:

  • Python 3.10, 3.11, or 3.12
    • packaging >= 22.0
  • Ubuntu 22.04/20.04
  • NVIDIA GPU (required for all image modules)
    • Volta™ or higher (compute capability 7.0+)
    • CUDA 12 (or above)

If uv is not installed, refer to the Installation Guide for setup instructions, or install it quickly with:

$ curl -LsSf https://astral.sh/uv/0.8.22/install.sh | sh
$ source $HOME/.local/bin/env

Installation Options

Install the NeMo Curator image modules from PyPI:

$ uv pip install "nemo-curator[image_cuda12]"

Download Sample Configuration

NeMo Curator provides a working image curation example in the Image Curation Tutorial. You can adapt this pipeline for your own datasets.

Set Up Data Directory

Create directories to store your image datasets and models:

$ mkdir -p ~/nemo_curator/data/tar_archives
$ mkdir -p ~/nemo_curator/data/curated
$ mkdir -p ~/nemo_curator/models

For this example, you’ll need:

  • Tar Archives: JPEG images in .tar files (text and JSON files are ignored during loading)
  • Model Directory: CLIP and classifier model weights (downloaded automatically on first run)
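If your images are still loose files, you can pack them into WebDataset-style shards yourself. The sketch below uses only Python's standard library; `~/my_images` is a hypothetical source directory, not something the pipeline provides:

```python
# Minimal sketch: pack loose JPEGs into a WebDataset-style .tar shard.
# The source directory is hypothetical -- point it at your own images.
import tarfile
from pathlib import Path

def pack_shard(source_dir: str, shard_path: str) -> int:
    """Add every .jpg in source_dir to a new tar shard; return the count."""
    source = Path(source_dir).expanduser()
    shard = Path(shard_path).expanduser()
    shard.parent.mkdir(parents=True, exist_ok=True)
    count = 0
    with tarfile.open(shard, "w") as tf:
        for jpg in sorted(source.glob("*.jpg")):
            tf.add(jpg, arcname=jpg.name)
            count += 1
    return count

# Example call (hypothetical source directory):
# pack_shard("~/my_images", "~/nemo_curator/data/tar_archives/shard-000000.tar")
```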

Basic Image Curation Example

Here’s a simple example to get started with NeMo Curator’s image curation pipeline:

CPU Memory Considerations

Image loading and decoding happen in CPU memory before GPU processing. If you encounter out-of-memory errors during the ImageReaderStage, reduce the following parameters:

  • batch_size: Number of images per batch (reduce to 32-50 for systems with limited RAM)
  • num_threads: Parallel decoding threads (reduce to 4 for systems with limited RAM)
  • num_cpus: RayClient CPU allocation (reduce to 8-16 for systems with limited RAM)

The example below uses conservative defaults suitable for most systems. For high-memory systems, you can increase these values for better performance.

To configure Ray with limited CPU resources:

from nemo_curator.core.client import RayClient

ray_client = RayClient(num_cpus=8)  # Adjust based on available CPU cores
ray_client.start()
Then build and run the curation pipeline:

from nemo_curator.pipeline import Pipeline
from nemo_curator.backends.xenna import XennaExecutor
from nemo_curator.stages.file_partitioning import FilePartitioningStage
from nemo_curator.stages.image.io.image_reader import ImageReaderStage
from nemo_curator.stages.image.embedders.clip_embedder import ImageEmbeddingStage
from nemo_curator.stages.image.filters.aesthetic_filter import ImageAestheticFilterStage
from nemo_curator.stages.image.filters.nsfw_filter import ImageNSFWFilterStage
from nemo_curator.stages.image.io.image_writer import ImageWriterStage

# Create image curation pipeline
pipeline = Pipeline(name="image_curation", description="Basic image curation with quality filtering")

# Stage 1: Partition tar files for parallel processing
pipeline.add_stage(FilePartitioningStage(
    file_paths="~/nemo_curator/data/tar_archives",  # Path to your tar archive directory
    files_per_partition=1,
    file_extensions=[".tar"],
))

# Stage 2: Read images from tar files using DALI
pipeline.add_stage(ImageReaderStage(
    batch_size=50,
    verbose=True,
    num_threads=4,
    num_gpus_per_worker=0.25,
))

# Stage 3: Generate CLIP embeddings for images
pipeline.add_stage(ImageEmbeddingStage(
    model_dir="~/nemo_curator/models",  # Directory containing model weights
    model_inference_batch_size=32,
    num_gpus_per_worker=0.25,
    remove_image_data=False,
    verbose=True,
))

# Stage 4: Filter by aesthetic quality (keep images with score >= 0.5)
pipeline.add_stage(ImageAestheticFilterStage(
    model_dir="~/nemo_curator/models",
    score_threshold=0.5,
    model_inference_batch_size=32,
    num_gpus_per_worker=0.25,
    verbose=True,
))

# Stage 5: Filter NSFW content (remove images with score >= 0.5)
pipeline.add_stage(ImageNSFWFilterStage(
    model_dir="~/nemo_curator/models",
    score_threshold=0.5,
    model_inference_batch_size=32,
    num_gpus_per_worker=0.25,
    verbose=True,
))

# Stage 6: Save curated images to new tar archives
pipeline.add_stage(ImageWriterStage(
    output_dir="~/nemo_curator/data/curated",
    images_per_tar=1000,
    remove_image_data=True,
    verbose=True,
))

# Execute the pipeline
executor = XennaExecutor()
pipeline.run(executor)

Expected Output

After running the pipeline, you’ll have:

~/nemo_curator/data/curated/
├── images-{hash}-000000.tar # Curated images (first shard)
├── images-{hash}-000000.parquet # Metadata for corresponding tar
├── images-{hash}-000001.tar # Curated images (second shard)
├── images-{hash}-000001.parquet # Metadata for corresponding tar
└── ... # Additional shards as needed

Output Format Details:

  • Tar Files: Contain high-quality .jpg files that passed both aesthetic and NSFW filtering
  • Parquet Files: Contain metadata for each corresponding tar file, including image paths, IDs, and processing scores
  • Naming Convention: Files use hash-based prefixes (e.g., images-a1b2c3d4e5f6-000000.tar) for uniqueness across distributed processing
  • Scores: Processing metadata includes aesthetic_score and nsfw_score stored in the Parquet files
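Once the pipeline finishes, a quick sanity check is to count the images inside each shard with Python's standard library. This is a sketch; the glob pattern assumes the hash-based naming convention described above:

```python
# Sanity-check curated shards by counting .jpg members in each tar.
import tarfile
from pathlib import Path

def count_images(shard_path) -> int:
    """Return the number of .jpg members in one curated tar shard."""
    with tarfile.open(shard_path) as tf:
        return sum(1 for member in tf.getmembers() if member.name.endswith(".jpg"))

if __name__ == "__main__":
    curated = Path("~/nemo_curator/data/curated").expanduser()
    for shard in sorted(curated.glob("images-*.tar")):
        print(f"{shard.name}: {count_images(shard)} images")
```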

Alternative: Using the Complete Tutorial

For a more comprehensive example that includes data download and additional configuration options, see:

$ # Download the complete tutorial
$ wget -O ~/nemo_curator/image_curation_example.py https://raw.githubusercontent.com/NVIDIA/NeMo-Curator/main/tutorials/image/getting-started/image_curation_example.py
$
$ # Run with your data
$ python ~/nemo_curator/image_curation_example.py \
>   --input-wds-dataset-dir ~/nemo_curator/data/tar_archives \
>   --output-dataset-dir ~/nemo_curator/data/curated \
>   --model-dir ~/nemo_curator/models \
>   --aesthetic-threshold 0.5 \
>   --nsfw-threshold 0.5

Next Steps

Explore the Image Curation documentation for more advanced processing techniques: