Get Started with Text Curation

This guide provides step-by-step instructions for setting up NeMo Curator’s text curation capabilities. Follow these instructions to prepare your environment and execute your first text curation pipeline.

Prerequisites

To use NeMo Curator’s text curation modules, ensure your system meets the following requirements:

  • Python 3.10, 3.11, or 3.12
    • packaging >= 22.0
  • uv (for package management and installation)
  • Ubuntu 22.04/20.04
  • NVIDIA GPU (optional for most text modules, required for GPU-accelerated operations)
    • Volta™ or higher (compute capability 7.0+)
    • CUDA 12 (or later)

If uv is not installed, refer to the Installation Guide for setup instructions, or install it quickly using:

$ curl -LsSf https://astral.sh/uv/0.8.22/install.sh | sh
$ source $HOME/.local/bin/env

Installation Options

You can install NeMo Curator using one of several methods. The simplest is to install the text modules with CUDA 12 support:

$ uv pip install "nemo-curator[text_cuda12]"

For other modalities (image, video) or all modules, see the Installation Guide.

Prepare Your Environment

NeMo Curator uses a pipeline-based architecture for processing text data. Before running your first pipeline, ensure you have a proper directory structure:

Set Up Data Directory

Create the following directories for your text datasets:

$ mkdir -p ~/nemo_curator/data/sample
$ mkdir -p ~/nemo_curator/data/curated

For this example, you need sample JSONL files in ~/nemo_curator/data/sample/. Each line should be a JSON object with at least text and id fields. You can create test data or refer to Read Existing Data and Data Loading for information on downloading data.
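If you do not have data handy, you can generate a small throwaway sample with the Python standard library. This sketch writes one JSONL file into the sample directory created above; the filenames, IDs, and contents are illustrative only:

```python
import json
from pathlib import Path

# Directory the pipeline reads from (created in the previous step)
sample_dir = Path.home() / "nemo_curator" / "data" / "sample"
sample_dir.mkdir(parents=True, exist_ok=True)

# A few illustrative records; each line is one JSON object
# with the "text" and "id" fields the reader expects.
records = [
    {"id": "doc-0001", "text": "NeMo Curator processes text at scale. " * 20},
    {"id": "doc-0002", "text": "Short snippet."},  # likely dropped by the word-count filter
    {"id": "doc-0003", "text": "Mostly symbols!!! ### $$$ %%% ***"},
]

out_path = sample_dir / "sample_00.jsonl"
with open(out_path, "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

print(f"Wrote {len(records)} records to {out_path}")
```

The second and third records are intentionally low quality so you can see the filtering stages in the example below remove them.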

Set your HuggingFace token to avoid rate limiting when downloading models or datasets:

export HF_TOKEN="your_token_here"

Without a token, repeated downloads from Hugging Face may result in 429 Client Error (rate limiting). Get a free token at huggingface.co/settings/tokens.
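Before starting a run, you can confirm from Python that the token is visible to the process (a minimal sketch; Hugging Face client libraries read the `HF_TOKEN` environment variable):

```python
import os

# Hugging Face client libraries pick up HF_TOKEN from the environment.
token = os.environ.get("HF_TOKEN")
if token:
    print("HF_TOKEN is set; authenticated downloads will be used.")
else:
    print("HF_TOKEN is not set; repeated downloads may hit 429 rate limits.")
```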

Basic Text Curation Example

Here’s a simple example to get started with NeMo Curator’s pipeline-based architecture:

from nemo_curator.pipeline import Pipeline
from nemo_curator.stages.text.io.reader import JsonlReader
from nemo_curator.stages.text.io.writer import JsonlWriter
from nemo_curator.stages.text.modules.score_filter import ScoreFilter
from nemo_curator.stages.text.filters import WordCountFilter, NonAlphaNumericFilter

# Create a pipeline for text curation
pipeline = Pipeline(
    name="text_curation_pipeline",
    description="Basic text quality filtering pipeline"
)

# Add stages to the pipeline
pipeline.add_stage(
    JsonlReader(
        file_paths="~/nemo_curator/data/sample/",
        files_per_partition=4,
        fields=["text", "id"]
    )
)

# Add quality filtering stages
pipeline.add_stage(
    ScoreFilter(
        filter_obj=WordCountFilter(min_words=50, max_words=100000),
        text_field="text",
        score_field="word_count"
    )
)

pipeline.add_stage(
    ScoreFilter(
        filter_obj=NonAlphaNumericFilter(max_non_alpha_numeric_to_text_ratio=0.25),
        text_field="text",
        score_field="non_alpha_score"
    )
)

# Write the curated results
pipeline.add_stage(
    JsonlWriter("~/nemo_curator/data/curated")
)

# Execute the pipeline
results = pipeline.run()

print(f"Pipeline completed successfully! Processed {len(results) if results else 0} tasks.")
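To build intuition for what the two filtering stages do, here is a plain-Python sketch of the same two checks. This mirrors the thresholds used above but is not NeMo Curator's implementation; the helper names are made up, and the exact tokenization and character classes the real filters use may differ:

```python
def word_count_ok(text, min_words=50, max_words=100000):
    """Keep documents whose whitespace-token count is within bounds."""
    n = len(text.split())
    return min_words <= n <= max_words

def non_alpha_ratio_ok(text, max_ratio=0.25):
    """Keep documents where at most 25% of non-whitespace characters
    are non-alphanumeric (punctuation, symbols, etc.)."""
    if not text:
        return False
    non_alnum = sum(1 for ch in text if not ch.isalnum() and not ch.isspace())
    return non_alnum / len(text) <= max_ratio

docs = [
    "word " * 60,   # long enough and clean: kept
    "too short",    # fails the word-count check
    "#### " * 80,   # enough tokens, but symbol-heavy: dropped
]
kept = [d for d in docs if word_count_ok(d) and non_alpha_ratio_ok(d)]
print(f"Kept {len(kept)} of {len(docs)} documents")
```

Documents that fail either check are dropped, which is why the short and symbol-heavy sample records never reach the output directory.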

Next Steps

Explore the Text Curation documentation for more advanced filtering techniques, GPU acceleration options, and large-scale processing workflows.