
Quality Assessment & Filtering


Use NeMo Curator’s tools and utilities to score and remove low-quality content with heuristics and ML classifiers, preparing your data for model training.

Large datasets often contain many documents considered “low quality.” In this context, “low quality” means data we do not want downstream models to learn from, and “high quality” is data we do want them to learn from. The metrics that define quality can vary widely.

How It Works

NeMo Curator’s filtering framework is built around several key components that work within the data processing architecture:

The ScoreFilter is at the center of filtering in NeMo Curator. It applies a filter to a document and optionally saves the score as metadata:

from nemo_curator.pipeline import Pipeline
from nemo_curator.stages.text.io.reader import JsonlReader
from nemo_curator.stages.text.io.writer import JsonlWriter
from nemo_curator.stages.text.modules import ScoreFilter
from nemo_curator.stages.text.filters import WordCountFilter

# Create pipeline
pipeline = Pipeline(name="quality_filtering")

# Load dataset
reader = JsonlReader(
    file_paths="books_dataset/*.jsonl",
    fields=["text", "id"]
)
pipeline.add_stage(reader)

# Create and apply filter
filter_stage = ScoreFilter(
    filter_obj=WordCountFilter(min_words=80),
    text_field="text",
    score_field="word_count",
)
pipeline.add_stage(filter_stage)

# Save filtered dataset
writer = JsonlWriter(path="long_books/")
pipeline.add_stage(writer)

# Execute pipeline (uses XennaExecutor by default)
results = pipeline.run()

Default Executor: When you call pipeline.run() without specifying an executor, NeMo Curator automatically uses XennaExecutor() as the default. You can optionally specify a different executor by passing it as a parameter: pipeline.run(executor=my_executor).
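As a minimal sketch of passing an executor explicitly, assuming XennaExecutor can be imported from nemo_curator.backends.xenna (the import path may differ in your installed version; check the API reference):

# Sketch only: the XennaExecutor import path below is an assumption;
# adjust it to match your installed NeMo Curator version.
from nemo_curator.backends.xenna import XennaExecutor

executor = XennaExecutor()

# Equivalent to pipeline.run() when XennaExecutor is the default
results = pipeline.run(executor=executor)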

The filter object implements two key methods (a custom-filter sketch follows the list):

  • score_document: Computes a quality score for a document
  • keep_document: Determines if a document should be kept based on its score
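For example, a custom filter can implement both methods on top of the filter base class. The sketch below assumes a DocumentFilter base class is importable from nemo_curator.stages.text.filters alongside the built-in WordCountFilter; treat the import path and base-class name as assumptions and verify them against your installed version.

# Hedged sketch of a custom filter; the DocumentFilter import path and exact
# interface are assumptions based on the built-in filters used above.
from nemo_curator.stages.text.filters import DocumentFilter


class AlphaRatioFilter(DocumentFilter):
    """Keep documents whose alphabetic-character ratio is high enough."""

    def __init__(self, min_alpha_ratio: float = 0.6):
        super().__init__()
        self._min_alpha_ratio = min_alpha_ratio

    def score_document(self, text: str) -> float:
        # Compute a quality score: fraction of alphabetic characters
        if not text:
            return 0.0
        return sum(c.isalpha() for c in text) / len(text)

    def keep_document(self, score: float) -> bool:
        # Keep the document only if its score clears the threshold
        return score >= self._min_alpha_ratio

A custom filter like this can then be wrapped in a ScoreFilter stage exactly as WordCountFilter is in the examples above.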

Filtering Approaches

NeMo Curator supports both heuristic filters (such as WordCountFilter) and ML classifier-based filters for scoring document quality.

Usage

NeMo Curator provides programmatic interfaces for document filtering through the Pipeline framework:

from nemo_curator.pipeline import Pipeline
from nemo_curator.stages.text.io.reader import JsonlReader
from nemo_curator.stages.text.io.writer import JsonlWriter
from nemo_curator.stages.text.modules import ScoreFilter
from nemo_curator.stages.text.filters import WordCountFilter

# Create and configure pipeline
pipeline = Pipeline(name="document_filtering")

# Add data loading
reader = JsonlReader(
    file_paths="/path/to/input/data/*.jsonl",
    fields=["text", "id"]
)
pipeline.add_stage(reader)

# Add filtering stage
filter_stage = ScoreFilter(
    filter_obj=WordCountFilter(min_words=80),
    text_field="text",
    score_field="word_count"
)
pipeline.add_stage(filter_stage)

# Add output stage
writer = JsonlWriter(path="/path/to/output/filtered/")
pipeline.add_stage(writer)

# Execute pipeline (uses XennaExecutor by default)
results = pipeline.run()

Best Practices

When filtering large datasets, consider these performance tips:

  1. Order matters: Place computationally inexpensive filters early in your pipeline
  2. Batch size tuning: Adjust batch sizes based on your hardware capabilities
  3. Use vectorization: Implement batched methods for compute-intensive filters (a generic sketch follows this list)
  4. Disk I/O: Consider compression and chunking strategies for large datasets
  5. Distributed processing: For TB-scale datasets, use distributed filtering with the XennaExecutor
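To illustrate the vectorization tip (item 3), the sketch below contrasts a per-document Python loop with a vectorized pandas computation over a whole batch of texts. It is a generic example of the technique, not a specific NeMo Curator API; consult your version’s filter interface for the actual batched hook.

# Generic vectorization sketch (not a specific NeMo Curator API):
# score a batch of documents at once instead of looping per document.
import pandas as pd

texts = pd.Series([
    "A short document.",
    "A much longer document with many more words in it than the first one.",
])

# Per-document loop: one Python call per row (slower on large batches)
loop_scores = [len(t.split()) for t in texts]

# Vectorized: one pandas string operation over the whole batch
batch_scores = texts.str.split().str.len()

assert list(batch_scores) == loop_scores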