Content Processing & Cleaning
Clean, normalize, and transform text content to meet specific requirements for training language models using NeMo Curator’s tools and utilities.
Content processing involves transforming your text data while preserving essential information. This includes fixing encoding issues and standardizing text format to ensure high-quality input for model training.
How it Works
Content processing transformations typically modify documents in place or create new versions with specific changes. Most processing tools follow this pattern, sketched in the example after the list:
- Load your dataset using pipeline readers (JsonlReader, ParquetReader)
- Configure and apply the appropriate processor
- Save the transformed dataset for further processing
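As a rough illustration of this pattern, independent of any particular NeMo Curator module, the sketch below reads JSONL records, applies a trivial processor, and writes the result. The file paths, the `text` field name, and the processor function are placeholders.

```python
import json

def process_record(record: dict) -> dict:
    # Placeholder processor: strip surrounding whitespace from the text field
    record["text"] = record["text"].strip()
    return record

# 1. Load the dataset (one JSON document per line)
with open("input.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

# 2. Configure and apply the processor
records = [process_record(r) for r in records]

# 3. Save the transformed dataset for further processing
with open("processed.jsonl", "w", encoding="utf-8") as f:
    for r in records:
        f.write(json.dumps(r, ensure_ascii=False) + "\n")
```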
You can combine processing tools in sequence or use them alongside other curation steps like filtering and language management.
Available Processing Tools
- Add unique identifiers to documents for tracking and deduplication
- Fix Unicode issues, standardize spacing, and remove URLs
Usage
Here’s an example of a typical content processing pipeline:
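The sketch below shows the overall shape of such a pipeline: a reader stage, one or more processing stages, and a writer stage. The import paths, stage names, and constructor arguments shown here are assumptions for illustration and may differ between NeMo Curator releases, so check the API reference for the exact modules in your installed version.

```python
# Sketch only: module paths and argument names below are assumptions,
# not verified against a specific NeMo Curator release.
from nemo_curator.pipeline import Pipeline
from nemo_curator.stages.text.io.reader import JsonlReader
from nemo_curator.stages.text.io.writer import JsonlWriter

pipeline = Pipeline(name="content_processing")

# 1. Load the dataset with a pipeline reader
pipeline.add_stage(JsonlReader(file_paths="input_data/"))

# 2. Configure and apply processors here, for example an ID-assignment
#    stage and a text-cleaning stage, each added with pipeline.add_stage(...)

# 3. Save the transformed dataset for further processing
pipeline.add_stage(JsonlWriter(path="processed_data/"))

pipeline.run()
```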
Common Processing Tasks
Text Normalization
- Fix broken Unicode characters (mojibake)
- Standardize whitespace and newlines
- Remove or normalize special characters (see the sketch below)
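A standalone sketch of these normalization steps, using the third-party ftfy package (which specializes in repairing mojibake) plus the standard library; it is not tied to a specific NeMo Curator module.

```python
import re
import ftfy  # third-party: pip install ftfy

def normalize_text(text: str) -> str:
    # Fix broken Unicode characters (mojibake), e.g. "donâ€™t" -> "don't"
    text = ftfy.fix_text(text)
    # Standardize newlines and collapse runs of spaces/tabs
    text = text.replace("\r\n", "\n").replace("\r", "\n")
    text = re.sub(r"[ \t]+", " ", text)
    # Limit consecutive blank lines to one
    text = re.sub(r"\n{3,}", "\n\n", text)
    return text.strip()
```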
Content Sanitization
- Strip unwanted URLs or links
- Remove boilerplate text or headers (example below)
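URL stripping and boilerplate removal can be sketched with simple patterns. The URL regex and the boilerplate phrase list below are illustrative placeholders, not a canonical rule set.

```python
import re

URL_PATTERN = re.compile(r"https?://\S+|www\.\S+")
# Hypothetical examples of boilerplate lines to drop; extend for your corpus
BOILERPLATE_LINES = {"all rights reserved", "subscribe to our newsletter"}

def sanitize(text: str) -> str:
    # Strip unwanted URLs or links
    text = URL_PATTERN.sub("", text)
    # Drop lines that consist only of known boilerplate phrases
    kept = [
        line for line in text.splitlines()
        if line.strip().lower() not in BOILERPLATE_LINES
    ]
    return "\n".join(kept)
```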
Format Standardization
- Ensure consistent text encoding
- Normalize punctuation and spacing
- Standardize document structure (sketch below)
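A small sketch of these steps using only the standard library; the punctuation mapping is an illustrative subset and can be extended for your data.

```python
import re
import unicodedata

# Illustrative subset of punctuation to normalize; extend as needed
PUNCTUATION_MAP = str.maketrans({
    "“": '"', "”": '"', "‘": "'", "’": "'", "…": "...",
})

def standardize(text: str) -> str:
    # Ensure consistent text encoding by normalizing to a single Unicode form
    text = unicodedata.normalize("NFC", text)
    # Normalize punctuation to plain ASCII equivalents
    text = text.translate(PUNCTUATION_MAP)
    # Normalize spacing: no double spaces, no trailing spaces on lines
    text = re.sub(r" {2,}", " ", text)
    return "\n".join(line.rstrip() for line in text.splitlines())
```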