Data Loading Concepts
This guide covers the core concepts for loading and managing text data from local files in NVIDIA NeMo Curator.
Pipeline-Based Data Loading
NeMo Curator uses a pipeline-based architecture for handling large-scale text data processing. Data flows through processing stages that transform tasks, enabling distributed processing of local files.
The system provides two primary readers for text data:
- JsonlReader - For JSON Lines format files (most common)
- ParquetReader - For columnar Parquet files (better performance for large datasets with PyArrow optimization)
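As a quick illustration of what JsonlReader expects on disk, the sketch below writes and reads JSON Lines with the standard library. This shows the file format only, not the NeMo Curator reader API:

```python
import json
import tempfile
from pathlib import Path

# Each line of a .jsonl file is one standalone JSON object (one document).
records = [
    {"id": "doc-0", "text": "First document."},
    {"id": "doc-1", "text": "Second document."},
]

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "sample.jsonl"
    # Write: one JSON object per line.
    path.write_text("\n".join(json.dumps(r) for r in records) + "\n")
    # Read: parse each non-empty line independently.
    loaded = [json.loads(line) for line in path.read_text().splitlines() if line]

print(loaded[1]["text"])  # -> Second document.
```

Because each line is independent, JSONL files split cleanly across partitions, which is what makes them convenient for distributed readers.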
Both readers support optimization through:
- Field selection - Reading specified columns to reduce memory usage
- Partitioning control - Using `blocksize` or `files_per_partition` to optimize `DocumentBatch` sizes during distributed processing
- Recommended block size - Use ~128MB for optimal object store performance with smaller data chunks
Optimization Strategies
Partitioning Control
Partitioning Strategy: Specify either `files_per_partition` or `blocksize`, not both. If `files_per_partition` is provided, `blocksize` is ignored.
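The precedence rule can be sketched as a small helper. The function below is illustrative only (it is not part of the NeMo Curator API) and assumes a simple even split of files into partitions:

```python
import math

def plan_partitions(file_sizes_bytes, files_per_partition=None, blocksize_bytes=None):
    """Illustrative partition count: files_per_partition takes precedence over blocksize."""
    if files_per_partition is not None:
        # blocksize is ignored when files_per_partition is set.
        return math.ceil(len(file_sizes_bytes) / files_per_partition)
    if blocksize_bytes is not None:
        # Pack files into chunks of roughly blocksize bytes.
        return max(1, math.ceil(sum(file_sizes_bytes) / blocksize_bytes))
    return len(file_sizes_bytes)  # fallback: one partition per file

sizes = [64 * 2**20] * 8  # eight 64 MB files
print(plan_partitions(sizes, blocksize_bytes=128 * 2**20))  # -> 4 (512 MB / 128 MB)
# files_per_partition wins even when blocksize is also given:
print(plan_partitions(sizes, files_per_partition=3, blocksize_bytes=128 * 2**20))  # -> 3
```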
Performance Recommendations
- Block size and files per partition: Use ~128MB for optimal performance. Very large batches add memory overhead when passing data between stages through the object store, while very small batches add scheduling overhead from processing many more tasks. We recommend ~128MB as a good balance; avoid partition sizes below 32MB or above 1GiB.
- Field selection: Specify the `fields` parameter to read only the required columns
- Engine choice: ParquetReader defaults to PyArrow with `dtype_backend="pyarrow"` for optimal performance and memory efficiency. If you encounter compatibility issues with certain data types or schemas, you can override these defaults through `read_kwargs`.
Data Export Options
NeMo Curator provides flexible export options for processed datasets, supporting the same JSONL and Parquet formats used on the input side.
Common Loading Patterns
Multi-Source Data
You cannot combine different reader types (JsonlReader + ParquetReader) in the same pipeline stage. To handle mixed file types, create a custom reader by extending the underlying BaseReader so that it dispatches on file extension.
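One way to route mixed file types is an extension-based dispatch in front of the format-specific readers. The sketch below uses stand-in classes (the `JsonlReader` and `ParquetReader` here are placeholders, not the real NeMo Curator classes) to show the shape such a custom reader could take:

```python
from pathlib import Path

# Placeholder stand-ins for the real reader classes.
class JsonlReader:
    def __init__(self, files):
        self.files = files

class ParquetReader:
    def __init__(self, files):
        self.files = files

READER_BY_EXTENSION = {".jsonl": JsonlReader, ".parquet": ParquetReader}

def readers_for(paths):
    """Group paths by extension and build one reader per format."""
    groups = {}
    for p in map(Path, paths):
        try:
            groups.setdefault(READER_BY_EXTENSION[p.suffix], []).append(str(p))
        except KeyError:
            raise ValueError(f"No reader registered for {p.suffix!r}")
    return [cls(files) for cls, files in groups.items()]

built = readers_for(["a.jsonl", "b.parquet", "c.jsonl"])
print([type(r).__name__ for r in built])  # -> ['JsonlReader', 'ParquetReader']
```

Grouping by extension first, then building one reader per format, keeps each underlying reader homogeneous, which matches the constraint above.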
Remote Data Sources
This page focuses on loading text data from local files using JsonlReader and ParquetReader. Both readers support remote storage locations (Amazon S3, Azure) when you provide remote file paths.
For downloading and processing data from remote sources like ArXiv, Common Crawl, and Wikipedia, refer to the Data Acquisition Concepts page which covers:
- URLGenerator, DocumentDownloader, DocumentIterator, DocumentExtractor components
- Built-in support for Common Crawl, ArXiv, Wikipedia, and custom sources
- Integration patterns with pipeline-based processing
- Configuration and scaling strategies
The data acquisition process produces standardized output that integrates seamlessly with the pipeline-based loading concepts described on this page.