
Save and Export Text Data


After processing your text datasets with NeMo Curator, use writer stages to export curated data for downstream use. Curator provides writers for common formats (JSONL, Parquet) as well as specialized writers for training frameworks.

Megatron Tokenization

MegatronTokenizerWriter tokenizes text documents and writes the .bin and .idx files required by Megatron-LM for data loading during pretraining. This replaces the need to run Megatron’s preprocess_data.py script separately and integrates tokenization directly into your curation pipeline.

How It Works

  1. Tokenizer loading: Downloads and loads a Hugging Face tokenizer specified by model_identifier. The tokenizer is downloaded once per node and loaded once per worker.
  2. Batched tokenization: Documents are tokenized in batches (controlled by tokenization_batch_size) to avoid out-of-memory issues on large datasets.
  3. Binary output: Tokenized data is written to a .bin file containing packed token IDs. Vocabulary sizes above 65,536 use 4 bytes per token (int32); smaller vocabularies use 2 bytes (uint16). See the sketch after this list.
  4. Index output: A .idx file stores metadata including sequence lengths, byte offsets, and document boundaries for efficient random access during training.
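
The sketch below illustrates steps 2 and 3 using a plain Hugging Face tokenizer and NumPy. It is a simplified illustration of the batching and packing logic, not the writer's actual implementation; the tokenizer name, documents, batch size, and output file name are placeholders.

# Illustrative sketch: pick the narrowest dtype that fits the vocabulary,
# then tokenize documents in batches and append the packed token IDs to a .bin file.
import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # any Hugging Face tokenizer

# Rule from step 3: vocabularies above 65,536 need 4-byte tokens.
token_dtype = np.int32 if len(tokenizer) > 65536 else np.uint16

documents = ["first example document ...", "second example document ..."]
batch_size = 1000  # corresponds to tokenization_batch_size

with open("example.bin", "wb") as f:
    for start in range(0, len(documents), batch_size):
        batch = documents[start : start + batch_size]
        for ids in tokenizer(batch)["input_ids"]:
            np.asarray(ids, dtype=token_dtype).tofile(f)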

Quick Start

from nemo_curator.core.client import RayClient
from nemo_curator.pipeline import Pipeline
from nemo_curator.stages.text.io.reader import JsonlReader
from nemo_curator.stages.text.io.writer.megatron_tokenizer import MegatronTokenizerWriter

# Initialize Ray client
ray_client = RayClient()
ray_client.start()

# Define pipeline stages
stages = [
    JsonlReader(
        file_paths="/path/to/data",
        fields=["text"],
    ),
    MegatronTokenizerWriter(
        path="/path/to/output",
        model_identifier="nvidia/NVIDIA-Nemotron-Nano-12B-v2",
        append_eod=True,
    ),
]

# Create and run the pipeline
pipeline = Pipeline(
    name="megatron-tokenize",
    description="Tokenize dataset for Megatron-LM.",
    stages=stages,
)

results = pipeline.run()

ray_client.stop()

Configuration

| Parameter | Type | Default | Description |
|---|---|---|---|
| path | str | Required | Output directory for .bin and .idx files |
| model_identifier | str | Required | Hugging Face model identifier or local path for the tokenizer |
| text_field | str | "text" | Name of the column containing text to tokenize |
| append_eod | bool | False | Append the tokenizer's EOS token at the end of each document |
| tokenization_batch_size | int | 1000 | Number of documents to tokenize per batch before writing to disk |
| cache_dir | str or None | None | Local cache directory for the downloaded tokenizer |
| hf_token | str or None | None | Hugging Face API token for accessing gated models |
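
For example, the optional parameters can be combined to tokenize a different column, tune batching, and control where the tokenizer is cached. The column name, cache path, and batch size below are illustrative values, not recommendations:

MegatronTokenizerWriter(
    path="/path/to/output",
    model_identifier="nvidia/NVIDIA-Nemotron-Nano-12B-v2",
    text_field="content",          # tokenize this column instead of the default "text"
    append_eod=True,
    tokenization_batch_size=5000,  # larger batches reduce overhead but use more memory
    cache_dir="/path/to/tokenizer-cache",
)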

Output Format

The writer produces paired files for each input partition:

output/
├── {hash_1}.bin   # Packed token IDs (binary)
├── {hash_1}.idx   # Sequence metadata (lengths, offsets, document boundaries)
├── {hash_2}.bin
└── {hash_2}.idx

.bin file: Contains concatenated token IDs for all documents in the partition. Token IDs are stored as int32 (4 bytes) when the tokenizer vocabulary exceeds 65,536 tokens, or as uint16 (2 bytes) for smaller vocabularies such as GPT-2's.

.idx file: Contains a fixed header followed by per-sequence metadata:

  • 9-byte magic header (MMIDIDX\x00\x00)
  • 8-byte version number
  • 1-byte dtype code
  • 8-byte sequence count
  • 8-byte document count
  • Per-sequence lengths: 4-byte int32 array (one entry per sequence)
  • Per-sequence byte offsets: 8-byte int64 array (one entry per sequence)
  • Document boundary indices: 8-byte int64 array (sequence count + 1 entries)

These files are directly compatible with Megatron-LM’s MMapIndexedDataset data loader.
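
As a sanity check, an .idx header can be read back with the standard library and NumPy by following the layout listed above. This is an illustrative inspector only (the file name is a placeholder); for training, load the data through Megatron-LM's own dataset classes.

# Illustrative .idx inspector based on the layout described above
import struct
import numpy as np

with open("example.idx", "rb") as f:
    assert f.read(9) == b"MMIDIDX\x00\x00"            # 9-byte magic header
    version = struct.unpack("<Q", f.read(8))[0]        # 8-byte version number
    dtype_code = struct.unpack("<B", f.read(1))[0]     # 1-byte dtype code
    seq_count = struct.unpack("<Q", f.read(8))[0]      # 8-byte sequence count
    doc_count = struct.unpack("<Q", f.read(8))[0]      # 8-byte document count

    lengths = np.fromfile(f, dtype=np.int32, count=seq_count)   # per-sequence lengths
    offsets = np.fromfile(f, dtype=np.int64, count=seq_count)   # per-sequence byte offsets
    # document boundary indices (sequence count + 1 entries when each document is one sequence)
    doc_idx = np.fromfile(f, dtype=np.int64, count=doc_count)

print(f"{seq_count} sequences, {doc_count} document boundaries, dtype code {dtype_code}")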

End-of-Document Tokens

When append_eod=True, the tokenizer’s EOS token is appended to the end of each document’s token sequence. This is consistent with the behavior of Megatron’s preprocess_data.py and is required for some training configurations that use document boundaries for attention masking.

If the tokenizer does not define an EOS token, append_eod is automatically disabled with a warning.
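
Conceptually, the behavior is equivalent to the following simplified illustration (not the writer's actual code; the tokenizer and document are placeholders):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
append_eod = True

token_ids = tokenizer("One short document.")["input_ids"]
if append_eod and tokenizer.eos_token_id is not None:
    token_ids.append(tokenizer.eos_token_id)  # marks the document boundary
# If the tokenizer has no EOS token, the append step is skipped.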

Using Different Tokenizers

MegatronTokenizerWriter supports any tokenizer available through Hugging Face’s AutoTokenizer:

MegatronTokenizerWriter(
    path="output/",
    model_identifier="nvidia/NVIDIA-Nemotron-Nano-12B-v2",
    append_eod=True,
)
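
For instance, a small-vocabulary tokenizer such as GPT-2's produces uint16 .bin files, and gated or private tokenizer repositories can be accessed by passing hf_token. The gated model identifier and token value below are placeholders:

# Small vocabulary (GPT-2): token IDs fit in 2 bytes, so .bin files use uint16
MegatronTokenizerWriter(
    path="output-gpt2/",
    model_identifier="gpt2",
    append_eod=True,
)

# Gated or private tokenizer repository: pass a Hugging Face API token
MegatronTokenizerWriter(
    path="output-gated/",
    model_identifier="your-org/your-gated-model",  # placeholder identifier
    append_eod=True,
    hf_token="hf_...",                             # placeholder token
)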

Complete Pipeline Example

This example reads the TinyStories dataset from Parquet files and tokenizes it for Megatron-LM:

from nemo_curator.core.client import RayClient
from nemo_curator.pipeline import Pipeline
from nemo_curator.stages.text.io.reader import ParquetReader
from nemo_curator.stages.text.io.writer.megatron_tokenizer import MegatronTokenizerWriter

ray_client = RayClient()
ray_client.start()

stages = [
    ParquetReader(
        file_paths="datasets/tinystories/",
    ),
    MegatronTokenizerWriter(
        path="datasets/tinystories-tokens/",
        model_identifier="nvidia/NVIDIA-Nemotron-Nano-12B-v2",
        append_eod=True,
        tokenization_batch_size=2000,
    ),
]

pipeline = Pipeline(
    name="megatron-tokenize",
    description="Tokenize TinyStories for Megatron-LM.",
    stages=stages,
)

results = pipeline.run()

ray_client.stop()

A runnable version of this example is available in the tutorials directory.


For more information on using tokenized data with Megatron-LM, see the Related Tools page.