Text Embedding
Generate text embeddings for large-scale datasets using NeMo Curator’s built-in embedding stages. Text embeddings enable downstream tasks such as semantic deduplication, similarity search, and clustering.
How It Works
NeMo Curator provides three embedding backends for text data, each suited to different model sizes and throughput requirements:
- EmbeddingCreatorStage — A composite stage that handles tokenization and embedding in sequence. Supports both Sentence Transformers’ SentenceTransformer and Hugging Face’s AutoModel classes via the use_sentence_transformer flag.
- VLLMEmbeddingModelStage — A standalone stage that uses vLLM for GPU-accelerated embedding generation with optional pretokenization. Best for large embedding models, where vLLM’s batching and GPU utilization provide significant throughput gains.
- SentenceTransformerEmbeddingModelStage — A model stage that uses the sentence-transformers library directly. Used internally by EmbeddingCreatorStage when use_sentence_transformer=True.
Choosing an Embedding Backend
Benchmarks on 5 GB of Common Crawl data show that vLLM outperforms Sentence Transformers for larger embedding models, while Sentence Transformers is faster for smaller models. The vLLM pretokenize mode provides the best per-task throughput across both model sizes when amortized over many tasks.
Quick Start
EmbeddingCreatorStage
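For the actual constructor parameters, see the EmbeddingCreatorStage API reference. As a conceptual sketch of what the composite stage does (all class and function names below are illustrative stand-ins, not the real NeMo Curator API), it runs tokenization and model inference in sequence over a batch of texts:

```python
# Conceptual sketch of a composite embedding stage (tokenize, then embed).
# All names here are illustrative stand-ins, not the real NeMo Curator API.
import hashlib
import math


def tokenize(text: str) -> list[str]:
    # Stand-in tokenizer: lowercase whitespace split.
    return text.lower().split()


def embed_tokens(tokens: list[str], dim: int = 8) -> list[float]:
    # Stand-in model: hash each token into a vector, sum, then L2-normalize.
    vec = [0.0] * dim
    for tok in tokens:
        digest = hashlib.sha256(tok.encode()).digest()
        for i in range(dim):
            vec[i] += digest[i] / 255.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


class CompositeEmbeddingStage:
    """Runs tokenization and embedding in sequence, as EmbeddingCreatorStage does."""

    def __init__(self, dim: int = 8):
        self.dim = dim

    def process(self, texts: list[str]) -> list[list[float]]:
        return [embed_tokens(tokenize(t), self.dim) for t in texts]


stage = CompositeEmbeddingStage()
embeddings = stage.process(["NeMo Curator scales data curation", "Text embeddings"])
print(len(embeddings), len(embeddings[0]))  # 2 8
```

The real stage swaps the stand-in tokenizer and model for a Hugging Face tokenizer and either a SentenceTransformer or AutoModel backend, selected by the use_sentence_transformer flag.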
VLLMEmbeddingModelStage (Recommended for Semantic Deduplication)
VLLMEmbeddingModelStage is the default embedding backend for semantic deduplication, using google/embeddinggemma-300m. It provides better GPU utilization and throughput for large embedding models. See the vLLM Embedder guide for setup, configuration, and code examples.
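The pretokenize mode pays the tokenizer cost once up front so the GPU can be fed fixed-size batches without stalling. A minimal sketch of that pattern (the tokenizer, batching helper, and "model" below are illustrative stand-ins, not the real NeMo Curator or vLLM API):

```python
# Conceptual sketch of the pretokenize-then-batch pattern used by vLLM-style
# embedding backends. Names are illustrative, not the real API.
from itertools import islice


def pretokenize(texts: list[str]) -> list[list[int]]:
    # Stand-in tokenizer: map each character to its code point.
    return [[ord(c) for c in t] for t in texts]


def batched(items, batch_size):
    # Yield fixed-size batches from an iterable.
    it = iter(items)
    while batch := list(islice(it, batch_size)):
        yield batch


def embed_batch(token_batches: list[list[int]]) -> list[float]:
    # Stand-in "model": one scalar per document (mean token id).
    return [sum(toks) / len(toks) for toks in token_batches]


texts = ["abc", "abcd", "xyz", "q"]
token_ids = pretokenize(texts)       # tokenize once, up front
scores = []
for batch in batched(token_ids, 2):  # stream fixed-size batches to the model
    scores.extend(embed_batch(batch))
print(len(scores))  # 4
```

Because tokenization happens once, its cost is amortized over every downstream task that reuses the token IDs, which is why the pretokenize mode wins in the benchmarks above.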
Available Embedding Tools
Integration with Semantic Deduplication
Text embeddings are a key input for semantic deduplication. The TextSemanticDeduplicationWorkflow uses VLLMEmbeddingModelStage internally, but you can also generate embeddings separately and feed them into the deduplication workflow for more control over the embedding process.
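At its core, semantic deduplication compares document embeddings by similarity and flags near-identical pairs. A self-contained sketch of that step over precomputed embeddings (the function names and threshold are illustrative; the real workflow clusters embeddings rather than comparing all pairs):

```python
# Conceptual sketch: feed precomputed embeddings into duplicate detection
# via pairwise cosine similarity. Names and the threshold are illustrative;
# the real workflow clusters embeddings instead of comparing every pair.
import math


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


def find_duplicates(embeddings: list[list[float]], threshold: float = 0.95):
    # Return index pairs whose embeddings exceed the similarity threshold.
    pairs = []
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            if cosine(embeddings[i], embeddings[j]) >= threshold:
                pairs.append((i, j))
    return pairs


# Precomputed embeddings (e.g. produced by a separate embedding stage run).
embs = [[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]]
print(find_duplicates(embs))  # [(0, 1)]
```

Generating embeddings as a separate step lets you cache and reuse them, swap embedding models, or tune the similarity threshold without rerunning the full workflow.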