Duplicate Identification#
Use clip-level embeddings to identify near-duplicate video clips so your dataset remains compact, diverse, and efficient to train on.
Before You Start#
Make sure you have embeddings written by the ClipWriterStage under ce1_embd_parquet/. For a runnable workflow, refer to the Split and Remove Duplicates Workflow. The embeddings must be Parquet files containing the columns id and embedding.
Verify local paths or configure S3-compatible credentials. Provide storage_options in the read/write keyword arguments when reading or writing cloud paths.
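As a quick sanity check before launching the workflow, you can confirm that a shard contains the expected columns. A minimal sketch using pyarrow; the file name below is illustrative, since actual shard names depend on the writer:
import pyarrow.parquet as pq
# Inspect one shard's schema without loading the data (file name is a placeholder)
schema = pq.read_schema("/path/to/embeddings/part.0.parquet")
assert "id" in schema.names and "embedding" in schema.names, schema.names
print(schema)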
How it Works#
Duplicate identification operates on clip-level embeddings produced during processing:
Inputs
Parquet batches from ClipWriterStage under ce1_embd_parquet/
Columns: id, embedding
Outputs
Cluster: KMeansStage partitions embeddings and writes centroid distances (for example, cosine_dist_to_cent).
Pairwise: PairwiseStage computes within-cluster similarity on GPU and, for each clip, emits max_id and cosine_sim_score. Ranking controls whether to prefer outliers ("hard") or representatives ("easy").
Identify: IdentifyDuplicatesStage filters pairs with cosine_sim_score >= 1.0 - eps and writes Parquet files of duplicate ids for removal during export.
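Conceptually, the Identify step reduces to a simple filter over the pairwise results. A minimal pandas sketch of the threshold logic, assuming pairwise output files with the cosine_sim_score column described above (the path is a placeholder):
import pandas as pd
eps = 0.1  # pairs with cosine_sim_score >= 1.0 - eps count as duplicates
pairs = pd.read_parquet("/path/to/pairwise_out/")
duplicates = pairs[pairs["cosine_sim_score"] >= 1.0 - eps]
print(f"{len(duplicates)} of {len(pairs)} clips flagged as duplicates")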
Quickstart#
Use the semantic deduplication workflow with clip embeddings written to Parquet.
The SemanticDeduplicationWorkflow provides an end-to-end interface that orchestrates K-means clustering, pairwise similarity computation, and duplicate identification:
from nemo_curator.stages.deduplication.semantic.workflow import SemanticDeduplicationWorkflow
from nemo_curator.stages.deduplication.semantic.ranking import RankingStrategy
from nemo_curator.backends.xenna import XennaExecutor
workflow = SemanticDeduplicationWorkflow(
input_path="/path/to/embeddings/", # e.g., ce1_embd_parquet/
output_path="/path/to/duplicates/",
cache_path="/path/to/cache/", # Optional: defaults to output_path
n_clusters=1000,
id_field="id",
embedding_field="embedding",
embedding_dim=768, # Embedding dimension (768 for Cosmos-Embed1, varies by model)
input_filetype="parquet",
eps=0.1, # Similarity threshold: cosine_sim >= 1.0 - eps identifies duplicates
ranking_strategy=RankingStrategy.metadata_based(
metadata_cols=["cosine_dist_to_cent", "id"],
ascending=[True, True],
),
pairwise_batch_size=1024,
read_kwargs={"storage_options": None}, # Add S3 credentials here if needed
write_kwargs={"storage_options": None},
verbose=True,
)
# Run with XennaExecutor (GPU-accelerated)
executor = XennaExecutor()
results = workflow.run(executor)
Note
Determine eps first: Before running the full workflow, we recommend running only the K-means and pairwise steps (set eps=None) to inspect similarity distributions and choose an appropriate eps threshold. See the tip below for details.
The workflow automatically:
Runs K-means clustering to partition embeddings into clusters
Computes pairwise similarity within each cluster
Identifies duplicates based on the eps threshold
Writes duplicate IDs to output_path/duplicates/
See also
For detailed information about how semantic deduplication works, see Semantic Deduplication. The algorithm and concepts are the same for video clips as for text documents.
For advanced users who need fine-grained control, you can run the stages individually:
from nemo_curator.pipeline import Pipeline
from nemo_curator.stages.deduplication.semantic.kmeans import KMeansStage
from nemo_curator.stages.deduplication.semantic.pairwise import PairwiseStage
from nemo_curator.stages.deduplication.semantic.ranking import RankingStrategy
from nemo_curator.stages.deduplication.semantic.identify_duplicates import IdentifyDuplicatesStage
pipe = Pipeline(name="semantic_dedup")
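# Stage 1: partition clip embeddings into clusters with K-means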
pipe.add_stage(
KMeansStage(
n_clusters=1000,
id_field="id",
embedding_field="embedding",
input_path="/path/to/embeddings/",
output_path="/path/to/kmeans_out/",
input_filetype="parquet",
embedding_dim=512,
)
)
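# Stage 2: compute within-cluster pairwise similarity on GPU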
pipe.add_stage(
PairwiseStage(
id_field="id",
embedding_field="embedding",
input_path="/path/to/kmeans_out/",
output_path="/path/to/pairwise_out/",
ranking_strategy=RankingStrategy.metadata_based(
metadata_cols=["cosine_dist_to_cent", "id"],
ascending=[True, True],
),
)
)
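# Stage 3: flag pairs with cosine_sim_score >= 1.0 - eps and write duplicate IDs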
pipe.add_stage(
IdentifyDuplicatesStage(
output_path="/path/to/duplicates/",
eps=0.1,
)
)
pipe.run()
No example script flags are available for duplicate identification in the split pipeline. Run these stages as a separate job against Parquet embeddings written by the example pipeline’s writer.
Tip
Recommended Workflow: Determine eps First
The eps parameter is highly data-dependent and affects how many duplicates are identified. We recommend a three-step approach:
Step 1: Run K-means and pairwise without duplicate identification
Use SemanticDeduplicationWorkflow with eps=None (or run the K-means and pairwise stages individually)
This generates pairwise similarity scores without identifying duplicates
Step 2: Inspect the similarity distribution
Analyze the cosine_sim_score values in the pairwise results
Determine an appropriate eps threshold based on your data characteristics
For example, if 20% of pairs have similarity ≥ 0.9, you might use eps=0.1 (since cosine_sim >= 1.0 - eps); see the analysis sketch after these steps
Step 3: Run the full workflow with your chosen eps
Use SemanticDeduplicationWorkflow with the determined eps value
Or run IdentifyDuplicatesStage separately on the pairwise results
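One way to carry out Step 2, assuming the pairwise results are Parquet files with a cosine_sim_score column (a sketch, not the tutorial's exact code; the path is a placeholder):
import pandas as pd
# Load the pairwise similarity results
pairs = pd.read_parquet("/path/to/pairwise_out/")
# Distribution of each clip's max within-cluster similarity
print(pairs["cosine_sim_score"].describe(percentiles=[0.5, 0.8, 0.9, 0.95, 0.99]))
# Estimate how many clips each candidate eps would flag
for eps in (0.05, 0.1, 0.2):
    frac = (pairs["cosine_sim_score"] >= 1.0 - eps).mean()
    print(f"eps={eps}: {frac:.1%} of clips flagged as duplicates")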
For a detailed example of this workflow with similarity analysis, see the Step-by-Step Semantic Deduplication tutorial (demonstrated on text data, but the approach applies to video clips as well).
Tip
Custom Ranking with Metadata Columns
If your embedding Parquet files contain additional metadata columns (such as video quality scores, duration, resolution, or other clip attributes), you can use RankingStrategy.metadata_based() to create custom ranking methods. This allows you to prioritize which clips to keep within duplicate groups based on your specific criteria.
For example, to prefer higher quality or longer duration clips:
from nemo_curator.stages.deduplication.semantic.ranking import RankingStrategy
# Prefer clips with higher quality scores, then longer duration
ranking_strategy = RankingStrategy.metadata_based(
metadata_cols=["quality_score", "duration"],
ascending=[False, False], # False = descending (higher is better)
)
# Or prefer clips closer to cluster centroid, then by quality
ranking_strategy = RankingStrategy.metadata_based(
metadata_cols=["cosine_dist_to_cent", "quality_score"],
ascending=[True, False], # Closer to centroid first, then higher quality
)
The metadata columns must be present in your embedding Parquet files and will be preserved through the K-means stage. Specify these columns using the metadata_fields parameter in KMeansStage or SemanticDeduplicationWorkflow.
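For instance, to make those columns available to the ranking strategy above when using the end-to-end workflow (a sketch; quality_score and duration are illustrative column names, and other required parameters are as in the Quickstart):
from nemo_curator.stages.deduplication.semantic.workflow import SemanticDeduplicationWorkflow
workflow = SemanticDeduplicationWorkflow(
    input_path="/path/to/embeddings/",
    output_path="/path/to/duplicates/",
    n_clusters=1000,
    id_field="id",
    embedding_field="embedding",
    embedding_dim=768,
    eps=0.1,
    metadata_fields=["quality_score", "duration"],  # carried through K-means
    ranking_strategy=ranking_strategy,  # defined in the example above
)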
Parameters#
KMeansStage
| Parameter | Description |
|---|---|
| n_clusters | Number of clusters for K‑means (for example, 1,000+ for multi‑million clip sets). |
| id_field | Column name containing clip IDs (for example, id). |
| embedding_field | Column with vector data (for example, embedding). |
| input_path | Path to the Parquet embeddings directory from the writer. |
| output_path | Directory for K‑means outputs (sharded by cluster). |
| input_filetype | Use parquet for embeddings written as Parquet files. |
| embedding_dim | Embedding dimension (768 for most Cosmos‑Embed1 variants; varies by model). |
PairwiseStage
| Parameter | Description |
|---|---|
| ranking_strategy | Ranking strategy for selecting which clips to keep within clusters. Use RankingStrategy.metadata_based() for custom rankings. |
| pairwise_batch_size | Batch size for GPU pairwise computation (default 1024). |
| embedding_dim | Embedding dimension used for memory estimates and batching. |
| id_field | Column name containing clip IDs (for example, id). |
| embedding_field | Column with vector data (for example, embedding). |
| input_path | Path to the K-means output directory (sharded by cluster). |
| output_path | Directory for pairwise similarity outputs. |
IdentifyDuplicatesStage
| Parameter | Description |
|---|---|
| output_path | Directory to write Parquet files containing duplicate ids. |
| eps | Similarity threshold: pairs with cosine_sim_score >= 1.0 - eps are identified as duplicates. |
| read_kwargs | Optional keyword arguments for reading files (including storage_options). |
| write_kwargs | Optional keyword arguments for writing files (including storage_options). |
| verbose | Enable verbose logging. |
SemanticDeduplicationWorkflow
The SemanticDeduplicationWorkflow accepts parameters from all three stages (KMeansStage, PairwiseStage, and IdentifyDuplicatesStage). See the tables above for parameter descriptions. Workflow-specific parameters:
| Parameter | Description |
|---|---|
| cache_path | Directory for intermediate results (K-means and pairwise outputs). Defaults to output_path. |
| cache_kwargs | Optional keyword arguments for writing cache files (including storage_options). |
| clear_output | Clear the output directory before running. |
| metadata_fields | List of metadata field names to preserve in the output (optional). |
For parameters shared with the individual stages, refer to:
KMeansStage table: input_path, output_path, n_clusters, id_field, embedding_field, embedding_dim
PairwiseStage table: ranking_strategy, pairwise_batch_size
IdentifyDuplicatesStage table: eps
Common parameters: read_kwargs, write_kwargs, verbose
Removing Duplicates#
The duplicate identification stages (IdentifyDuplicatesStage, or SemanticDeduplicationWorkflow with eps specified) write Parquet files of duplicate clip IDs to the output directory (typically output_path/duplicates/). These files contain a single column, id, listing the clips that should be removed.
It is your responsibility to exclude these duplicate IDs when exporting or persisting your final dataset. The removal process depends on how you want to persist and shard your data:
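For example, if your clip metadata is stored as Parquet with an id column, the exclusion step might look like this minimal pandas sketch (paths and file layout are placeholders; adapt to your own storage):
import pandas as pd
# IDs flagged by the duplicate-identification stage (one or more Parquet files)
dup_ids = set(pd.read_parquet("/path/to/duplicates/")["id"])
# Hypothetical clip-metadata table; adapt to however you persist clips
clips = pd.read_parquet("/path/to/clips_metadata/")
kept = clips[~clips["id"].isin(dup_ids)]
kept.to_parquet("/path/to/final_dataset/clips.parquet", index=False)
print(f"Kept {len(kept)} of {len(clips)} clips")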