Duplicate Identification
Use clip-level embeddings to identify near-duplicate video clips so your dataset remains compact, diverse, and efficient to train on.
Before You Start
- Make sure you have embeddings written by the `ClipWriterStage` under `ce1_embd_parquet/`. For a runnable workflow, refer to the Split and Remove Duplicates Workflow. The embeddings must be in Parquet files containing the columns `id` and `embedding`.
- Verify local paths or configure S3-compatible credentials. Provide `storage_options` in read/write keyword arguments when reading or writing cloud paths.
How it Works
Duplicate identification operates on clip-level embeddings produced during processing:
- Inputs
  - Parquet batches from `ClipWriterStage` under `ce1_embd_parquet/`
  - Columns: `id`, `embedding`
- Outputs
  - Cluster: `KMeansStage` partitions embeddings and writes centroid distances (for example, `cosine_dist_to_cent`).
  - Pairwise: `PairwiseStage` computes within-cluster similarity on GPU and, for each clip, emits `max_id` and `cosine_sim_score`. Ranking controls whether to prefer outliers ("hard") or representatives ("easy").
  - Identify: `IdentifyDuplicatesStage` filters pairs with `cosine_sim_score >= 1.0 - eps` and writes Parquet files of duplicate `id`s for removal during export.
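The three-stage flow can be illustrated on toy data. The sketch below is a pure-Python, stdlib-only simplification of the logic (the real stages run on Parquet batches and compute pairwise similarity on GPU); the clip ids, embeddings, and centroids are invented for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy clip embeddings keyed by clip id.
emb = {
    "clip_a": [1.0, 0.0],
    "clip_b": [0.99, 0.05],   # near-duplicate of clip_a
    "clip_c": [0.0, 1.0],
}

# Cluster step (simplified): assign each clip to its nearest centroid.
centroids = [[1.0, 0.0], [0.0, 1.0]]
clusters = {}
for cid, v in emb.items():
    k = max(range(len(centroids)), key=lambda i: cosine(v, centroids[i]))
    clusters.setdefault(k, []).append(cid)

# Pairwise step (simplified): for each clip, find the most similar other
# clip in its cluster, yielding (max_id, cosine_sim_score).
pairwise = {}
for members in clusters.values():
    for cid in members:
        others = [o for o in members if o != cid]
        if not others:
            continue
        best = max(others, key=lambda o: cosine(emb[cid], emb[o]))
        pairwise[cid] = (best, cosine(emb[cid], emb[best]))

# Identify step (simplified): walk each cluster in ranked order, keep the
# first clip of each near-duplicate group, and flag the rest for removal
# using the cosine_sim_score >= 1.0 - eps rule.
eps = 0.01
duplicates = []
for members in clusters.values():
    kept = []
    for cid in members:  # assume members are already ranked
        if any(cosine(emb[cid], emb[k]) >= 1.0 - eps for k in kept):
            duplicates.append(cid)
        else:
            kept.append(cid)
print(sorted(duplicates))  # → ['clip_b']
```

Here `clip_b` is flagged because its similarity to the kept representative `clip_a` exceeds `1.0 - eps`; `clip_c` sits alone in its cluster and survives.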
Quickstart
Use the semantic duplicate workflow with clip embeddings written to Parquet.
Single Step Workflow
The SemanticDeduplicationWorkflow provides an end-to-end interface that orchestrates K-means clustering, pairwise similarity computation, and duplicate identification:
Determine `eps` first: Before running the full workflow, we recommend running the K-means and pairwise steps on their own (set `eps=None`) to inspect similarity distributions and determine an appropriate `eps` threshold. See the tip below for details.
The workflow automatically:
- Runs K-means clustering to partition embeddings into clusters
- Computes pairwise similarity within each cluster
- Identifies duplicates based on the `eps` threshold
- Writes duplicate IDs to `output_path/duplicates/`
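A configuration sketch follows. The import path is an assumption (check your installed NeMo Curator version for the exact module location); the parameter names come from the parameter tabs later in this page:

```python
# Configuration sketch only -- the import path may differ across versions.
from nemo_curator.stages.deduplication.semantic import SemanticDeduplicationWorkflow

workflow = SemanticDeduplicationWorkflow(
    input_path="ce1_embd_parquet/",   # Parquet embeddings from ClipWriterStage
    output_path="dedup_output/",      # duplicate ids land in dedup_output/duplicates/
    n_clusters=100,
    id_field="id",
    embedding_field="embedding",
    eps=0.01,   # set eps=None to stop after pairwise similarity (see below)
)
workflow.run()
```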
For detailed information about how semantic deduplication works, see Semantic Deduplication. The algorithm and concepts are the same for video clips as for text documents.
Individual Stages
For advanced users who need fine-grained control, you can run the stages individually:
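The individual-stage version might look like the sketch below. Import paths and exact signatures are assumptions; the parameter names (`n_clusters`, `ranking_strategy`, `eps`, and so on) follow the parameter tabs later in this page, and the intermediate output paths are invented for illustration:

```python
# Configuration sketch only -- verify import paths and signatures against
# your installed version before use.
from nemo_curator.stages.deduplication.semantic import (
    KMeansStage,
    PairwiseStage,
    IdentifyDuplicatesStage,
)

kmeans = KMeansStage(
    input_path="ce1_embd_parquet/",
    output_path="kmeans_out/",
    n_clusters=100,
    id_field="id",
    embedding_field="embedding",
)
pairwise = PairwiseStage(
    input_path="kmeans_out/",
    output_path="pairwise_out/",
)
identify = IdentifyDuplicatesStage(
    input_path="pairwise_out/",
    output_path="duplicates/",
    eps=0.01,
)
```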
Script Flags
No example script flags are available for duplicate identification in the split pipeline. Run these stages as a separate job against Parquet embeddings written by the example pipeline’s writer.
Recommended Workflow: Determine eps First
The eps parameter is highly data-dependent and affects how many duplicates are identified. We recommend a two-step approach:
- Step 1: Run K-means and pairwise without duplicate identification
  - Use `SemanticDeduplicationWorkflow` with `eps=None` (or run the K-means and pairwise stages individually)
  - This generates pairwise similarity scores without identifying duplicates
- Step 2: Inspect the similarity distribution
  - Analyze the `cosine_sim_score` values in the pairwise results
  - Determine an appropriate `eps` threshold based on your data characteristics
  - For example, if 20% of pairs have similarity ≥ 0.9, you might use `eps=0.1` (since `cosine_sim >= 1.0 - eps`)
- Step 3: Run the full workflow with your chosen `eps`
  - Use `SemanticDeduplicationWorkflow` with the determined `eps` value
  - Or run `IdentifyDuplicatesStage` separately on the pairwise results
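Step 2 can be as simple as computing quantiles of `cosine_sim_score` and counting how many pairs a candidate threshold would flag. A stdlib-only sketch (in practice you would read the scores from the pairwise Parquet output with your Parquet reader of choice; the score values here are invented):

```python
import statistics

# Stand-in for cosine_sim_score values read from the pairwise results.
scores = [0.999, 0.995, 0.97, 0.91, 0.85, 0.72, 0.60, 0.41, 0.33, 0.12]

# Deciles of the similarity distribution (9 cut points).
qs = statistics.quantiles(scores, n=10)
print("deciles:", [round(q, 3) for q in qs])

# Suppose inspection suggests that pairs above 0.99 are true duplicates.
# Since the identify step flags pairs with cosine_sim_score >= 1.0 - eps:
threshold = 0.99
eps = 1.0 - threshold
flagged = sum(s >= threshold for s in scores)
print(f"eps={eps:.2f} would flag {flagged} of {len(scores)} pairs")
```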
For a detailed example of this workflow with similarity analysis, see the Step-by-Step Semantic Deduplication tutorial (demonstrated on text data, but the approach applies to video clips as well).
Custom Ranking with Metadata Columns
If your embedding Parquet files contain additional metadata columns (such as video quality scores, duration, resolution, or other clip attributes), you can use RankingStrategy.metadata_based() to create custom ranking methods. This allows you to prioritize which clips to keep within duplicate groups based on your specific criteria.
For example, to prefer higher quality or longer duration clips:
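A sketch of what this could look like. The keyword names and the metadata column names (`quality_score`, `duration`) are assumptions for illustration; consult the `RankingStrategy.metadata_based()` signature in your installed version:

```python
# Hypothetical keyword and column names -- verify against your version.
ranking = RankingStrategy.metadata_based(
    metadata_cols=["quality_score", "duration"],  # prefer higher values
    ascending=False,
)

# Wire it into the stages: pass the strategy to the pairwise step, and
# declare the columns so they survive clustering, e.g.:
#   KMeansStage(metadata_fields=["quality_score", "duration"], ...)
#   PairwiseStage(ranking_strategy=ranking, ...)
```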
The metadata columns must be present in your embedding Parquet files and will be preserved through the K-means stage. Specify these columns using the metadata_fields parameter in KMeansStage or SemanticDeduplicationWorkflow.
Parameters
KMeansStage
PairwiseStage
IdentifyDuplicatesStage
SemanticDeduplicationWorkflow
The SemanticDeduplicationWorkflow accepts parameters from all three stages (KMeansStage, PairwiseStage, and IdentifyDuplicatesStage). See the tabs above for parameter descriptions.
For parameters shared with individual stages, refer to:
- KMeansStage tab: `input_path`, `output_path`, `n_clusters`, `id_field`, `embedding_field`, `embedding_dim`
- PairwiseStage tab: `ranking_strategy`, `pairwise_batch_size`
- IdentifyDuplicatesStage tab: `eps`
- Common parameters: `read_kwargs`, `write_kwargs`, `verbose`
Removing Duplicates
The duplicate identification stages (IdentifyDuplicatesStage or SemanticDeduplicationWorkflow with eps specified) write Parquet files containing duplicate clip IDs to the output directory (typically output_path/duplicates/). These files contain a single column id with the IDs of clips that should be removed.
It is your responsibility to exclude these duplicate IDs when exporting or persisting your final dataset. The removal process depends on how you want to persist and shard your data:
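At its core the removal is an anti-join: load the duplicate `id`s into a set and drop matching rows before export. A minimal stdlib sketch (in practice the ids and dataset rows come from Parquet; the values here are invented):

```python
# Stand-in for ids read from output_path/duplicates/ Parquet files.
duplicate_ids = {"clip_b", "clip_f"}

# Stand-in for the dataset rows being exported.
dataset = [
    {"id": "clip_a", "path": "a.mp4"},
    {"id": "clip_b", "path": "b.mp4"},
    {"id": "clip_c", "path": "c.mp4"},
]

# Anti-join: keep only rows whose id is not flagged as a duplicate.
deduped = [row for row in dataset if row["id"] not in duplicate_ids]
print([row["id"] for row in deduped])  # → ['clip_a', 'clip_c']
```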