
Captions and Preview


Prepare inputs, generate captions, optionally enhance them, and produce preview images.

Choosing a Captioning Model

The video captioning pipeline supports two model families. Pick a variant based on quality, GPU memory, and throughput:

| Variant | Model | Default Use Case |
|---|---|---|
| qwen | Qwen/Qwen2.5-VL-7B-Instruct | Default; good balance of quality and throughput |
| nemotron / nemotron-bf16 | Nemotron Nano 12B v2 VL (BF16) | High-quality captions; auto-downloaded from Hugging Face |
| nemotron-fp8 | Nemotron Nano 12B v2 VL (FP8) | Same model, FP8-quantized for lower memory |
| nemotron-nvfp4 | Nemotron Nano 12B v2 VL (NVFP4-QAD) | NVFP4 quantization-aware-distilled checkpoint |

Caption enhancement (the optional second-pass LLM rewrite) uses Qwen-LM (--enhance-captions-algorithm qwen_lm).


Quickstart

Use the pipeline stages or the example script flags to prepare captions and preview images.

```python
from nemo_curator.pipeline import Pipeline
from nemo_curator.stages.video.caption.caption_preparation import CaptionPreparationStage
from nemo_curator.stages.video.caption.caption_generation import CaptionGenerationStage
from nemo_curator.stages.video.caption.caption_enhancement import CaptionEnhancementStage
from nemo_curator.stages.video.preview.preview import PreviewStage

pipe = Pipeline(name="captions_preview")
pipe.add_stage(
    CaptionPreparationStage(
        model_variant="qwen",
        prompt_variant="default",
        prompt_text=None,
        sampling_fps=2.0,
        window_size=256,
        remainder_threshold=128,
        preprocess_dtype="float16",
        model_does_preprocess=False,
        generate_previews=True,
        verbose=True,
    )
)
pipe.add_stage(PreviewStage(target_fps=1.0, target_height=240, verbose=True))
pipe.add_stage(
    CaptionGenerationStage(
        model_dir="/models",
        model_variant="qwen",
        caption_batch_size=8,
        fp8=False,
        max_output_tokens=512,
        model_does_preprocess=False,
        generate_stage2_caption=False,
        stage2_prompt_text=None,
        disable_mmcache=True,
    )
)
pipe.run()
```

To use Nemotron instead, set model_variant="nemotron" (or one of nemotron-bf16, nemotron-fp8, nemotron-nvfp4) on both CaptionPreparationStage and CaptionGenerationStage — Nemotron weights are auto-downloaded from Hugging Face on first use.

Preparation and previews

  1. Prepare caption inputs from each clip window. This step splits clips into fixed windows, formats model‑ready inputs for the chosen VLM (Qwen‑VL or Nemotron), and optionally stores per‑window mp4 bytes for previews.

    ```python
    from nemo_curator.stages.video.caption.caption_preparation import CaptionPreparationStage
    from nemo_curator.stages.video.preview.preview import PreviewStage

    prep = CaptionPreparationStage(
        model_variant="qwen",  # or "nemotron" / "nemotron-fp8" / ...
        prompt_variant="default",
        prompt_text=None,
        sampling_fps=2.0,
        window_size=256,
        remainder_threshold=128,
        preprocess_dtype="float16",
        model_does_preprocess=False,
        generate_previews=True,
        verbose=True,
    )
    ```
  2. Optionally generate .webp previews from each window’s mp4 bytes for quick QA and review.

    ```python
    preview = PreviewStage(
        target_fps=1.0,
        target_height=240,
        verbose=True,
    )
    ```

Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| model_variant | str | "qwen" | Vision-language model used to format inputs. One of qwen, nemotron, nemotron-bf16, nemotron-fp8, nemotron-nvfp4. |
| prompt_variant | str | "default" | Built-in prompt (for example, default or av-surveillance) used to steer caption content when prompt_text is not provided. |
| prompt_text | str \| None | None | Custom prompt text. When set, overrides prompt_variant. |
| sampling_fps | float | 2.0 | Source sampling rate for creating per-window inputs. |
| window_size | int | 256 | Number of frames per window before captioning. |
| remainder_threshold | int | 128 | Minimum leftover frames required to create a final shorter window. |
| model_does_preprocess | bool | False | Whether the downstream model performs its own preprocessing. |
| preprocess_dtype | str | "float32" | Data type for any preprocessing performed here. |
| generate_previews | bool | True | When True, return per-window mp4 bytes to enable preview generation. |
| verbose | bool | False | Log additional setup and per-clip details. |
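The interplay of window_size and remainder_threshold can be sketched in plain Python. This is an illustrative reading of the documented semantics, not the stage's internal code:

```python
def plan_windows(num_frames: int, window_size: int = 256, remainder_threshold: int = 128) -> list:
    """Split a clip into fixed frame windows.

    Full windows of window_size frames are emitted first; a final shorter
    window is kept only if the leftover spans at least remainder_threshold frames.
    """
    starts = range(0, max(num_frames - window_size + 1, 0), window_size)
    windows = [(s, s + window_size) for s in starts]
    covered = len(windows) * window_size
    if num_frames - covered >= remainder_threshold:
        windows.append((covered, num_frames))
    return windows
```

With the defaults, a 600-frame clip yields `[(0, 256), (256, 512)]` (the 88-frame leftover is dropped), while a 650-frame clip also keeps `(512, 650)` because the 138-frame leftover clears the threshold.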

Caption generation and enhancement

  1. Generate window‑level captions with the chosen VLM (Qwen‑VL or Nemotron). This stage reads clip.windows[*].qwen_llm_input (created earlier) and writes window.caption["qwen"] (or window.caption["nemotron"], depending on the variant).

    ```python
    from nemo_curator.stages.video.caption.caption_generation import CaptionGenerationStage
    from nemo_curator.stages.video.caption.caption_enhancement import CaptionEnhancementStage

    gen = CaptionGenerationStage(
        model_dir="/models",
        model_variant="qwen",  # or "nemotron" / "nemotron-fp8" / ...
        caption_batch_size=8,
        fp8=False,
        max_output_tokens=512,
        model_does_preprocess=False,
        generate_stage2_caption=False,
        stage2_prompt_text=None,
        disable_mmcache=True,
    )
    ```
  2. Optionally enhance captions with a text‑based LLM (Qwen‑LM) to expand and refine descriptions. This stage reads window.caption["qwen"] and writes window.enhanced_caption["qwen_lm"].

    ```python
    enh = CaptionEnhancementStage(
        model_dir="/models",
        model_variant="qwen",
        prompt_variant="default",
        prompt_text=None,
        model_batch_size=128,
        fp8=False,
        max_output_tokens=512,
        verbose=True,
    )
    ```
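The read/write contract between these two stages can be pictured with a minimal stand-in data structure. This is a hypothetical sketch whose field names mirror those mentioned above, not the library's real classes:

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Window:
    """Minimal stand-in for a clip window (illustrative only)."""

    qwen_llm_input: dict | None = None  # written by CaptionPreparationStage
    caption: dict = field(default_factory=dict)  # keyed by variant, e.g. "qwen"
    enhanced_caption: dict = field(default_factory=dict)  # e.g. "qwen_lm"


w = Window(qwen_llm_input={"prompt": "default", "frames": []})
w.caption["qwen"] = "A car drives down a rainy street."  # CaptionGenerationStage output
w.enhanced_caption["qwen_lm"] = (  # CaptionEnhancementStage output
    "A dark sedan drives slowly along a rain-slicked city street at dusk."
)
```

Each downstream stage only reads the field the upstream stage populated, so the two can be enabled or disabled independently.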

Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| model_dir | str | "models/qwen" | Directory for model weights; downloaded on each node if missing. |
| model_variant | str | "qwen" | Vision-language model variant. One of qwen, nemotron, nemotron-bf16, nemotron-fp8, nemotron-nvfp4. |
| caption_batch_size | int | 16 | Batch size for caption generation. |
| fp8 | bool | False | Use FP8 weights when available. |
| max_output_tokens | int | 512 | Maximum number of tokens to generate per caption. |
| model_does_preprocess | bool | False | Whether the model performs its own preprocessing. |
| disable_mmcache | bool | False | Disable the multimodal cache for generation backends that support it. |
| generate_stage2_caption | bool | False | Enable a second-pass caption for refinement. |
| stage2_prompt_text | str \| None | None | Custom prompt for stage-2 caption refinement. |
| verbose | bool | False | Emit additional logs during generation. |

Preview Generation

Generate lightweight .webp previews for each caption window to support review and QA workflows. A dedicated PreviewStage reads per-window mp4 bytes and encodes WebP using ffmpeg.

Preview Parameters

  • target_fps (default 1.0): Target frames per second for preview generation.
  • target_height (default 240): Output height. Width auto-scales to preserve aspect ratio.
  • compression_level (range 0–6, default 6): WebP compression effort; higher values spend more encoding time to achieve better compression at a given quality.
  • quality (range 0–100, default 50): WebP quality. Higher values increase quality and size.
  • num_cpus_per_worker (default 4.0): Number of CPU threads mapped to ffmpeg -threads.
  • verbose (default False): Emit more logs.
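As a hedged sketch, these parameters map onto an ffmpeg invocation roughly like the following. `build_preview_cmd` is a hypothetical helper, and the exact flags the stage passes may differ:

```python
def build_preview_cmd(
    target_fps: float = 1.0,
    target_height: int = 240,
    compression_level: int = 6,
    quality: int = 50,
    threads: int = 4,
) -> list:
    """Assemble an ffmpeg command that reads mp4 bytes on stdin and
    writes an animated WebP to stdout (illustrative mapping only)."""
    return [
        "ffmpeg", "-y",
        "-i", "pipe:0",
        "-vf", f"fps={target_fps},scale=-2:{target_height}",  # width auto-scales to keep aspect
        "-c:v", "libwebp",
        "-compression_level", str(compression_level),
        "-quality", str(quality),
        "-threads", str(threads),
        "-f", "webp",
        "pipe:1",
    ]
```

The command could then be run with `subprocess.run(cmd, input=mp4_bytes, capture_output=True)` to obtain the WebP bytes.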

Behavior notes:

  • If the input frame rate is lower than target_fps or the input height is lower than target_height, the stage logs a warning and preview quality can degrade.
  • If ffmpeg fails, the stage logs the error and skips assigning preview bytes for that window.

Example: Configure PreviewStage

```python
from nemo_curator.stages.video.preview.preview import PreviewStage

preview = PreviewStage(
    target_fps=1.0,
    target_height=240,
    compression_level=6,
    quality=50,
    num_cpus_per_worker=4.0,
    verbose=False,
)
```

Outputs

The stage writes .webp files under the previews/ directory that ClipWriterStage manages. Use the helper to resolve the path:

```python
from nemo_curator.stages.video.io.clip_writer import ClipWriterStage

previews_dir = ClipWriterStage.get_output_path_previews("/outputs")
```

Refer to Save & Export for the directory structure and file locations.

Requirements and Troubleshooting

  • ffmpeg with WebP (libwebp) support must be available in the environment.
  • If you observe warnings about low frame rate or height, consider lowering target_fps or target_height to better match inputs.
  • On encoding errors, check logs for the ffmpeg command and output to diagnose missing encoders.