bridge.recipes.nemotron_vl.nemotron_nano_v2_vl#

Module Contents#

Functions#

nemotron_nano_v2_vl_12b_pretrain_config

Create a pre-training configuration for Nemotron Nano V2 VL.

nemotron_nano_v2_vl_12b_finetune_config

Create a finetuning configuration for Nemotron Nano V2 VL.

API#

bridge.recipes.nemotron_vl.nemotron_nano_v2_vl.nemotron_nano_v2_vl_12b_pretrain_config(
dir: Optional[str] = None,
name: str = 'nemotron_nano_v2_vl_pretrain',
hf_model_path: str = 'nvidia/NVIDIA-Nemotron-Nano-12B-v2-VL-BF16',
dataset_type: Optional[str] = None,
mock: bool = False,
dataset_maker_name: str = 'make_cord_v2_dataset',
tensor_parallelism: int = 4,
pipeline_parallelism: int = 1,
pipeline_parallelism_dtype: Optional[torch.dtype] = None,
virtual_pipeline_parallelism: Optional[int] = None,
context_parallelism: int = 1,
sequence_parallelism: bool = False,
train_iters: int = 300000,
global_batch_size: int = 32,
micro_batch_size: int = 2,
seq_length: int = 4096,
lr: float = 0.0003,
min_lr: float = 3e-05,
lr_warmup_iters: int = 500,
lr_decay_iters: Optional[int] = None,
precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]] = 'bf16_mixed',
comm_overlap_config: Optional[megatron.bridge.training.comm_overlap.CommOverlapConfig] = None,
save_interval: Optional[int] = 200,
) → megatron.bridge.training.config.ConfigContainer#

Create a pre-training configuration for Nemotron Nano V2 VL.

Note: The current dataset pipeline is text-centric. To train on multimodal tokens, your preprocessed data should include placeholder tokens (e.g., `<image>`) as needed.
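The following is a minimal usage sketch. The import path is inferred from this page's module name and the fully qualified names above, and the output directory is hypothetical; each keyword matches a parameter documented in the signature.

```python
from megatron.bridge.recipes.nemotron_vl.nemotron_nano_v2_vl import (
    nemotron_nano_v2_vl_12b_pretrain_config,
)

# Build a pre-training ConfigContainer, overriding a few documented defaults.
config = nemotron_nano_v2_vl_12b_pretrain_config(
    dir="/results/nemotron_nano_v2_vl",  # hypothetical output directory
    mock=True,                           # mock data for a quick smoke test
    tensor_parallelism=4,
    global_batch_size=32,
    micro_batch_size=2,
    seq_length=4096,
    train_iters=1000,
)

# The returned object is a megatron.bridge.training.config.ConfigContainer;
# hand it to your usual Megatron-Bridge training entry point.
```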

bridge.recipes.nemotron_vl.nemotron_nano_v2_vl.nemotron_nano_v2_vl_12b_finetune_config(
*,
pretrained_checkpoint: str = '',
lora_on_language_model: bool = False,
lora_on_vision_model: bool = False,
save_checkpoint_dir: Optional[str] = None,
**pretrain_kwargs,
) → megatron.bridge.training.config.ConfigContainer#

Create a finetuning configuration for Nemotron Nano V2 VL.

This helper wraps `nemotron_nano_v2_vl_12b_pretrain_config`, forwarding all keyword arguments to it while additionally wiring up the `CheckpointConfig` for finetuning from a given `pretrained_checkpoint`.

Parameters:

- pretrained_checkpoint (str): Path to a Megatron-Bridge checkpoint (or a directory produced by convert_ckpt_hf_to_megatron) that will be loaded before training.
- save_checkpoint_dir (str | None, default run_output_dir / "checkpoints"): Directory where new checkpoints are saved and resumed from. If not provided, the default path chosen by nemotron_nano_v2_vl_12b_pretrain_config is reused.
- lora_on_language_model (bool, default False): Whether to apply PEFT (LoRA) to the language model.
- lora_on_vision_model (bool, default False): Whether to apply PEFT (LoRA) to the vision model.
- **pretrain_kwargs (Any): Additional keyword arguments forwarded verbatim to nemotron_nano_v2_vl_12b_pretrain_config to customise the base recipe (e.g. batch size, learning rate, parallelism).
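As a usage sketch (with a hypothetical checkpoint path; the import path is inferred from the fully qualified names above), the finetuning helper takes a pretrained checkpoint plus any pretrain overrides forwarded through **pretrain_kwargs:

```python
from megatron.bridge.recipes.nemotron_vl.nemotron_nano_v2_vl import (
    nemotron_nano_v2_vl_12b_finetune_config,
)

config = nemotron_nano_v2_vl_12b_finetune_config(
    # Hypothetical path, e.g. the output of convert_ckpt_hf_to_megatron.
    pretrained_checkpoint="/checkpoints/nemotron_nano_v2_vl_12b",
    # Apply LoRA to the language model only.
    lora_on_language_model=True,
    lora_on_vision_model=False,
    # Remaining kwargs are forwarded to nemotron_nano_v2_vl_12b_pretrain_config.
    train_iters=2000,
    lr=1e-4,
    global_batch_size=16,
)
```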