bridge.recipes.gemma3_vl.gemma3_vl#
Module Contents#
Classes#
- Gemma3VLCommonKwargs: Typed options accepted by Gemma3-VL recipe helper functions.
Functions#
- gemma3_vl_4b_finetune_config: Return a fine-tuning config for Gemma3-VL 4B Instruct.
- gemma3_vl_12b_finetune_config: Return a fine-tuning config for Gemma3-VL 12B Instruct.
- gemma3_vl_27b_finetune_config: Return a fine-tuning config for Gemma3-VL 27B Instruct.
- _gemma3_vl_common: Create a fine-tuning configuration for Gemma3-VL models using a given HuggingFace path.
API#
- class bridge.recipes.gemma3_vl.gemma3_vl.Gemma3VLCommonKwargs#
Bases: typing_extensions.TypedDict
Typed options accepted by Gemma3-VL recipe helper functions.
Initialization
Initialize self. See help(type(self)) for accurate signature.
- hf_path: str#
None
- dir: Optional[str]#
None
- name: str#
None
- train_data_path: Optional[List[str]]#
None
- valid_data_path: Optional[List[str]]#
None
- test_data_path: Optional[List[str]]#
None
- dataset_type: Optional[str]#
None
- image_folder: Optional[str]#
None
- tokenizer_model: Optional[str]#
None
- tensor_model_parallel_size: int#
None
- pipeline_model_parallel_size: int#
None
- pipeline_dtype: Optional[torch.dtype]#
None
- virtual_pipeline_model_parallel_size: Optional[int]#
None
- context_parallel_size: int#
None
- sequence_parallel: bool#
None
- use_megatron_fsdp: bool#
None
- train_iters: int#
None
- global_batch_size: int#
None
- micro_batch_size: int#
None
- seq_length: int#
None
- lr: float#
None
- min_lr: float#
None
- lr_warmup_iters: int#
None
- lr_decay_iters: Optional[int]#
None
- eval_interval: int#
None
- save_interval: int#
None
- precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]]#
None
- comm_overlap_config: Optional[megatron.bridge.training.comm_overlap.CommOverlapConfig]#
None
- freeze_language_model: bool#
None
- freeze_vision_model: bool#
None
- freeze_vision_projection: bool#
None
- pretrained_checkpoint: Optional[str]#
None
- peft: Optional[Union[str, megatron.bridge.peft.base.PEFT]]#
None
- finetune_lr: float#
None
- wandb_project: Optional[str]#
None
- wandb_entity: Optional[str]#
None
- wandb_exp_name: Optional[str]#
None
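The options above are plain TypedDict keys, so a subset of them can be collected in a dict and unpacked into any of the recipe helpers below. The following is a minimal sketch, assuming the module is importable under the path shown in the page header (installs that namespace it under megatron.bridge.recipes would adjust the import) and that the keys are optional overrides; all paths and run names are hypothetical.

```python
# Sketch: collecting Gemma3VLCommonKwargs overrides and unpacking them into a
# recipe helper. Only the keys to change need to be present; the helper fills
# in its own defaults for the rest. Paths and names below are hypothetical.
from bridge.recipes.gemma3_vl.gemma3_vl import (
    Gemma3VLCommonKwargs,
    gemma3_vl_4b_finetune_config,
)

overrides: Gemma3VLCommonKwargs = {
    "dir": "/results/gemma3_vl_4b",         # output directory (hypothetical)
    "name": "gemma3_vl_4b_lora",
    "peft": "lora",
    "global_batch_size": 16,
    "micro_batch_size": 1,
    "wandb_project": "gemma3-vl-finetune",  # hypothetical W&B project
}

config = gemma3_vl_4b_finetune_config(**overrides)
```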
- bridge.recipes.gemma3_vl.gemma3_vl.gemma3_vl_4b_finetune_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.gemma3_vl.gemma3_vl.Gemma3VLCommonKwargs],
)#
Return a fine-tuning config for Gemma3-VL 4B Instruct.
Default configuration: 1 node, 8 GPUs
LoRA/DoRA: TP=1, PP=1, LR=1e-4
Full SFT: TP=1, PP=1, LR=5e-6
See _gemma3_vl_common for the full list of parameters.
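A minimal usage sketch for the 4B recipe is below. The LoRA defaults quoted above (TP=1, PP=1, LR=1e-4) come from the recipe itself; only run-specific values are overridden, and the directory and run name are hypothetical.

```python
# Sketch: LoRA fine-tuning config for Gemma3-VL 4B Instruct.
from bridge.recipes.gemma3_vl.gemma3_vl import gemma3_vl_4b_finetune_config

cfg = gemma3_vl_4b_finetune_config(
    dir="/results/gemma3_vl_4b_lora",  # checkpoint/log directory (hypothetical)
    name="gemma3_vl_4b_lora",
    peft="lora",           # per the LoRA/DoRA defaults above; peft=None means full SFT
    train_iters=1000,
    global_batch_size=16,
)
```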
- bridge.recipes.gemma3_vl.gemma3_vl.gemma3_vl_12b_finetune_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.gemma3_vl.gemma3_vl.Gemma3VLCommonKwargs],
)#
Return a fine-tuning config for Gemma3-VL 12B Instruct.
Default configuration: 1 node, 8 GPUs
LoRA/DoRA: TP=1, PP=1, LR=1e-4
Full SFT: TP=4, PP=1, LR=5e-6
See _gemma3_vl_common for the full list of parameters.
- bridge.recipes.gemma3_vl.gemma3_vl.gemma3_vl_27b_finetune_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.gemma3_vl.gemma3_vl.Gemma3VLCommonKwargs],
)#
Return a fine-tuning config for Gemma3-VL 27B Instruct.
Default configuration: 2 nodes, 16 GPUs total
LoRA/DoRA: TP=4, PP=1, LR=1e-4
Full SFT: TP=8, PP=2, LR=5e-6
See _gemma3_vl_common for the full list of parameters.
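For the 27B model, the sketch below mirrors the full-SFT defaults listed above (2 nodes, 16 GPUs, TP=8, PP=2). The pipeline_dtype and sequence_parallel settings are assumptions for a PP>1 layout, not values mandated by the recipe, and the paths are hypothetical.

```python
# Sketch: full-SFT config for Gemma3-VL 27B Instruct across 2 nodes (16 GPUs).
import torch

from bridge.recipes.gemma3_vl.gemma3_vl import gemma3_vl_27b_finetune_config

cfg = gemma3_vl_27b_finetune_config(
    dir="/results/gemma3_vl_27b_sft",   # hypothetical output directory
    name="gemma3_vl_27b_sft",
    peft=None,                          # no adapter, i.e. full supervised fine-tuning
    tensor_model_parallel_size=8,
    pipeline_model_parallel_size=2,
    pipeline_dtype=torch.bfloat16,      # assumption: explicit dtype for PP > 1
    sequence_parallel=True,             # assumption: often paired with TP > 1
)
```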
- bridge.recipes.gemma3_vl.gemma3_vl._gemma3_vl_common(
- hf_path: str,
- dir: Optional[str] = None,
- name: str = 'gemma3_vl_finetune',
- pretrained_checkpoint: Optional[str] = None,
- train_data_path: Optional[List[str]] = None,
- valid_data_path: Optional[List[str]] = None,
- test_data_path: Optional[List[str]] = None,
- dataset_type: Optional[str] = None,
- image_folder: Optional[str] = None,
- tokenizer_model: Optional[str] = None,
- tensor_model_parallel_size: int = 2,
- pipeline_model_parallel_size: int = 1,
- pipeline_dtype: Optional[torch.dtype] = None,
- virtual_pipeline_model_parallel_size: Optional[int] = None,
- context_parallel_size: int = 1,
- sequence_parallel: bool = False,
- use_megatron_fsdp: bool = False,
- train_iters: int = 300000,
- global_batch_size: int = 32,
- micro_batch_size: int = 2,
- seq_length: int = 4096,
- lr: float = 0.0003,
- min_lr: float = 3e-05,
- lr_warmup_iters: int = 500,
- lr_decay_iters: Optional[int] = None,
- eval_interval: int = 500,
- save_interval: int = 500,
- precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]] = 'bf16_mixed',
- comm_overlap_config: Optional[megatron.bridge.training.comm_overlap.CommOverlapConfig] = None,
- freeze_language_model: bool = False,
- freeze_vision_model: bool = False,
- freeze_vision_projection: bool = False,
- peft: Optional[Union[str, megatron.bridge.peft.base.PEFT]] = None,
- finetune_lr: Optional[float] = None,
- wandb_project: Optional[str] = None,
- wandb_entity: Optional[str] = None,
- wandb_exp_name: Optional[str] = None,
)#
Create a fine-tuning configuration for Gemma3-VL models using a given HuggingFace path.
The dataset pipeline follows the Gemma3-VL architecture. To train on multimodal tokens, ensure your preprocessed data includes the appropriate image placeholder tokens.
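As a concrete illustration of the data-related parameters, the sketch below points one of the public helpers (which forward their kwargs to this function) at preprocessed multimodal data. The HuggingFace model ID, file locations, and folder layout are placeholders, not values prescribed by the recipe.

```python
# Sketch: wiring preprocessed multimodal data into a Gemma3-VL recipe.
# All paths are placeholders; the train/valid records are expected to contain
# the image placeholder tokens mentioned above.
from bridge.recipes.gemma3_vl.gemma3_vl import gemma3_vl_12b_finetune_config

cfg = gemma3_vl_12b_finetune_config(
    hf_path="google/gemma-3-12b-it",             # base model to fine-tune from
    train_data_path=["/data/gemma3_vl/train.jsonl"],
    valid_data_path=["/data/gemma3_vl/val.jsonl"],
    image_folder="/data/gemma3_vl/images",        # images referenced by the records
    seq_length=4096,
    micro_batch_size=1,
)
```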