bridge.recipes.qwen_vl.qwen25_vl#
Module Contents#
Classes#
| Class | Description |
|---|---|
| `Qwen25VLCommonKwargs` | Typed options accepted by Qwen2.5-VL recipe helper functions. |
Functions#
| Function | Description |
|---|---|
| `qwen25_vl_3b_finetune_config` | Return a fine-tuning config for Qwen2.5-VL 3B Instruct. |
| `qwen25_vl_7b_finetune_config` | Return a fine-tuning config for Qwen2.5-VL 7B Instruct. |
| `qwen25_vl_32b_finetune_config` | Return a fine-tuning config for Qwen2.5-VL 32B Instruct. |
| `qwen25_vl_72b_finetune_config` | Return a fine-tuning config for Qwen2.5-VL 72B Instruct. |
| `_qwen25_vl_common` | Create a fine-tuning configuration for Qwen2.5-VL models using a given HuggingFace path. |
API#
- class bridge.recipes.qwen_vl.qwen25_vl.Qwen25VLCommonKwargs#
Bases: typing_extensions.TypedDict

Typed options accepted by Qwen2.5-VL recipe helper functions.
- hf_path: str#
None
- dir: Optional[str]#
None
- name: str#
None
- train_data_path: Optional[List[str]]#
None
- valid_data_path: Optional[List[str]]#
None
- test_data_path: Optional[List[str]]#
None
- dataset_type: Optional[str]#
None
- image_folder: Optional[str]#
None
- tokenizer_model: Optional[str]#
None
- tensor_parallelism: int#
None
- pipeline_parallelism: int#
None
- pipeline_parallelism_dtype: Optional[torch.dtype]#
None
- virtual_pipeline_parallelism: Optional[int]#
None
- context_parallelism: int#
None
- sequence_parallelism: bool#
None
- use_megatron_fsdp: bool#
None
- train_iters: int#
None
- global_batch_size: int#
None
- micro_batch_size: int#
None
- seq_length: int#
None
- lr: float#
None
- min_lr: float#
None
- lr_warmup_iters: int#
None
- lr_decay_iters: Optional[int]#
None
- eval_interval: int#
None
- save_interval: int#
None
- precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]]#
None
- comm_overlap_config: Optional[megatron.bridge.training.comm_overlap.CommOverlapConfig]#
None
- freeze_language_model: bool#
None
- freeze_vision_model: bool#
None
- freeze_vision_projection: bool#
None
- pretrained_checkpoint: Optional[str]#
None
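Because every option has a default in `_qwen25_vl_common`, the recipe helpers accept any subset of these keys. A minimal sketch of building type-checked overrides and passing them along; the values are illustrative only, and the sketch assumes the TypedDict marks its keys optional (consistent with all parameters having defaults):

```python
from bridge.recipes.qwen_vl.qwen25_vl import (
    Qwen25VLCommonKwargs,
    qwen25_vl_7b_finetune_config,
)

# Illustrative values only; any subset of keys can be supplied because the
# recipe defaults every parameter.
overrides: Qwen25VLCommonKwargs = {
    "dir": "/results/qwen25_vl_7b",
    "name": "qwen25_vl_7b_sft",
    "micro_batch_size": 1,
    "seq_length": 4096,
}

config = qwen25_vl_7b_finetune_config(**overrides)
```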
- bridge.recipes.qwen_vl.qwen25_vl.qwen25_vl_3b_finetune_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen_vl.qwen25_vl.Qwen25VLCommonKwargs],
- )#
Return a fine-tuning config for Qwen2.5-VL 3B Instruct.
See _qwen25_vl_common for the full list of parameters.
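A hedged usage sketch with dataset options; all paths and values below are placeholders, and the accepted dataset layout depends on how the data was preprocessed:

```python
from bridge.recipes.qwen_vl.qwen25_vl import qwen25_vl_3b_finetune_config

# Placeholder paths; point these at your own preprocessed conversation data.
config = qwen25_vl_3b_finetune_config(
    train_data_path=["/data/qwen_vl/train.json"],
    valid_data_path=["/data/qwen_vl/val.json"],
    image_folder="/data/qwen_vl/images",
    train_iters=1000,
    lr=2e-5,
)
```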
- bridge.recipes.qwen_vl.qwen25_vl.qwen25_vl_7b_finetune_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen_vl.qwen25_vl.Qwen25VLCommonKwargs],
- )#
Return a fine-tuning config for Qwen2.5-VL 7B Instruct.
See _qwen25_vl_common for the full list of parameters.
- bridge.recipes.qwen_vl.qwen25_vl.qwen25_vl_32b_finetune_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen_vl.qwen25_vl.Qwen25VLCommonKwargs],
- )#
Return a fine-tuning config for Qwen2.5-VL 32B Instruct.
See _qwen25_vl_common for the full list of parameters.
- bridge.recipes.qwen_vl.qwen25_vl.qwen25_vl_72b_finetune_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen_vl.qwen25_vl.Qwen25VLCommonKwargs],
- )#
Return a fine-tuning config for Qwen2.5-VL 72B Instruct.
See _qwen25_vl_common for the full list of parameters.
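The larger variants usually need more model parallelism than the recipe defaults (tensor_parallelism=2, pipeline_parallelism=1). A sketch of overriding those knobs; the degrees below are placeholders chosen to illustrate the parameters, not tuned settings:

```python
import torch

from bridge.recipes.qwen_vl.qwen25_vl import qwen25_vl_72b_finetune_config

# Placeholder parallelism degrees; size them to your cluster so that
# TP x PP x DP matches the world size.
config = qwen25_vl_72b_finetune_config(
    tensor_parallelism=8,
    pipeline_parallelism=4,
    pipeline_parallelism_dtype=torch.bfloat16,
    sequence_parallelism=True,
)
```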
- bridge.recipes.qwen_vl.qwen25_vl._qwen25_vl_common(
- hf_path: str,
- dir: Optional[str] = None,
- name: str = 'qwen25_vl_finetune',
- pretrained_checkpoint: Optional[str] = None,
- train_data_path: Optional[List[str]] = None,
- valid_data_path: Optional[List[str]] = None,
- test_data_path: Optional[List[str]] = None,
- dataset_type: Optional[str] = None,
- image_folder: Optional[str] = None,
- tokenizer_model: Optional[str] = None,
- tensor_parallelism: int = 2,
- pipeline_parallelism: int = 1,
- pipeline_parallelism_dtype: Optional[torch.dtype] = None,
- virtual_pipeline_parallelism: Optional[int] = None,
- context_parallelism: int = 1,
- sequence_parallelism: bool = False,
- use_megatron_fsdp: bool = False,
- train_iters: int = 300000,
- global_batch_size: int = 32,
- micro_batch_size: int = 2,
- seq_length: int = 4096,
- lr: float = 0.0003,
- min_lr: float = 3e-05,
- lr_warmup_iters: int = 500,
- lr_decay_iters: Optional[int] = None,
- eval_interval: int = 500,
- save_interval: int = 500,
- precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]] = 'bf16_mixed',
- comm_overlap_config: Optional[megatron.bridge.training.comm_overlap.CommOverlapConfig] = None,
- freeze_language_model: bool = False,
- freeze_vision_model: bool = False,
- freeze_vision_projection: bool = False,
- )#
Create a fine-tuning configuration for Qwen2.5-VL models using a given HuggingFace path.
The dataset pipeline is conversation-based. To train multimodal tokens, ensure your preprocessed data includes placeholders (e.g., `<image>`) as needed.
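As an illustration of such a record, a LLaVA-style conversation layout is sketched below; this layout is an assumption for illustration only, since the exact schema depends on dataset_type and your preprocessing:

```python
# Hypothetical record layout (LLaVA-style assumed); the actual schema
# depends on the dataset_type used during preprocessing.
record = {
    "image": "0001.jpg",  # resolved relative to image_folder
    "conversations": [
        {"from": "human", "value": "<image>\nWhat is shown in this picture?"},
        {"from": "gpt", "value": "A city skyline at dusk."},
    ],
}
```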