bridge.recipes.glm_vl.glm_45v#
Module Contents#
Classes#
- GLM45VCommonKwargs: Typed options accepted by GLM-4.5V recipe helper functions.
Functions#
- set_glm_45v_pipeline_model_parallel_layout: Set the GLM-4.5V pipeline model parallel layout.
- glm_45v_finetune_config: Return a fine-tuning config for GLM-4.5V (based on GLM-4.5 Air 106B).
- _glm_45v_common: Create a fine-tuning configuration for GLM-4.5V models using a given HuggingFace path.
API#
- bridge.recipes.glm_vl.glm_45v.set_glm_45v_pipeline_model_parallel_layout(
- model_cfg: megatron.bridge.models.gpt_provider.GPTModelProvider,
- layout: Optional[Union[str, List[List[str]]]] = None,
- is_peft: bool = False,
- )
Set the GLM-4.5V pipeline model parallel layout.
GLM-4.5V (based on GLM-4.5 Air) has 46 decoder layers and no MTP layers. This function sets up predefined layouts for common PP/VP combinations.
- Parameters:
model_cfg – The model provider configuration to modify.
layout – Optional custom layout. If None, uses predefined layouts based on PP/VP sizes.
is_peft – Whether the model is trained with PEFT.
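For illustration, a minimal usage sketch follows. It assumes the package imports as `megatron.bridge.recipes.glm_vl.glm_45v` (this page abbreviates the module prefix to `bridge`), that the returned config container exposes the model provider as `cfg.model`, and that the HuggingFace repo id is a placeholder; none of these are confirmed by this page.

```python
# Hedged sketch, not verified against the library: build a recipe config,
# then apply a predefined pipeline layout for the configured PP/VP sizes.
from megatron.bridge.recipes.glm_vl.glm_45v import (  # assumed import path
    glm_45v_finetune_config,
    set_glm_45v_pipeline_model_parallel_layout,
)

cfg = glm_45v_finetune_config(
    hf_path="zai-org/GLM-4.5V",      # placeholder HF repo id
    pipeline_model_parallel_size=4,
)

# layout=None selects a predefined layout for the configured PP/VP sizes;
# a custom layout may instead be passed as a string or a list of per-stage
# layer lists. `cfg.model` is assumed to hold the GPTModelProvider.
set_glm_45v_pipeline_model_parallel_layout(cfg.model, layout=None, is_peft=False)
```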
- class bridge.recipes.glm_vl.glm_45v.GLM45VCommonKwargs#
Bases: typing_extensions.TypedDict
Typed options accepted by GLM-4.5V recipe helper functions.
- hf_path: str#
- dir: Optional[str]#
- name: str#
- train_data_path: Optional[List[str]]#
- valid_data_path: Optional[List[str]]#
- test_data_path: Optional[List[str]]#
- dataset_type: Optional[str]#
- image_folder: Optional[str]#
- tokenizer_model: Optional[str]#
- tensor_model_parallel_size: int#
- pipeline_model_parallel_size: int#
- pipeline_dtype: Optional[torch.dtype]#
- virtual_pipeline_model_parallel_size: Optional[int]#
- expert_model_parallel_size: int#
- context_parallel_size: int#
- sequence_parallel: bool#
- use_megatron_fsdp: bool#
- train_iters: int#
- global_batch_size: int#
- micro_batch_size: int#
- seq_length: int#
- lr: float#
- min_lr: float#
- lr_warmup_iters: int#
- lr_decay_iters: Optional[int]#
- eval_interval: int#
- save_interval: int#
- precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]]#
- comm_overlap_config: Optional[megatron.bridge.training.comm_overlap.CommOverlapConfig]#
- freeze_language_model: bool#
- freeze_vision_model: bool#
- freeze_vision_projection: bool#
- pretrained_checkpoint: Optional[str]#
- layout: Optional[Union[str, List[List[str]]]]#
- peft: Optional[Union[str, megatron.bridge.peft.base.PEFT]]#
- finetune_lr: float#
- wandb_project: Optional[str]#
- wandb_entity: Optional[str]#
- wandb_exp_name: Optional[str]#
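Because the options type is a TypedDict, callers typically build a plain dict of these keys and unpack it into the recipe helpers. Below is a minimal sketch, assuming the TypedDict is declared with total=False (so partial dicts are valid) and using placeholder values throughout.

```python
from megatron.bridge.recipes.glm_vl.glm_45v import (  # assumed import path
    GLM45VCommonKwargs,
    glm_45v_finetune_config,
)

# Assumed total=False: only the keys being overridden need to be present.
overrides: GLM45VCommonKwargs = {
    "hf_path": "zai-org/GLM-4.5V",      # placeholder HF repo id
    "name": "glm_45v_lora_run",
    "peft": "lora",
    "pipeline_model_parallel_size": 8,
    "expert_model_parallel_size": 4,
    "global_batch_size": 32,
    "seq_length": 8192,
}

cfg = glm_45v_finetune_config(**overrides)
```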
- bridge.recipes.glm_vl.glm_45v.glm_45v_finetune_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.glm_vl.glm_45v.GLM45VCommonKwargs],
- )
Return a fine-tuning config for GLM-4.5V (based on GLM-4.5 Air 106B).
Default configuration:
- LoRA/DoRA: TP=1, PP=8, EP=4 (64 GPUs, 8 nodes), LR=1e-4
- Full SFT: TP=1, PP=8, EP=16 (512 GPUs, 64 nodes), LR=5e-6
GLM-4.5V is a Vision-Language model with:
- 106B total parameters (based on GLM-4.5 Air)
- Sparse MoE with shared experts
- Multi-modality support for images and videos
See `_glm_45v_common` for the full list of parameters.
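A hedged sketch of the two documented presets follows. Whether passing `peft` alone also switches the parallelism and learning-rate defaults, or whether they must be set explicitly as shown, is not stated on this page; the repo id is a placeholder.

```python
from megatron.bridge.recipes.glm_vl.glm_45v import glm_45v_finetune_config  # assumed path

# LoRA/DoRA preset: TP=1, PP=8, EP=4, LR=1e-4 (64 GPUs, 8 nodes)
lora_cfg = glm_45v_finetune_config(
    hf_path="zai-org/GLM-4.5V",  # placeholder HF repo id
    peft="lora",
)

# Full SFT preset: TP=1, PP=8, EP=16, LR=5e-6 (512 GPUs, 64 nodes); values
# are passed explicitly in case the helper does not switch presets itself.
sft_cfg = glm_45v_finetune_config(
    hf_path="zai-org/GLM-4.5V",
    peft=None,
    expert_model_parallel_size=16,
    finetune_lr=5e-6,
)
```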
- bridge.recipes.glm_vl.glm_45v._glm_45v_common(
- hf_path: str,
- dir: Optional[str] = None,
- name: str = 'glm_45v_finetune',
- pretrained_checkpoint: Optional[str] = None,
- train_data_path: Optional[List[str]] = None,
- valid_data_path: Optional[List[str]] = None,
- test_data_path: Optional[List[str]] = None,
- dataset_type: Optional[str] = None,
- image_folder: Optional[str] = None,
- tokenizer_model: Optional[str] = None,
- tensor_model_parallel_size: int = 1,
- pipeline_model_parallel_size: int = 2,
- pipeline_dtype: Optional[torch.dtype] = None,
- virtual_pipeline_model_parallel_size: Optional[int] = None,
- expert_model_parallel_size: int = 4,
- context_parallel_size: int = 1,
- sequence_parallel: bool = False,
- use_megatron_fsdp: bool = False,
- train_iters: int = 300000,
- global_batch_size: int = 32,
- micro_batch_size: int = 1,
- seq_length: int = 8192,
- lr: float = 0.0003,
- min_lr: float = 3e-05,
- lr_warmup_iters: int = 500,
- lr_decay_iters: Optional[int] = None,
- eval_interval: int = 500,
- save_interval: int = 500,
- precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]] = 'bf16_mixed',
- comm_overlap_config: Optional[megatron.bridge.training.comm_overlap.CommOverlapConfig] = None,
- freeze_language_model: bool = False,
- freeze_vision_model: bool = False,
- freeze_vision_projection: bool = False,
- layout: Optional[Union[str, List[List[str]]]] = None,
- peft: Optional[Union[str, megatron.bridge.peft.base.PEFT]] = None,
- finetune_lr: Optional[float] = None,
- wandb_project: Optional[str] = None,
- wandb_entity: Optional[str] = None,
- wandb_exp_name: Optional[str] = None,
- )
Create a fine-tuning configuration for GLM-4.5V models using a given HuggingFace path.
The dataset pipeline is conversation-based. To train multimodal tokens, ensure your preprocessed data includes the appropriate multimodal placeholder tokens (e.g., image placeholders) as needed.
GLM-4.5V is a Vision-Language model based on GLM-4.5 Air (106B parameters) with:
- Sparse MoE architecture with shared experts
- Multi-modal support for images and videos
- MRoPE (Multi-Resolution Rotary Position Embedding)
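For completeness, a sketch of wiring a conversation-style multimodal dataset into this helper. Note that `_glm_45v_common` is a private helper, so calling it directly may not be the intended entry point; all paths, the repo id, and the freezing choice below are illustrative placeholders.

```python
from megatron.bridge.recipes.glm_vl.glm_45v import _glm_45v_common  # assumed path

cfg = _glm_45v_common(
    hf_path="zai-org/GLM-4.5V",                     # placeholder HF repo id
    train_data_path=["/data/glm45v/train.jsonl"],   # placeholder data paths
    valid_data_path=["/data/glm45v/valid.jsonl"],
    image_folder="/data/glm45v/images",
    freeze_vision_model=True,   # e.g. tune only the LLM and projection
    peft="lora",
)
```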