bridge.recipes.glm.glm45#
Module Contents#
Classes#
| Class | Description |
|---|---|
| GLM45CommonKwargs | Typed options accepted by GLM 4.5 recipe helpers. |
| GLM45FinetuneKwargs | Typed options accepted by GLM 4.5 finetune recipe helpers. |
Functions#
| Function | Description |
|---|---|
| glm45_355b_pretrain_config | Return a pre-training config for GLM 4.5 355B-A32B variant. |
| glm45_air_106b_pretrain_config | Return a pre-training config for GLM 4.5 Air 106B-A12B variant. |
| _glm45_common | Create a pre-training configuration for GLM 4.5 family models using a given HuggingFace path. Mirrors the structure used in gpt_oss recipes for consistency. |
| glm45_355b_finetune_config | Return a finetuning config for GLM 4.5 355B-A32B variant. |
| glm45_air_106b_finetune_config | Return a finetuning config for GLM 4.5 Air 106B-A12B variant. |
| _glm45_finetune_common | Common finetuning configuration for GLM 4.5 models using a given HuggingFace path. |
API#
- class bridge.recipes.glm.glm45.GLM45CommonKwargs#
Bases: typing_extensions.TypedDict

Typed options accepted by GLM 4.5 recipe helpers.
- hf_path: str#
None
- dir: Optional[str]#
None
- name: str#
None
- data_paths: Optional[List[str]]#
None
- data_args_path: Optional[str]#
None
- train_data_path: Optional[List[str]]#
None
- valid_data_path: Optional[List[str]]#
None
- test_data_path: Optional[List[str]]#
None
- per_split_data_args_path: Optional[str]#
None
- mock: bool#
None
- dataset: Optional[Union[megatron.bridge.training.config.GPTDatasetConfig, megatron.bridge.training.config.FinetuningDatasetConfig, megatron.bridge.training.config.DatasetProvider]]#
None
- num_layers: int#
None
- tensor_model_parallel_size: int#
None
- pipeline_model_parallel_size: int#
None
- pipeline_dtype: Optional[torch.dtype]#
None
- virtual_pipeline_model_parallel_size: Optional[int]#
None
- context_parallel_size: int#
None
- expert_model_parallel_size: Optional[int]#
None
- sequence_parallel: bool#
None
- use_megatron_fsdp: bool#
None
- account_for_embedding_in_pipeline_split: bool#
None
- account_for_loss_in_pipeline_split: bool#
None
- cp_comm_type: Optional[str]#
None
- recompute_granularity: Optional[str]#
None
- recompute_modules: Optional[List[str]]#
None
- recompute_method: Optional[str]#
None
- recompute_num_layers: Optional[int]#
None
- mtp_num_layers: Optional[int]#
None
- mtp_loss_scaling_factor: Optional[float]#
None
- train_iters: int#
None
- global_batch_size: int#
None
- micro_batch_size: int#
None
- seq_length: int#
None
- lr: float#
None
- min_lr: float#
None
- lr_warmup_iters: int#
None
- lr_decay_iters: Optional[int]#
None
- eval_interval: int#
None
- save_interval: int#
None
- use_null_tokenizer: bool#
None
- precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]]#
None
- comm_overlap_config: Optional[megatron.bridge.training.comm_overlap.CommOverlapConfig]#
None
- pretrained_checkpoint: Optional[str]#
None
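At runtime a `typing_extensions.TypedDict` is an ordinary dict, so the keys above can be collected into a dict and unpacked into one of the pretrain helpers below. A minimal sketch; the values and output directory are illustrative, and the import path simply follows this page's module name:

```python
from bridge.recipes.glm.glm45 import glm45_355b_pretrain_config

# Keys mirror GLM45CommonKwargs; all values below are illustrative placeholders.
overrides = {
    "name": "glm45_355b_example",
    "dir": "/results/glm45_355b_example",  # hypothetical output directory
    "mock": True,                          # synthetic data, so no data_paths needed
    "train_iters": 100,
    "eval_interval": 50,
    "save_interval": 50,
}

cfg = glm45_355b_pretrain_config(**overrides)
```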
- class bridge.recipes.glm.glm45.GLM45FinetuneKwargs#
Bases: typing_extensions.TypedDict

Typed options accepted by GLM 4.5 finetune recipe helpers.
- hf_path: str#
None
- dir: Optional[str]#
None
- name: str#
None
- tensor_model_parallel_size: int#
None
- pipeline_model_parallel_size: int#
None
- pipeline_dtype: Optional[torch.dtype]#
None
- virtual_pipeline_model_parallel_size: Optional[int]#
None
- context_parallel_size: int#
None
- expert_model_parallel_size: Optional[int]#
None
- sequence_parallel: bool#
None
- use_megatron_fsdp: bool#
None
- pretrained_checkpoint: Optional[str]#
None
- peft: Optional[Union[str, megatron.bridge.peft.base.PEFT]]#
None
- packed_sequence: bool#
None
- train_iters: int#
None
- global_batch_size: Optional[int]#
None
- micro_batch_size: int#
None
- seq_length: int#
None
- finetune_lr: float#
None
- min_lr: float#
None
- lr_warmup_iters: int#
None
- lr_decay_iters: Optional[int]#
None
- eval_interval: int#
None
- save_interval: int#
None
- precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]]#
None
- comm_overlap_config: Optional[megatron.bridge.training.comm_overlap.CommOverlapConfig]#
None
- wandb_project: Optional[str]#
None
- wandb_entity: Optional[str]#
None
- wandb_exp_name: Optional[str]#
None
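The finetune keys follow the same pattern. The sketch below passes a subset of them to one of the finetune helpers; the checkpoint path and W&B names are placeholders, and `peft="lora"` matches the documented default:

```python
from bridge.recipes.glm.glm45 import glm45_355b_finetune_config

# Keys mirror GLM45FinetuneKwargs; paths and project names are placeholders.
finetune_overrides = {
    "name": "glm45_355b_lora_example",
    "pretrained_checkpoint": "/checkpoints/glm45_355b",  # hypothetical path
    "peft": "lora",
    "packed_sequence": True,
    "finetune_lr": 1e-4,
    "wandb_project": "glm45-finetune",
    "wandb_exp_name": "355b-lora-run0",
}

cfg = glm45_355b_finetune_config(**finetune_overrides)
```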
- bridge.recipes.glm.glm45.glm45_355b_pretrain_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.glm.glm45.GLM45CommonKwargs],
- )#

Return a pre-training config for GLM 4.5 355B-A32B variant.
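A sketch of a direct call with explicit data and schedule settings; the dataset blend path is a placeholder and the values are illustrative rather than recommended:

```python
from bridge.recipes.glm.glm45 import glm45_355b_pretrain_config

# Placeholder dataset blend and illustrative schedule; tune for your cluster.
cfg = glm45_355b_pretrain_config(
    name="glm45_355b_pretrain",
    data_paths=["/data/glm45/tokenized_text_document"],  # hypothetical prefix
    seq_length=4096,
    global_batch_size=2048,
    micro_batch_size=1,
    lr=1e-4,
    min_lr=1e-5,
    lr_warmup_iters=2000,
)
```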
- bridge.recipes.glm.glm45.glm45_air_106b_pretrain_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.glm.glm45.GLM45CommonKwargs],
- )#

Return a pre-training config for GLM 4.5 Air 106B-A12B variant.
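For the Air variant, a mock-data smoke test is a cheap way to validate a launch script before pointing at real data; everything below is illustrative:

```python
from bridge.recipes.glm.glm45 import glm45_air_106b_pretrain_config

# Short mock-data run; values are for a quick sanity check only.
cfg = glm45_air_106b_pretrain_config(
    name="glm45_air_smoke_test",
    mock=True,            # use a synthetic dataset instead of data_paths
    train_iters=20,
    global_batch_size=8,
    micro_batch_size=1,
    seq_length=4096,
)
```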
- bridge.recipes.glm.glm45._glm45_common(
- hf_path: str,
- dir: Optional[str] = None,
- name: str = 'default',
- data_paths: Optional[List[str]] = None,
- data_args_path: Optional[str] = None,
- train_data_path: Optional[List[str]] = None,
- valid_data_path: Optional[List[str]] = None,
- test_data_path: Optional[List[str]] = None,
- per_split_data_args_path: Optional[str] = None,
- mock: bool = False,
- dataset: Optional[Union[megatron.bridge.training.config.GPTDatasetConfig, megatron.bridge.training.config.FinetuningDatasetConfig, megatron.bridge.training.config.DatasetProvider]] = None,
- num_layers: int = None,
- tensor_model_parallel_size: int = 1,
- pipeline_model_parallel_size: int = 1,
- pipeline_dtype: Optional[torch.dtype] = None,
- virtual_pipeline_model_parallel_size: Optional[int] = None,
- context_parallel_size: int = 1,
- expert_model_parallel_size: int = 1,
- sequence_parallel: bool = False,
- use_megatron_fsdp: bool = False,
- account_for_embedding_in_pipeline_split: bool = False,
- account_for_loss_in_pipeline_split: bool = False,
- cp_comm_type: Optional[str] = None,
- recompute_granularity: Optional[str] = None,
- recompute_modules: Optional[List[str]] = None,
- recompute_method: Optional[str] = None,
- recompute_num_layers: Optional[int] = None,
- mtp_num_layers: Optional[int] = 1,
- mtp_loss_scaling_factor: Optional[float] = 0.3,
- train_iters: int = 1000000,
- global_batch_size: int = 2048,
- micro_batch_size: int = 1,
- seq_length: int = 4096,
- lr: float = 0.0001,
- min_lr: float = 1e-05,
- lr_warmup_iters: int = 2000,
- lr_decay_iters: Optional[int] = None,
- eval_interval: int = 2000,
- save_interval: int = 500,
- use_null_tokenizer: bool = True,
- precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]] = 'bf16_mixed',
- comm_overlap_config: Optional[megatron.bridge.training.comm_overlap.CommOverlapConfig] = None,
- pretrained_checkpoint: Optional[str] = None,
- )#

Create a pre-training configuration for GLM 4.5 family models using a given HuggingFace path. Mirrors the structure used in gpt_oss recipes for consistency.
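The public pretrain helpers accept the same keyword surface (GLM45CommonKwargs), so the parallelism and recomputation defaults above can typically be overridden through them rather than by calling this private helper directly. A sketch; the recompute settings are standard Megatron-style values, not taken from this recipe:

```python
from bridge.recipes.glm.glm45 import glm45_air_106b_pretrain_config

# Parallelism and recompute overrides; all values are illustrative.
cfg = glm45_air_106b_pretrain_config(
    name="glm45_air_custom_parallelism",
    mock=True,
    tensor_model_parallel_size=2,
    pipeline_model_parallel_size=4,
    expert_model_parallel_size=8,
    sequence_parallel=True,
    recompute_granularity="full",   # assumed standard Megatron recompute setting
    recompute_method="uniform",
    recompute_num_layers=1,
)
```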
- bridge.recipes.glm.glm45.glm45_355b_finetune_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.glm.glm45.GLM45FinetuneKwargs],
- )#

Return a finetuning config for GLM 4.5 355B-A32B variant.
Default configuration:

- LoRA/DoRA: TP=2, PP=4, EP=4 (32 GPUs), LR=1e-4
- Full SFT: TP=2, PP=8, EP=16 (256 GPUs, same as pretrain), LR=5e-6
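A sketch of switching from the default LoRA setup to full SFT. Passing `peft=None` is assumed here to disable PEFT and select the full-SFT defaults; the checkpoint path is a placeholder:

```python
from bridge.recipes.glm.glm45 import glm45_355b_finetune_config

# Default call: LoRA with the documented TP=2/PP=4/EP=4 layout.
lora_cfg = glm45_355b_finetune_config(
    name="glm45_355b_lora",
    pretrained_checkpoint="/checkpoints/glm45_355b",  # hypothetical path
)

# Assumption: peft=None requests full SFT, which the docstring pairs with
# TP=2/PP=8/EP=16 and a lower learning rate.
sft_cfg = glm45_355b_finetune_config(
    name="glm45_355b_sft",
    pretrained_checkpoint="/checkpoints/glm45_355b",
    peft=None,
    finetune_lr=5e-6,
)
```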
- bridge.recipes.glm.glm45.glm45_air_106b_finetune_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.glm.glm45.GLM45FinetuneKwargs],
- )#

Return a finetuning config for GLM 4.5 Air 106B-A12B variant.
Default configuration:

- LoRA/DoRA: TP=1, PP=2, EP=4 (8 GPUs, 1 node), LR=1e-4
- Full SFT: TP=1, PP=4, EP=8 (32 GPUs, same as pretrain), LR=5e-6
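With the documented LoRA defaults, the Air variant fits on a single 8-GPU node. A minimal sketch; the checkpoint path is a placeholder and packed sequences are optional:

```python
from bridge.recipes.glm.glm45 import glm45_air_106b_finetune_config

# Single-node LoRA run using the documented defaults (TP=1, PP=2, EP=4).
cfg = glm45_air_106b_finetune_config(
    name="glm45_air_lora",
    pretrained_checkpoint="/checkpoints/glm45_air_106b",  # hypothetical path
    packed_sequence=True,   # optional; packs short samples to fill seq_length
    seq_length=2048,
    micro_batch_size=1,
)
```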
- bridge.recipes.glm.glm45._glm45_finetune_common(
- hf_path: str,
- dir: Optional[str] = None,
- name: str = 'default',
- tensor_model_parallel_size: int = 1,
- pipeline_model_parallel_size: int = 1,
- pipeline_dtype: Optional[torch.dtype] = None,
- virtual_pipeline_model_parallel_size: Optional[int] = None,
- context_parallel_size: int = 1,
- expert_model_parallel_size: int = 1,
- sequence_parallel: bool = False,
- use_megatron_fsdp: bool = False,
- pretrained_checkpoint: Optional[str] = None,
- peft: Optional[Union[str, megatron.bridge.peft.base.PEFT]] = 'lora',
- packed_sequence: bool = False,
- train_iters: int = 1000,
- global_batch_size: int = 128,
- micro_batch_size: int = 1,
- seq_length: int = 2048,
- eval_interval: int = 50,
- save_interval: int = 50,
- finetune_lr: float = 0.0001,
- min_lr: float = 0.0,
- lr_warmup_iters: int = 50,
- lr_decay_iters: Optional[int] = None,
- precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]] = 'bf16_mixed',
- comm_overlap_config: Optional[megatron.bridge.training.comm_overlap.CommOverlapConfig] = None,
- wandb_project: Optional[str] = None,
- wandb_entity: Optional[str] = None,
- wandb_exp_name: Optional[str] = None,
- )#

Common finetuning configuration for GLM 4.5 models using a given HuggingFace path.
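As with pretraining, the public finetune helpers expose the same keyword surface as this helper, so its scheduler and logging defaults can be adjusted through them. A sketch; the W&B project, entity, and experiment names are placeholders:

```python
from bridge.recipes.glm.glm45 import glm45_air_106b_finetune_config

# Scheduler and Weights & Biases settings passed through the public helper;
# the project/entity/experiment names below are placeholders.
cfg = glm45_air_106b_finetune_config(
    name="glm45_air_lora_logged",
    train_iters=1000,
    lr_warmup_iters=50,
    eval_interval=50,
    save_interval=50,
    wandb_project="glm45-recipes",
    wandb_entity="my-team",
    wandb_exp_name="air-106b-lora",
)
```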