bridge.recipes.qwen.qwen2#
Module Contents#
Classes#
| Class | Description |
| --- | --- |
| Qwen2CommonKwargs | Typed options accepted by Qwen2/2.5 recipe helper functions. |
| Qwen2FinetuneKwargs | Typed options accepted by Qwen2/2.5 finetuning recipe helper functions. |
Functions#
| Function | Description |
| --- | --- |
| qwen2_500m_pretrain_config | Return a pre-training config for Qwen2 0.5B. |
| qwen2_1p5b_pretrain_config | Return a pre-training config for Qwen2 1.5B. |
| qwen2_7b_pretrain_config | Return a pre-training config for Qwen2 7B. |
| qwen2_72b_pretrain_config | Return a pre-training config for Qwen2 72B. |
| qwen25_500m_pretrain_config | Return a pre-training config for Qwen2.5 0.5B. |
| qwen25_1p5b_pretrain_config | Return a pre-training config for Qwen2.5 1.5B. |
| qwen25_7b_pretrain_config | Return a pre-training config for Qwen2.5 7B. |
| qwen25_14b_pretrain_config | Return a pre-training config for Qwen2.5 14B. |
| qwen25_32b_pretrain_config | Return a pre-training config for Qwen2.5 32B. |
| qwen25_72b_pretrain_config | Return a pre-training config for Qwen2.5 72B. |
| _qwen2_common | Create a pre-training configuration for Qwen2/Qwen2.5 models using a given HuggingFace path. |
| qwen2_500m_finetune_config | Return a finetuning config for Qwen2 500M. |
| qwen2_1p5b_finetune_config | Return a finetuning config for Qwen2 1.5B. |
| qwen2_7b_finetune_config | Return a finetuning config for Qwen2 7B. |
| qwen2_72b_finetune_config | Return a finetuning config for Qwen2 72B. |
| qwen25_500m_finetune_config | Return a finetuning config for Qwen2.5 500M. |
| qwen25_1p5b_finetune_config | Return a finetuning config for Qwen2.5 1.5B. |
| qwen25_7b_finetune_config | Return a finetuning config for Qwen2.5 7B. |
| qwen25_14b_finetune_config | Return a finetuning config for Qwen2.5 14B. |
| qwen25_32b_finetune_config | Return a finetuning config for Qwen2.5 32B. |
| qwen25_72b_finetune_config | Return a finetuning config for Qwen2.5 72B. |
| _qwen2_finetune_common | Common finetuning configuration for all Qwen2/2.5 models. |
API#
- class bridge.recipes.qwen.qwen2.Qwen2CommonKwargs#
Bases: typing_extensions.TypedDict
Typed options accepted by Qwen2/2.5 recipe helper functions.
Initialization
Initialize self. See help(type(self)) for accurate signature.
- hf_path: str#
None
- dir: Optional[str]#
None
- name: str#
None
- data_paths: Optional[List[str]]#
None
- data_args_path: Optional[str]#
None
- train_data_path: Optional[List[str]]#
None
- valid_data_path: Optional[List[str]]#
None
- test_data_path: Optional[List[str]]#
None
- per_split_data_args_path: Optional[str]#
None
- mock: bool#
None
- tensor_model_parallel_size: int#
None
- pipeline_model_parallel_size: int#
None
- pipeline_dtype: Optional[torch.dtype]#
None
- virtual_pipeline_model_parallel_size: Optional[int]#
None
- context_parallel_size: int#
None
- sequence_parallel: bool#
None
- use_megatron_fsdp: bool#
None
- check_for_nan_in_grad: bool#
None
- train_iters: int#
None
- global_batch_size: int#
None
- micro_batch_size: int#
None
- seq_length: int#
None
- lr: float#
None
- min_lr: float#
None
- lr_warmup_iters: int#
None
- lr_decay_iters: Optional[int]#
None
- eval_interval: int#
None
- save_interval: int#
None
- use_null_tokenizer: bool#
None
- precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]]#
None
- comm_overlap_config: Optional[megatron.bridge.training.comm_overlap.CommOverlapConfig]#
None
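A minimal usage sketch: the recipe helpers below accept these options as keyword arguments, so they can be collected in a dict and unpacked. This assumes the module is importable as `megatron.bridge.recipes.qwen.qwen2` (the `bridge.` prefix above corresponding to the `megatron.bridge` package) and that the TypedDict is declared with `total=False` so a partial dict type-checks; all values are illustrative.

```python
# Minimal sketch: collect recipe overrides in a Qwen2CommonKwargs dict and
# unpack them into a pretrain helper. Import path and total=False are
# assumptions; all values are placeholders.
from megatron.bridge.recipes.qwen.qwen2 import Qwen2CommonKwargs, qwen2_7b_pretrain_config

overrides: Qwen2CommonKwargs = {
    "dir": "/results/qwen2_7b",        # base directory for logs and checkpoints
    "name": "qwen2_7b_smoke_test",     # run name
    "mock": True,                      # use mock data; no data_paths needed
    "train_iters": 100,
    "global_batch_size": 32,
    "micro_batch_size": 2,
    "seq_length": 4096,
    "tensor_model_parallel_size": 2,
}

cfg = qwen2_7b_pretrain_config(**overrides)
```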
- bridge.recipes.qwen.qwen2.qwen2_500m_pretrain_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2CommonKwargs])
Return a pre-training config for Qwen2 0.5B.
See _qwen2_common for the full list of parameters.
- bridge.recipes.qwen.qwen2.qwen2_1p5b_pretrain_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2CommonKwargs])
Return a pre-training config for Qwen2 1.5B.
See _qwen2_common for the full list of parameters.
- bridge.recipes.qwen.qwen2.qwen2_7b_pretrain_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2CommonKwargs])
Return a pre-training config for Qwen2 7B.
See _qwen2_common for the full list of parameters.
- bridge.recipes.qwen.qwen2.qwen2_72b_pretrain_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2CommonKwargs])
Return a pre-training config for Qwen2 72B.
See _qwen2_common for the full list of parameters.
- bridge.recipes.qwen.qwen2.qwen25_500m_pretrain_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2CommonKwargs])
Return a pre-training config for Qwen2.5 0.5B.
See _qwen2_common for the full list of parameters.
- bridge.recipes.qwen.qwen2.qwen25_1p5b_pretrain_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2CommonKwargs])
Return a pre-training config for Qwen2.5 1.5B.
See _qwen2_common for the full list of parameters.
- bridge.recipes.qwen.qwen2.qwen25_7b_pretrain_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2CommonKwargs])
Return a pre-training config for Qwen2.5 7B.
See _qwen2_common for the full list of parameters.
- bridge.recipes.qwen.qwen2.qwen25_14b_pretrain_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2CommonKwargs])
Return a pre-training config for Qwen2.5 14B.
See _qwen2_common for the full list of parameters.
- bridge.recipes.qwen.qwen2.qwen25_32b_pretrain_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2CommonKwargs])
Return a pre-training config for Qwen2.5 32B.
See _qwen2_common for the full list of parameters.
- bridge.recipes.qwen.qwen2.qwen25_72b_pretrain_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2CommonKwargs])
Return a pre-training config for Qwen2.5 72B.
See _qwen2_common for the full list of parameters.
- bridge.recipes.qwen.qwen2._qwen2_common(
- hf_path: str,
- dir: Optional[str] = None,
- name: str = 'default',
- data_paths: Optional[List[str]] = None,
- data_args_path: Optional[str] = None,
- train_data_path: Optional[List[str]] = None,
- valid_data_path: Optional[List[str]] = None,
- test_data_path: Optional[List[str]] = None,
- per_split_data_args_path: Optional[str] = None,
- mock: bool = False,
- tensor_model_parallel_size: int = 1,
- pipeline_model_parallel_size: int = 1,
- pipeline_dtype: Optional[torch.dtype] = None,
- virtual_pipeline_model_parallel_size: Optional[int] = None,
- context_parallel_size: int = 1,
- sequence_parallel: bool = False,
- use_megatron_fsdp: bool = False,
- check_for_nan_in_grad: bool = False,
- train_iters: int = 300000,
- global_batch_size: int = 32,
- micro_batch_size: int = 2,
- seq_length: int = 4096,
- lr: float = 0.0003,
- min_lr: float = 3e-05,
- lr_warmup_iters: int = 500,
- lr_decay_iters: Optional[int] = None,
- eval_interval: int = 500,
- save_interval: int = 500,
- use_null_tokenizer: bool = True,
- precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]] = 'bf16_mixed',
- comm_overlap_config: Optional[megatron.bridge.training.comm_overlap.CommOverlapConfig] = None,
- )
Create a pre-training configuration for Qwen2/Qwen2.5 models using a given HuggingFace path.
- Parameters:
hf_path (str) – HuggingFace model path (e.g., “Qwen/Qwen2-1.5B”, “Qwen/Qwen2.5-7B”).
dir (Optional[str]) – Base directory for saving logs and checkpoints.
name (str) – Name of the pre-training run.
data_paths (Optional[List[str]]) – List of paths to dataset files. If None, mock data will be used.
data_args_path (Optional[str]) – Path to file containing data arguments.
train_data_path (Optional[List[str]]) – List of training data paths.
valid_data_path (Optional[List[str]]) – List of validation data paths.
test_data_path (Optional[List[str]]) – List of test data paths.
per_split_data_args_path (Optional[str]) – Path to JSON file with per-split data configuration.
mock (bool) – Whether to use mock data. If True, ignores data_paths.
tensor_model_parallel_size (int) – Degree of tensor model parallelism.
pipeline_model_parallel_size (int) – Degree of pipeline model parallelism.
pipeline_dtype (Optional[torch.dtype]) – Data type for pipeline parallelism.
virtual_pipeline_model_parallel_size (Optional[int]) – Size of virtual pipeline parallelism.
context_parallel_size (int) – Degree of context parallelism to be passed to model_config.
sequence_parallel (bool) – Whether to use sequence parallelism.
use_megatron_fsdp (bool) – Whether to use Megatron FSDP.
check_for_nan_in_grad (bool) – Whether to check for NaN in gradients.
train_iters (int) – Total number of training iterations.
global_batch_size (int) – Global batch size for training.
micro_batch_size (int) – Micro batch size for training.
seq_length (int) – Sequence length for training data.
lr (float) – Learning rate.
min_lr (float) – Minimum learning rate for cosine decay.
lr_warmup_iters (int) – Number of warmup iterations for the learning rate.
lr_decay_iters (Optional[int]) – Number of iterations over which to decay the LR.
precision_config (Optional[Union[MixedPrecisionConfig, str]]) – Precision configuration for the model.
comm_overlap_config (Optional[CommOverlapConfig]) – Communication overlap configuration.
- Returns:
Configuration for pre-training.
- Return type:
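The data and parallelism parameters documented above are normally supplied through one of the public per-size helpers, which forward them to `_qwen2_common`. A hedged sketch with placeholder dataset paths (import path assumed as in the earlier example):

```python
# Sketch: pass per-split data paths and parallelism overrides through a public
# helper that forwards them to _qwen2_common. Dataset paths are placeholders.
from megatron.bridge.recipes.qwen.qwen2 import qwen25_7b_pretrain_config

cfg = qwen25_7b_pretrain_config(
    dir="/results/qwen25_7b",
    name="qwen25_7b_pretrain",
    mock=False,                                       # use real data, not mock
    train_data_path=["/data/train_text_document"],    # placeholder paths
    valid_data_path=["/data/valid_text_document"],
    test_data_path=["/data/test_text_document"],
    tensor_model_parallel_size=2,
    sequence_parallel=True,
    train_iters=300000,
    lr=3e-4,
    min_lr=3e-5,
    lr_warmup_iters=500,
)
```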
- class bridge.recipes.qwen.qwen2.Qwen2FinetuneKwargs#
Bases: typing_extensions.TypedDict
Typed options accepted by Qwen2/2.5 finetuning recipe helper functions.
Initialization
Initialize self. See help(type(self)) for accurate signature.
- hf_path: str#
None
- dir: Optional[str]#
None
- name: str#
None
- pretrained_checkpoint: Optional[str]#
None
- peft: Union[str, megatron.bridge.peft.base.PEFT, None]#
None
- packed_sequence: bool#
None
- train_iters: int#
None
- global_batch_size: Optional[int]#
None
- micro_batch_size: int#
None
- seq_length: Optional[int]#
None
- eval_interval: int#
None
- save_interval: int#
None
- finetune_lr: Optional[float]#
None
- min_lr: float#
None
- lr_warmup_iters: int#
None
- lr_decay_iters: Optional[int]#
None
- wandb_project: Optional[str]#
None
- wandb_entity: Optional[str]#
None
- wandb_exp_name: Optional[str]#
None
- precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]]#
None
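A sketch of reusing one set of these finetuning options across the per-size helpers below. The import path, the `total=False` assumption for the TypedDict, and the checkpoint paths and W&B names are all illustrative, not part of the documented API surface.

```python
# Sketch: share one Qwen2FinetuneKwargs dict across model sizes; per-size
# checkpoints are passed separately. All paths and W&B names are placeholders.
from megatron.bridge.recipes.qwen.qwen2 import (
    Qwen2FinetuneKwargs,
    qwen25_7b_finetune_config,
    qwen25_14b_finetune_config,
)

common: Qwen2FinetuneKwargs = {
    "peft": "lora",
    "train_iters": 100,
    "eval_interval": 50,
    "save_interval": 100,
    "wandb_project": "qwen25-finetune",   # placeholder W&B project
    "wandb_exp_name": "lora-baseline",    # placeholder experiment name
}

cfg_7b = qwen25_7b_finetune_config(pretrained_checkpoint="/ckpts/qwen25_7b", **common)
cfg_14b = qwen25_14b_finetune_config(pretrained_checkpoint="/ckpts/qwen25_14b", **common)
```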
- bridge.recipes.qwen.qwen2.qwen2_500m_finetune_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2FinetuneKwargs])
Return a finetuning config for Qwen2 500M.
Default configuration: 1 node, 8 GPUs
LoRA/DoRA: TP=1, PP=1, LR=1e-4
Full SFT: TP=1, PP=1, LR=5e-6
- bridge.recipes.qwen.qwen2.qwen2_1p5b_finetune_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2FinetuneKwargs])
Return a finetuning config for Qwen2 1.5B.
Default configuration: 1 node, 8 GPUs
LoRA/DoRA: TP=1, PP=1, LR=1e-4
Full SFT: TP=1, PP=1, LR=5e-6
- bridge.recipes.qwen.qwen2.qwen2_7b_finetune_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2FinetuneKwargs])
Return a finetuning config for Qwen2 7B.
Default configuration: 1 node, 8 GPUs
LoRA/DoRA: TP=1, PP=1, LR=1e-4
Full SFT: TP=2, PP=1, LR=5e-6
- bridge.recipes.qwen.qwen2.qwen2_72b_finetune_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2FinetuneKwargs])
Return a finetuning config for Qwen2 72B.
Default configuration: 4 nodes (SFT) or 1 node (LoRA), 8 GPUs per node
LoRA/DoRA: TP=8, PP=1, LR=1e-4
Full SFT: TP=8, PP=4, LR=5e-6
- bridge.recipes.qwen.qwen2.qwen25_500m_finetune_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2FinetuneKwargs])
Return a finetuning config for Qwen2.5 500M.
Default configuration: 1 node, 8 GPUs
LoRA/DoRA: TP=1, PP=1, LR=1e-4
Full SFT: TP=1, PP=1, LR=5e-6
- bridge.recipes.qwen.qwen2.qwen25_1p5b_finetune_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2FinetuneKwargs])
Return a finetuning config for Qwen2.5 1.5B.
Default configuration: 1 node, 8 GPUs
LoRA/DoRA: TP=1, PP=1, LR=1e-4
Full SFT: TP=1, PP=1, LR=5e-6
- bridge.recipes.qwen.qwen2.qwen25_7b_finetune_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2FinetuneKwargs])
Return a finetuning config for Qwen2.5 7B.
Default configuration: 1 node, 8 GPUs
LoRA/DoRA: TP=1, PP=1, LR=1e-4
Full SFT: TP=2, PP=1, LR=5e-6
- bridge.recipes.qwen.qwen2.qwen25_14b_finetune_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2FinetuneKwargs])
Return a finetuning config for Qwen2.5 14B.
Default configuration: 1 node, 8 GPUs
LoRA/DoRA: TP=1, PP=1, LR=1e-4
Full SFT: TP=4, PP=1, LR=5e-6
- bridge.recipes.qwen.qwen2.qwen25_32b_finetune_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2FinetuneKwargs])
Return a finetuning config for Qwen2.5 32B.
Default configuration: 2 nodes (SFT) or 1 node (LoRA), 8 GPUs per node
LoRA/DoRA: TP=8, PP=1, LR=1e-4
Full SFT: TP=8, PP=2, LR=5e-6
- bridge.recipes.qwen.qwen2.qwen25_72b_finetune_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2FinetuneKwargs])
Return a finetuning config for Qwen2.5 72B.
Default configuration: 4 nodes (SFT) or 1 node (LoRA), 8 GPUs per node
LoRA/DoRA: TP=8, PP=1, LR=1e-4
Full SFT: TP=8, PP=4, LR=5e-6
- bridge.recipes.qwen.qwen2._qwen2_finetune_common(
- hf_path: str,
- dir: Optional[str] = None,
- name: str = 'default',
- tensor_model_parallel_size: int = 1,
- pipeline_model_parallel_size: int = 1,
- pipeline_dtype: Optional[torch.dtype] = None,
- virtual_pipeline_model_parallel_size: Optional[int] = None,
- context_parallel_size: int = 1,
- sequence_parallel: bool = False,
- pretrained_checkpoint: Optional[str] = None,
- peft: Union[str, megatron.bridge.peft.base.PEFT, None] = 'lora',
- packed_sequence: bool = False,
- train_iters: int = 100,
- global_batch_size: Optional[int] = None,
- micro_batch_size: int = 1,
- seq_length: Optional[int] = None,
- eval_interval: int = 50,
- save_interval: int = 100,
- finetune_lr: Optional[float] = None,
- min_lr: float = 0.0,
- lr_warmup_iters: int = 10,
- lr_decay_iters: Optional[int] = None,
- wandb_project: Optional[str] = None,
- wandb_entity: Optional[str] = None,
- wandb_exp_name: Optional[str] = None,
- precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]] = None,
- )
Common finetuning configuration for all Qwen2/2.5 models.
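In the signature above, `peft` defaults to `'lora'`. A hedged sketch of switching between LoRA and full SFT through one of the public helpers, assuming `peft=None` selects full supervised finetuning (consistent with the `Optional` typing, but not stated explicitly here); checkpoint paths are placeholders, and the learning rates match the per-size defaults listed earlier.

```python
# Sketch: LoRA vs. full SFT via the `peft` option. Treating peft=None as full
# SFT is an assumption based on the Optional typing; paths are placeholders.
from megatron.bridge.recipes.qwen.qwen2 import qwen2_500m_finetune_config

# LoRA finetuning (per-size default: TP=1, PP=1, LR=1e-4).
lora_cfg = qwen2_500m_finetune_config(
    pretrained_checkpoint="/ckpts/qwen2_500m",
    peft="lora",
)

# Full SFT (per-size default: TP=1, PP=1, LR=5e-6); the LR can be set
# explicitly via finetune_lr if a different value is wanted.
sft_cfg = qwen2_500m_finetune_config(
    pretrained_checkpoint="/ckpts/qwen2_500m",
    peft=None,
    finetune_lr=5e-6,
)
```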