bridge.recipes.qwen.qwen2#
Module Contents#
Functions#
| Function | Description |
|---|---|
| `qwen2_500m_pretrain_config` | Return a pre-training config for Qwen2 0.5B. |
| `qwen2_1p5b_pretrain_config` | Return a pre-training config for Qwen2 1.5B. |
| `qwen2_7b_pretrain_config` | Return a pre-training config for Qwen2 7B. |
| `qwen2_72b_pretrain_config` | Return a pre-training config for Qwen2 72B. |
| `qwen25_500m_pretrain_config` | Return a pre-training config for Qwen2.5 0.5B. |
| `qwen25_1p5b_pretrain_config` | Return a pre-training config for Qwen2.5 1.5B. |
| `qwen25_7b_pretrain_config` | Return a pre-training config for Qwen2.5 7B. |
| `qwen25_14b_pretrain_config` | Return a pre-training config for Qwen2.5 14B. |
| `qwen25_32b_pretrain_config` | Return a pre-training config for Qwen2.5 32B. |
| `qwen25_72b_pretrain_config` | Return a pre-training config for Qwen2.5 72B. |
| `qwen2_500m_sft_config` | Return a full SFT config for Qwen2 500M. |
| `qwen2_1p5b_sft_config` | Return a full SFT config for Qwen2 1.5B. |
| `qwen2_7b_sft_config` | Return a full SFT config for Qwen2 7B. |
| `qwen2_72b_sft_config` | Return a full SFT config for Qwen2 72B. |
| `qwen25_500m_sft_config` | Return a full SFT config for Qwen2.5 500M. |
| `qwen25_1p5b_sft_config` | Return a full SFT config for Qwen2.5 1.5B. |
| `qwen25_7b_sft_config` | Return a full SFT config for Qwen2.5 7B. |
| `qwen25_14b_sft_config` | Return a full SFT config for Qwen2.5 14B. |
| `qwen25_32b_sft_config` | Return a full SFT config for Qwen2.5 32B. |
| `qwen25_72b_sft_config` | Return a full SFT config for Qwen2.5 72B. |
| `qwen2_500m_peft_config` | Return a PEFT config for Qwen2 500M. |
| `qwen2_1p5b_peft_config` | Return a PEFT config for Qwen2 1.5B. |
| `qwen2_7b_peft_config` | Return a PEFT config for Qwen2 7B. |
| `qwen2_72b_peft_config` | Return a PEFT config for Qwen2 72B. |
| `qwen25_500m_peft_config` | Return a PEFT config for Qwen2.5 500M. |
| `qwen25_1p5b_peft_config` | Return a PEFT config for Qwen2.5 1.5B. |
| `qwen25_7b_peft_config` | Return a PEFT config for Qwen2.5 7B. |
| `qwen25_14b_peft_config` | Return a PEFT config for Qwen2.5 14B. |
| `qwen25_32b_peft_config` | Return a PEFT config for Qwen2.5 32B. |
| `qwen25_72b_peft_config` | Return a PEFT config for Qwen2.5 72B. |
API#
- bridge.recipes.qwen.qwen2.qwen2_500m_pretrain_config() → megatron.bridge.training.config.ConfigContainer#
Return a pre-training config for Qwen2 0.5B.
Recommended parallelism: TP=1, PP=1 (fits on a single GPU).
- bridge.recipes.qwen.qwen2.qwen2_1p5b_pretrain_config() → megatron.bridge.training.config.ConfigContainer#
Return a pre-training config for Qwen2 1.5B.
Recommended parallelism: TP=1, PP=1 (fits on a single GPU).
- bridge.recipes.qwen.qwen2.qwen2_7b_pretrain_config() → megatron.bridge.training.config.ConfigContainer#
Return a pre-training config for Qwen2 7B.
Recommended parallelism: TP=2, PP=1.
- bridge.recipes.qwen.qwen2.qwen2_72b_pretrain_config() → megatron.bridge.training.config.ConfigContainer#
Return a pre-training config for Qwen2 72B.
Recommended parallelism: TP=8, PP=4.
- bridge.recipes.qwen.qwen2.qwen25_500m_pretrain_config() → megatron.bridge.training.config.ConfigContainer#
Return a pre-training config for Qwen2.5 0.5B.
Recommended parallelism: TP=1, PP=1 (fits on a single GPU).
- bridge.recipes.qwen.qwen2.qwen25_1p5b_pretrain_config() → megatron.bridge.training.config.ConfigContainer#
Return a pre-training config for Qwen2.5 1.5B.
Recommended parallelism: TP=1, PP=1 (fits on a single GPU).
- bridge.recipes.qwen.qwen2.qwen25_7b_pretrain_config() → megatron.bridge.training.config.ConfigContainer#
Return a pre-training config for Qwen2.5 7B.
Recommended parallelism: TP=2, PP=1.
- bridge.recipes.qwen.qwen2.qwen25_14b_pretrain_config() → megatron.bridge.training.config.ConfigContainer#
Return a pre-training config for Qwen2.5 14B.
Recommended parallelism: TP=4, PP=1.
- bridge.recipes.qwen.qwen2.qwen25_32b_pretrain_config() → megatron.bridge.training.config.ConfigContainer#
Return a pre-training config for Qwen2.5 32B.
Recommended parallelism: TP=8, PP=2.
- bridge.recipes.qwen.qwen2.qwen25_72b_pretrain_config() → megatron.bridge.training.config.ConfigContainer#
Return a pre-training config for Qwen2.5 72B.
Recommended parallelism: TP=8, PP=4.
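Each recipe returns a fully populated `ConfigContainer` that can be adjusted before launch. A minimal usage sketch (the attribute paths `cfg.model.*` below are assumptions about the `ConfigContainer` layout, not confirmed by this page):

```python
# Hedged sketch: build the Qwen2.5 7B pre-training recipe and apply the
# recommended parallelism (TP=2, PP=1) before handing the config to a trainer.
# The cfg.model.* attribute names are assumptions, not confirmed by this page.
from megatron.bridge.recipes.qwen.qwen2 import qwen25_7b_pretrain_config

cfg = qwen25_7b_pretrain_config()
cfg.model.tensor_model_parallel_size = 2
cfg.model.pipeline_model_parallel_size = 1
```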
- bridge.recipes.qwen.qwen2.qwen2_500m_sft_config() → megatron.bridge.training.config.ConfigContainer#
Return a full SFT config for Qwen2 500M.
Recommended parallelism: TP=1, PP=1 (1 node, 8 GPUs).
- bridge.recipes.qwen.qwen2.qwen2_1p5b_sft_config() → megatron.bridge.training.config.ConfigContainer#
Return a full SFT config for Qwen2 1.5B.
Recommended parallelism: TP=1, PP=1 (1 node, 8 GPUs).
- bridge.recipes.qwen.qwen2.qwen2_7b_sft_config() → megatron.bridge.training.config.ConfigContainer#
Return a full SFT config for Qwen2 7B.
Recommended parallelism: TP=2, PP=1 (1 node, 8 GPUs).
- bridge.recipes.qwen.qwen2.qwen2_72b_sft_config() → megatron.bridge.training.config.ConfigContainer#
Return a full SFT config for Qwen2 72B.
Recommended parallelism: TP=8, PP=4 (4 nodes, 32 GPUs total).
- bridge.recipes.qwen.qwen2.qwen25_500m_sft_config() → megatron.bridge.training.config.ConfigContainer#
Return a full SFT config for Qwen2.5 500M.
Recommended parallelism: TP=1, PP=1 (1 node, 8 GPUs).
- bridge.recipes.qwen.qwen2.qwen25_1p5b_sft_config() → megatron.bridge.training.config.ConfigContainer#
Return a full SFT config for Qwen2.5 1.5B.
Recommended parallelism: TP=1, PP=1 (1 node, 8 GPUs).
- bridge.recipes.qwen.qwen2.qwen25_7b_sft_config() → megatron.bridge.training.config.ConfigContainer#
Return a full SFT config for Qwen2.5 7B.
Recommended parallelism: TP=2, PP=1 (1 node, 8 GPUs).
- bridge.recipes.qwen.qwen2.qwen25_14b_sft_config() → megatron.bridge.training.config.ConfigContainer#
Return a full SFT config for Qwen2.5 14B.
Recommended parallelism: TP=4, PP=1 (1 node, 8 GPUs).
- bridge.recipes.qwen.qwen2.qwen25_32b_sft_config() → megatron.bridge.training.config.ConfigContainer#
Return a full SFT config for Qwen2.5 32B.
Recommended parallelism: TP=8, PP=2 (2 nodes, 16 GPUs total).
- bridge.recipes.qwen.qwen2.qwen25_72b_sft_config() → megatron.bridge.training.config.ConfigContainer#
Return a full SFT config for Qwen2.5 72B.
Recommended parallelism: TP=8, PP=4 (4 nodes, 32 GPUs total).
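The node/GPU counts in the recommendations above follow directly from the product of the parallel sizes: a model replica spans TP × PP GPUs, multiplied by the data-parallel size. A self-contained sketch (the `ParallelismPlan` class is an illustrative stand-in, not part of `megatron.bridge`):

```python
from dataclasses import dataclass


# Illustrative stand-in (not a megatron.bridge class) showing how the
# "recommended parallelism" notes above translate into total GPU counts.
@dataclass
class ParallelismPlan:
    tensor_model_parallel_size: int = 1
    pipeline_model_parallel_size: int = 1
    data_parallel_size: int = 1

    def gpus_required(self) -> int:
        # Total GPUs = TP x PP x DP.
        return (self.tensor_model_parallel_size
                * self.pipeline_model_parallel_size
                * self.data_parallel_size)


# Qwen2.5 72B SFT recommendation: TP=8, PP=4 -> 32 GPUs (4 nodes of 8).
plan = ParallelismPlan(tensor_model_parallel_size=8,
                       pipeline_model_parallel_size=4)
print(plan.gpus_required())  # 32
```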
- bridge.recipes.qwen.qwen2.qwen2_500m_peft_config(peft_scheme: str | megatron.bridge.peft.base.PEFT = 'lora') → megatron.bridge.training.config.ConfigContainer#
Return a PEFT config for Qwen2 500M.
- Parameters:
peft_scheme – PEFT scheme: 'lora', 'dora', or a PEFT instance. Default: 'lora'.
Recommended parallelism: TP=1, PP=1 (1 node, 8 GPUs).
- bridge.recipes.qwen.qwen2.qwen2_1p5b_peft_config(peft_scheme: str | megatron.bridge.peft.base.PEFT = 'lora') → megatron.bridge.training.config.ConfigContainer#
Return a PEFT config for Qwen2 1.5B.
- Parameters:
peft_scheme – PEFT scheme: 'lora', 'dora', or a PEFT instance. Default: 'lora'.
Recommended parallelism: TP=1, PP=1 (1 node, 8 GPUs).
- bridge.recipes.qwen.qwen2.qwen2_7b_peft_config(peft_scheme: str | megatron.bridge.peft.base.PEFT = 'lora') → megatron.bridge.training.config.ConfigContainer#
Return a PEFT config for Qwen2 7B.
- Parameters:
peft_scheme – PEFT scheme: 'lora', 'dora', or a PEFT instance. Default: 'lora'.
Recommended parallelism: TP=1, PP=1 (1 node, 8 GPUs).
- bridge.recipes.qwen.qwen2.qwen2_72b_peft_config(peft_scheme: str | megatron.bridge.peft.base.PEFT = 'lora') → megatron.bridge.training.config.ConfigContainer#
Return a PEFT config for Qwen2 72B.
- Parameters:
peft_scheme – PEFT scheme: 'lora', 'dora', or a PEFT instance. Default: 'lora'.
Recommended parallelism: TP=8, PP=1 (1 node, 8 GPUs).
- bridge.recipes.qwen.qwen2.qwen25_500m_peft_config(peft_scheme: str | megatron.bridge.peft.base.PEFT = 'lora') → megatron.bridge.training.config.ConfigContainer#
Return a PEFT config for Qwen2.5 500M.
- Parameters:
peft_scheme – PEFT scheme: 'lora', 'dora', or a PEFT instance. Default: 'lora'.
Recommended parallelism: TP=1, PP=1 (1 node, 8 GPUs).
- bridge.recipes.qwen.qwen2.qwen25_1p5b_peft_config(peft_scheme: str | megatron.bridge.peft.base.PEFT = 'lora') → megatron.bridge.training.config.ConfigContainer#
Return a PEFT config for Qwen2.5 1.5B.
- Parameters:
peft_scheme – PEFT scheme: 'lora', 'dora', or a PEFT instance. Default: 'lora'.
Recommended parallelism: TP=1, PP=1 (1 node, 8 GPUs).
- bridge.recipes.qwen.qwen2.qwen25_7b_peft_config(peft_scheme: str | megatron.bridge.peft.base.PEFT = 'lora') → megatron.bridge.training.config.ConfigContainer#
Return a PEFT config for Qwen2.5 7B.
- Parameters:
peft_scheme – PEFT scheme: 'lora', 'dora', or a PEFT instance. Default: 'lora'.
Recommended parallelism: TP=1, PP=1 (1 node, 8 GPUs).
- bridge.recipes.qwen.qwen2.qwen25_14b_peft_config(peft_scheme: str | megatron.bridge.peft.base.PEFT = 'lora') → megatron.bridge.training.config.ConfigContainer#
Return a PEFT config for Qwen2.5 14B.
- Parameters:
peft_scheme – PEFT scheme: 'lora', 'dora', or a PEFT instance. Default: 'lora'.
Recommended parallelism: TP=1, PP=1 (1 node, 8 GPUs).
- bridge.recipes.qwen.qwen2.qwen25_32b_peft_config(peft_scheme: str | megatron.bridge.peft.base.PEFT = 'lora') → megatron.bridge.training.config.ConfigContainer#
Return a PEFT config for Qwen2.5 32B.
- Parameters:
peft_scheme – PEFT scheme: 'lora', 'dora', or a PEFT instance. Default: 'lora'.
Recommended parallelism: TP=8, PP=1 (1 node, 8 GPUs).
- bridge.recipes.qwen.qwen2.qwen25_72b_peft_config(peft_scheme: str | megatron.bridge.peft.base.PEFT = 'lora') → megatron.bridge.training.config.ConfigContainer#
Return a PEFT config for Qwen2.5 72B.
- Parameters:
peft_scheme – PEFT scheme: 'lora', 'dora', or a PEFT instance. Default: 'lora'.
Recommended parallelism: TP=8, PP=1 (1 node, 8 GPUs).