bridge.recipes.qwen.qwen2#

Module Contents#

Classes#

Qwen2CommonKwargs

Typed options accepted by Qwen2/2.5 recipe helper functions.

Functions#

qwen2_500m_pretrain_config

Return a pre-training config for Qwen2 0.5B.

qwen2_1p5b_pretrain_config

Return a pre-training config for Qwen2 1.5B.

qwen2_7b_pretrain_config

Return a pre-training config for Qwen2 7B.

qwen2_72b_pretrain_config

Return a pre-training config for Qwen2 72B.

qwen25_500m_pretrain_config

Return a pre-training config for Qwen2.5 0.5B.

qwen25_1p5b_pretrain_config

Return a pre-training config for Qwen2.5 1.5B.

qwen25_7b_pretrain_config

Return a pre-training config for Qwen2.5 7B.

qwen25_14b_pretrain_config

Return a pre-training config for Qwen2.5 14B.

qwen25_32b_pretrain_config

Return a pre-training config for Qwen2.5 32B.

qwen25_72b_pretrain_config

Return a pre-training config for Qwen2.5 72B.

_qwen2_common

Create a pre-training configuration for Qwen2/Qwen2.5 models using a given HuggingFace path.

API#

class bridge.recipes.qwen.qwen2.Qwen2CommonKwargs#

Bases: typing_extensions.TypedDict

Typed options accepted by Qwen2/2.5 recipe helper functions.

hf_path: str#

dir: Optional[str]#

name: str#

data_paths: Optional[List[str]]#

data_args_path: Optional[str]#

train_data_path: Optional[List[str]]#

valid_data_path: Optional[List[str]]#

test_data_path: Optional[List[str]]#

per_split_data_args_path: Optional[str]#

mock: bool#

tensor_parallelism: int#

pipeline_parallelism: int#

pipeline_parallelism_dtype: Optional[torch.dtype]#

virtual_pipeline_parallelism: Optional[int]#

context_parallelism: int#

sequence_parallelism: bool#

use_megatron_fsdp: bool#

check_for_nan_in_grad: bool#

train_iters: int#

global_batch_size: int#

micro_batch_size: int#

seq_length: int#

lr: float#

min_lr: float#

lr_warmup_iters: int#

lr_decay_iters: Optional[int]#

eval_interval: int#

save_interval: int#

use_null_tokenizer: bool#

precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]]#

comm_overlap_config: Optional[megatron.bridge.training.comm_overlap.CommOverlapConfig]#
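
All keys are optional keyword overrides forwarded to _qwen2_common, which supplies the defaults documented below. A minimal usage sketch (override values are illustrative, not tuned recommendations; the import path assumes the fully qualified megatron.bridge package):

```python
from megatron.bridge.recipes.qwen.qwen2 import qwen25_7b_pretrain_config

# Keyword overrides are typed by Qwen2CommonKwargs; omit a key to keep
# the default documented under _qwen2_common. Values here are illustrative.
config = qwen25_7b_pretrain_config(
    name="qwen25_7b_demo",
    mock=True,                # mock data instead of real dataset paths
    tensor_parallelism=2,
    sequence_parallelism=True,
    train_iters=1_000,
)
```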

bridge.recipes.qwen.qwen2.qwen2_500m_pretrain_config(
**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2CommonKwargs],
) → megatron.bridge.training.config.ConfigContainer#

Return a pre-training config for Qwen2 0.5B.

See _qwen2_common for the full list of parameters.
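
A sketch wiring this recipe to a dataset on disk (the paths are placeholders; substitute real preprocessed corpus prefixes):

```python
from megatron.bridge.recipes.qwen.qwen2 import qwen2_500m_pretrain_config

# Placeholder dataset prefix; pass real preprocessed data paths in practice.
config = qwen2_500m_pretrain_config(
    dir="/results/qwen2_500m",                      # logs and checkpoints
    data_paths=["/data/my_corpus_text_document"],   # blended dataset paths
)
```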

bridge.recipes.qwen.qwen2.qwen2_1p5b_pretrain_config(
**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2CommonKwargs],
) → megatron.bridge.training.config.ConfigContainer#

Return a pre-training config for Qwen2 1.5B.

See _qwen2_common for the full list of parameters.

bridge.recipes.qwen.qwen2.qwen2_7b_pretrain_config(
**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2CommonKwargs],
) → megatron.bridge.training.config.ConfigContainer#

Return a pre-training config for Qwen2 7B.

See _qwen2_common for the full list of parameters.

bridge.recipes.qwen.qwen2.qwen2_72b_pretrain_config(
**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2CommonKwargs],
) → megatron.bridge.training.config.ConfigContainer#

Return a pre-training config for Qwen2 72B.

See _qwen2_common for the full list of parameters.

bridge.recipes.qwen.qwen2.qwen25_500m_pretrain_config(
**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2CommonKwargs],
) → megatron.bridge.training.config.ConfigContainer#

Return a pre-training config for Qwen2.5 0.5B.

See _qwen2_common for the full list of parameters.

bridge.recipes.qwen.qwen2.qwen25_1p5b_pretrain_config(
**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2CommonKwargs],
) → megatron.bridge.training.config.ConfigContainer#

Return a pre-training config for Qwen2.5 1.5B.

See _qwen2_common for the full list of parameters.

bridge.recipes.qwen.qwen2.qwen25_7b_pretrain_config(
**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2CommonKwargs],
) → megatron.bridge.training.config.ConfigContainer#

Return a pre-training config for Qwen2.5 7B.

See _qwen2_common for the full list of parameters.

bridge.recipes.qwen.qwen2.qwen25_14b_pretrain_config(
**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2CommonKwargs],
) → megatron.bridge.training.config.ConfigContainer#

Return a pre-training config for Qwen2.5 14B.

See _qwen2_common for the full list of parameters.

bridge.recipes.qwen.qwen2.qwen25_32b_pretrain_config(
**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2CommonKwargs],
) → megatron.bridge.training.config.ConfigContainer#

Return a pre-training config for Qwen2.5 32B.

See _qwen2_common for the full list of parameters.

bridge.recipes.qwen.qwen2.qwen25_72b_pretrain_config(
**user_kwargs: typing_extensions.Unpack[bridge.recipes.qwen.qwen2.Qwen2CommonKwargs],
) → megatron.bridge.training.config.ConfigContainer#

Return a pre-training config for Qwen2.5 72B.

See _qwen2_common for the full list of parameters.
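
Because every size variant shares the same keyword signature, a recipe can be selected at runtime with a simple lookup; a hypothetical convenience mapping:

```python
from megatron.bridge.recipes.qwen.qwen2 import (
    qwen25_1p5b_pretrain_config,
    qwen25_7b_pretrain_config,
    qwen25_72b_pretrain_config,
)

# Hypothetical mapping; every variant accepts the same Qwen2CommonKwargs keys.
QWEN25_RECIPES = {
    "1.5b": qwen25_1p5b_pretrain_config,
    "7b": qwen25_7b_pretrain_config,
    "72b": qwen25_72b_pretrain_config,
}

config = QWEN25_RECIPES["7b"](mock=True, train_iters=100)
```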

bridge.recipes.qwen.qwen2._qwen2_common(
hf_path: str,
dir: Optional[str] = None,
name: str = 'default',
data_paths: Optional[List[str]] = None,
data_args_path: Optional[str] = None,
train_data_path: Optional[List[str]] = None,
valid_data_path: Optional[List[str]] = None,
test_data_path: Optional[List[str]] = None,
per_split_data_args_path: Optional[str] = None,
mock: bool = False,
tensor_parallelism: int = 1,
pipeline_parallelism: int = 1,
pipeline_parallelism_dtype: Optional[torch.dtype] = None,
virtual_pipeline_parallelism: Optional[int] = None,
context_parallelism: int = 1,
sequence_parallelism: bool = False,
use_megatron_fsdp: bool = False,
check_for_nan_in_grad: bool = False,
train_iters: int = 300000,
global_batch_size: int = 32,
micro_batch_size: int = 2,
seq_length: int = 4096,
lr: float = 0.0003,
min_lr: float = 3e-05,
lr_warmup_iters: int = 500,
lr_decay_iters: Optional[int] = None,
eval_interval: int = 500,
save_interval: int = 500,
use_null_tokenizer: bool = True,
precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]] = 'bf16_mixed',
comm_overlap_config: Optional[megatron.bridge.training.comm_overlap.CommOverlapConfig] = None,
) → megatron.bridge.training.config.ConfigContainer#

Create a pre-training configuration for Qwen2/Qwen2.5 models using a given HuggingFace path.

Parameters:
  • hf_path (str) – HuggingFace model path (e.g., "Qwen/Qwen2-1.5B", "Qwen/Qwen2.5-7B").

  • dir (Optional[str]) – Base directory for saving logs and checkpoints.

  • name (str) – Name of the pre-training run.

  • data_paths (Optional[List[str]]) – List of paths to dataset files. If None, mock data will be used.

  • data_args_path (Optional[str]) – Path to file containing data arguments.

  • train_data_path (Optional[List[str]]) – List of training data paths.

  • valid_data_path (Optional[List[str]]) – List of validation data paths.

  • test_data_path (Optional[List[str]]) – List of test data paths.

  • per_split_data_args_path (Optional[str]) – Path to JSON file with per-split data configuration.

  • mock (bool) – Whether to use mock data. If True, ignores data_paths.

  • tensor_parallelism (int) – Degree of tensor model parallelism.

  • pipeline_parallelism (int) – Degree of pipeline model parallelism.

  • pipeline_parallelism_dtype (Optional[torch.dtype]) – Data type for pipeline parallelism.

  • virtual_pipeline_parallelism (Optional[int]) – Size of virtual pipeline parallelism.

  • context_parallelism (int) – Degree of context parallelism to be passed to model_config.

  • sequence_parallelism (bool) – Whether to use sequence parallelism.

  • use_megatron_fsdp (bool) – Whether to use Megatron FSDP.

  • check_for_nan_in_grad (bool) – Whether to check for NaN in gradients.

  • train_iters (int) – Total number of training iterations.

  • global_batch_size (int) – Global batch size for training.

  • micro_batch_size (int) – Micro batch size for training.

  • seq_length (int) – Sequence length for training data.

  • lr (float) – Learning rate.

  • min_lr (float) – Minimum learning rate for cosine decay.

  • lr_warmup_iters (int) – Number of warmup iterations for the learning rate.

  • lr_decay_iters (Optional[int]) – Number of iterations over which to decay the LR.

  • eval_interval (int) – Interval (in iterations) between evaluation runs.

  • save_interval (int) – Interval (in iterations) between checkpoint saves.

  • use_null_tokenizer (bool) – Whether to use a null tokenizer instead of the HuggingFace tokenizer.

  • precision_config (Optional[Union[MixedPrecisionConfig, str]]) – Precision configuration for the model.

  • comm_overlap_config (Optional[CommOverlapConfig]) – Communication overlap configuration.

Returns:

Configuration for pre-training.

Return type:

ConfigContainer
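
The returned ConfigContainer is typically handed to the training entry point; a sketch assuming the pretrain and GPT forward_step entry points exposed by megatron.bridge.training:

```python
# Sketch only: assumes the pretrain entry point and GPT forward_step
# provided by megatron.bridge.training; launch under torchrun in practice.
from megatron.bridge.recipes.qwen.qwen2 import qwen25_7b_pretrain_config
from megatron.bridge.training.gpt_step import forward_step
from megatron.bridge.training.pretrain import pretrain

if __name__ == "__main__":
    config = qwen25_7b_pretrain_config(mock=True)
    pretrain(config, forward_step)
```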