bridge.recipes.llama.llama3#
Module Contents#
Classes#
Llama3CommonKwargs | Typed options accepted by Llama3 family recipe helpers.
Llama3FinetuneKwargs | Typed options accepted by Llama3 finetuning recipe helper functions.
Functions#
llama32_1b_pretrain_config | Return a pre-training config for Llama 3.2 1B.
llama32_3b_pretrain_config | Return a pre-training config for Llama 3.2 3B.
llama3_8b_pretrain_config | Return a pre-training config for Llama 3 8B.
llama3_8b_16k_pretrain_config | Return a pre-training config for Llama 3 8B 16K.
llama3_8b_64k_pretrain_config | Return a pre-training config for Llama 3 8B 64K.
llama3_8b_128k_pretrain_config | Return a pre-training config for Llama 3 8B 128K.
llama3_8b_low_precision_pretrain_config | Return a low precision (FP8 Current Scaling/MXFP8/NVFP4) pre-training config for Llama 3 8B.
llama3_70b_pretrain_config | Return a pre-training config for Llama 3 70B.
llama3_70b_16k_pretrain_config | Return a pre-training config for Llama 3 70B 16K.
llama3_70b_64k_pretrain_config | Return a pre-training config for Llama 3 70B 64K.
llama31_8b_pretrain_config | Return a pre-training config for Llama 3.1 8B.
llama31_70b_pretrain_config | Return a pre-training config for Llama 3.1 70B.
llama31_405b_pretrain_config | Return a pre-training config for Llama 3.1 405B.
_llama3_common | Create a pre-training configuration for Llama3 family models using a given HuggingFace path.
llama32_1b_finetune_config | Return a finetuning config for Llama 3.2 1B.
llama32_3b_finetune_config | Return a finetuning config for Llama 3.2 3B.
llama3_8b_finetune_config | Return a finetuning config for Llama 3 8B.
llama31_8b_finetune_config | Return a finetuning config for Llama 3.1 8B.
llama3_70b_finetune_config | Return a finetuning config for Llama 3 70B.
llama31_70b_finetune_config | Return a finetuning config for Llama 3.1 70B.
llama31_405b_finetune_config | Return a finetuning config for Llama 3.1 405B.
_llama3_finetune_common | Minimal common finetuning configuration.
Data#
SEQUENCE_LENGTH_16K
SEQUENCE_LENGTH_64K
SEQUENCE_LENGTH_128K
API#
- class bridge.recipes.llama.llama3.Llama3CommonKwargs#
Bases: typing_extensions.TypedDict
Typed options accepted by Llama3 family recipe helpers.
Initialization
Initialize self. See help(type(self)) for accurate signature.
- hf_path: str#
None
- dir: str | None#
None
- name: str#
None
- data_paths: list[str] | None#
None
- data_args_path: str | None#
None
- train_data_path: list[str] | None#
None
- valid_data_path: list[str] | None#
None
- test_data_path: list[str] | None#
None
- per_split_data_args_path: str | None#
None
- mock: bool#
None
- tensor_model_parallel_size: int#
None
- pipeline_model_parallel_size: int#
None
- pipeline_dtype: torch.dtype | None#
None
- virtual_pipeline_model_parallel_size: int | None#
None
- context_parallel_size: int#
None
- sequence_parallel: bool#
None
- use_megatron_fsdp: bool#
None
- account_for_embedding_in_pipeline_split: bool#
None
- account_for_loss_in_pipeline_split: bool#
None
- train_iters: int#
None
- global_batch_size: int#
None
- micro_batch_size: int#
None
- seq_length: int#
None
- lr: float#
None
- min_lr: float#
None
- adam_eps: float#
None
- lr_warmup_iters: int#
None
- lr_decay_iters: int | None#
None
- eval_interval: int#
None
- save_interval: int#
None
- use_null_tokenizer: bool#
None
- wandb_project: str | None#
None
- wandb_entity: str | None#
None
- wandb_exp_name: str | None#
None
- precision_config: megatron.bridge.training.mixed_precision.MixedPrecisionConfig | str | None#
None
- comm_overlap_config: megatron.bridge.training.comm_overlap.CommOverlapConfig | None#
None
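The sketch below shows how these typed options can be passed as keyword arguments to one of the pre-training helpers documented on this page. The import path simply mirrors the documented module name, and the directory, run name, and numeric values are illustrative assumptions rather than recommended settings.

```python
from bridge.recipes.llama.llama3 import llama3_8b_pretrain_config

# Every keyword argument below corresponds to a field of Llama3CommonKwargs;
# the concrete values are illustrative, not tuned defaults.
cfg = llama3_8b_pretrain_config(
    dir="/results/llama3_8b",        # base directory for logs and checkpoints
    name="llama3_8b_tp2",
    mock=True,                       # mock data instead of real data_paths
    tensor_model_parallel_size=2,
    sequence_parallel=True,
    global_batch_size=256,
    micro_batch_size=1,
    seq_length=8192,
    precision_config="bf16_mixed",
    wandb_project="llama3-pretrain", # optional experiment tracking
)
```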
- class bridge.recipes.llama.llama3.Llama3FinetuneKwargs#
Bases: typing_extensions.TypedDict
Typed options accepted by Llama3 finetuning recipe helper functions.
This is kept separate from Llama3CommonKwargs to avoid confusion: finetuning uses the SQuAD dataset by default, not the data path fields.
Initialization
Initialize self. See help(type(self)) for accurate signature.
- dir: str | None#
None
- name: str#
None
- pretrained_checkpoint: str | None#
None
- peft: str | megatron.bridge.peft.base.PEFT | None#
None
- packed_sequence: bool#
None
- train_iters: int#
None
- global_batch_size: int | None#
None
- micro_batch_size: int#
None
- seq_length: int | None#
None
- eval_interval: int#
None
- save_interval: int#
None
- finetune_lr: float | None#
None
- min_lr: float#
None
- lr_warmup_iters: int#
None
- lr_decay_iters: int | None#
None
- wandb_project: str | None#
None
- wandb_entity: str | None#
None
- wandb_exp_name: str | None#
None
- precision_config: megatron.bridge.training.mixed_precision.MixedPrecisionConfig | str | None#
None
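As a hedged sketch of how these options are consumed, the snippet below switches between LoRA and full SFT through the peft field when calling one of the finetuning helpers documented below. The string spelling "lora" and the checkpoint paths are assumptions made for illustration; a PEFT object from megatron.bridge.peft could be passed instead of a string.

```python
from bridge.recipes.llama.llama3 import llama3_8b_finetune_config

# LoRA finetuning (the helpers' default PEFT scheme); the string spelling
# "lora" is assumed here for illustration.
lora_cfg = llama3_8b_finetune_config(
    name="llama3_8b_lora",
    pretrained_checkpoint="/checkpoints/llama3_8b",  # placeholder path
    peft="lora",
    train_iters=1000,
)

# Full SFT: peft=None switches the helper to its full-SFT defaults
# (e.g., TP=2 and LR=5e-6 for Llama 3 8B), which can still be overridden.
sft_cfg = llama3_8b_finetune_config(
    name="llama3_8b_sft",
    pretrained_checkpoint="/checkpoints/llama3_8b",  # placeholder path
    peft=None,
    finetune_lr=5e-6,
)
```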
- bridge.recipes.llama.llama3.SEQUENCE_LENGTH_16K: int#
16384
- bridge.recipes.llama.llama3.SEQUENCE_LENGTH_64K: int#
65536
- bridge.recipes.llama.llama3.SEQUENCE_LENGTH_128K: int#
131072
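These constants correspond to the sequence lengths used by the 16K, 64K, and 128K recipe variants. A small illustrative sketch, assuming the documented import path:

```python
from bridge.recipes.llama.llama3 import (
    SEQUENCE_LENGTH_64K,
    llama3_8b_64k_pretrain_config,
)

# Passing the constant explicitly makes the intended context length visible
# in the config call; the 64K helper is expected to use it by default.
cfg = llama3_8b_64k_pretrain_config(mock=True, seq_length=SEQUENCE_LENGTH_64K)
```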
- bridge.recipes.llama.llama3.llama32_1b_pretrain_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.llama.llama3.Llama3CommonKwargs],
Return a pre-training config for Llama 3.2 1B.
See _llama3_common for the full list of parameters.
- bridge.recipes.llama.llama3.llama32_3b_pretrain_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.llama.llama3.Llama3CommonKwargs],
Return a pre-training config for Llama 3.2 3B.
See _llama3_common for the full list of parameters.
- bridge.recipes.llama.llama3.llama3_8b_pretrain_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.llama.llama3.Llama3CommonKwargs],
Return a pre-training config for Llama 3 8B.
See _llama3_common for the full list of parameters.
- bridge.recipes.llama.llama3.llama3_8b_16k_pretrain_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.llama.llama3.Llama3CommonKwargs],
Return a pre-training config for Llama 3 8B 16K.
See _llama3_common for the full list of parameters.
- bridge.recipes.llama.llama3.llama3_8b_64k_pretrain_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.llama.llama3.Llama3CommonKwargs],
Return a pre-training config for Llama 3 8B 64K.
See _llama3_common for the full list of parameters.
- bridge.recipes.llama.llama3.llama3_8b_128k_pretrain_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.llama.llama3.Llama3CommonKwargs],
Return a pre-training config for Llama 3 8B 128K.
See _llama3_common for the full list of parameters.
- bridge.recipes.llama.llama3.llama3_8b_low_precision_pretrain_config(
- mixed_precision_recipe: str,
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.llama.llama3.Llama3CommonKwargs],
Return a low precision (FP8 Current Scaling/MXFP8/NVFP4) pre-training config for Llama 3 8B.
- Parameters:
mixed_precision_recipe (str) – The mixed precision recipe to use. Valid options are:
"bf16_with_mxfp8_mixed"
"bf16_with_fp8_current_scaling_mixed"
"bf16_with_nvfp4_mixed"
user_kwargs (Unpack[Llama3CommonKwargs]) – Additional user-specified configuration options.
- Returns:
The pre-training configuration for Llama 3 8B.
- Return type:
See _llama3_common for the full list of parameters.
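A minimal sketch of selecting one of the documented mixed-precision recipes. The remaining keyword arguments are the same Llama3CommonKwargs accepted by the BF16 helpers; the short-run settings here are illustrative only.

```python
from bridge.recipes.llama.llama3 import llama3_8b_low_precision_pretrain_config

# Pick one of the documented recipe names:
#   "bf16_with_mxfp8_mixed", "bf16_with_fp8_current_scaling_mixed",
#   or "bf16_with_nvfp4_mixed".
cfg = llama3_8b_low_precision_pretrain_config(
    mixed_precision_recipe="bf16_with_mxfp8_mixed",
    mock=True,              # smoke-test sized run on mock data (illustrative)
    train_iters=100,
    global_batch_size=128,
)
```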
- bridge.recipes.llama.llama3.llama3_70b_pretrain_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.llama.llama3.Llama3CommonKwargs],
Return a pre-training config for Llama 3 70B.
See _llama3_common for the full list of parameters.
- bridge.recipes.llama.llama3.llama3_70b_16k_pretrain_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.llama.llama3.Llama3CommonKwargs],
Return a pre-training config for Llama 3 70B 16K.
See _llama3_common for the full list of parameters.
- bridge.recipes.llama.llama3.llama3_70b_64k_pretrain_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.llama.llama3.Llama3CommonKwargs],
Return a pre-training config for Llama 3 70B 64K.
See _llama3_common for the full list of parameters.
- bridge.recipes.llama.llama3.llama31_8b_pretrain_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.llama.llama3.Llama3CommonKwargs],
Return a pre-training config for Llama 3.1 8B.
See _llama3_common for the full list of parameters.
- bridge.recipes.llama.llama3.llama31_70b_pretrain_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.llama.llama3.Llama3CommonKwargs],
Return a pre-training config for Llama 3.1 70B.
See _llama3_common for the full list of parameters.
- bridge.recipes.llama.llama3.llama31_405b_pretrain_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.llama.llama3.Llama3CommonKwargs],
Return a pre-training config for Llama 3.1 405B.
See _llama3_common for the full list of parameters.
- bridge.recipes.llama.llama3._llama3_common(
- hf_path: str,
- dir: str | None = None,
- name: str = 'default',
- load_weights: bool = False,
- data_paths: list[str] | None = None,
- data_args_path: str | None = None,
- train_data_path: list[str] | None = None,
- valid_data_path: list[str] | None = None,
- test_data_path: list[str] | None = None,
- per_split_data_args_path: str | None = None,
- mock: bool = False,
- tensor_model_parallel_size: int = 1,
- pipeline_model_parallel_size: int = 1,
- pipeline_dtype: torch.dtype | None = None,
- virtual_pipeline_model_parallel_size: int | None = None,
- context_parallel_size: int = 1,
- sequence_parallel: bool = False,
- use_megatron_fsdp: bool = False,
- account_for_embedding_in_pipeline_split: bool = False,
- account_for_loss_in_pipeline_split: bool = False,
- train_iters: int = 1168251,
- global_batch_size: int = 512,
- micro_batch_size: int = 1,
- seq_length: int = 8192,
- lr: float = 0.0003,
- min_lr: float = 3e-05,
- adam_eps: float = 1e-05,
- lr_warmup_iters: int = 2000,
- lr_decay_iters: int | None = None,
- eval_interval: int = 2000,
- save_interval: int = 500,
- use_null_tokenizer: bool = True,
- precision_config: megatron.bridge.training.mixed_precision.MixedPrecisionConfig | str | None = 'bf16_mixed',
- comm_overlap_config: megatron.bridge.training.comm_overlap.CommOverlapConfig | None = None,
Create a pre-training configuration for Llama3 family models using a given HuggingFace path.
- Parameters:
hf_path (str) – HuggingFace model path (e.g., "meta-llama/Meta-Llama-3-8B").
dir (Optional[str]) – Base directory for saving logs and checkpoints.
name (str) – Name of the pre-training run.
data_paths (Optional[List[str]]) – List of paths to dataset files. If None, mock data will be used.
data_args_path (Optional[str]) – Path to file containing data arguments.
train_data_path (Optional[List[str]]) – List of training data paths.
valid_data_path (Optional[List[str]]) – List of validation data paths.
test_data_path (Optional[List[str]]) – List of test data paths.
per_split_data_args_path (Optional[str]) – Path to JSON file with per-split data configuration.
mock (bool) – Whether to use mock data. If True, ignores data_paths.
tensor_model_parallel_size (int) – Degree of tensor model parallelism.
pipeline_model_parallel_size (int) – Degree of pipeline model parallelism.
pipeline_dtype (Optional[torch.dtype]) – Data type for pipeline parallelism.
virtual_pipeline_model_parallel_size (Optional[int]) – Size of virtual pipeline parallelism.
context_parallel_size (int) – Degree of context parallelism.
sequence_parallel (bool) – Whether to use sequence parallelism.
use_megatron_fsdp (bool) – Whether to use Megatron FSDP.
account_for_embedding_in_pipeline_split (bool) – Whether to account for the embedding in the pipeline split.
account_for_loss_in_pipeline_split (bool) – Whether to account for the loss in the pipeline split.
train_iters (int) – Total number of training iterations.
global_batch_size (int) – Global batch size for training.
micro_batch_size (int) – Micro batch size for training.
seq_length (int) – Sequence length for training data.
lr (float) – Learning rate.
min_lr (float) – Minimum learning rate for cosine decay.
adam_eps (float) – AdamW epsilon.
lr_warmup_iters (int) – Number of warmup iterations for the learning rate.
lr_decay_iters (Optional[int]) – Number of iterations over which to decay the LR.
precision_config (Optional[Union[MixedPrecisionConfig, str]]) – Precision configuration for the model.
comm_overlap_config (Optional[CommOverlapConfig]) – Communication overlap configuration.
- Returns:
Configuration for pre-training.
- Return type:
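Since _llama3_common is an internal helper, its parameters are normally reached through the public per-model wrappers. The sketch below assumes that pattern for a run on a real data blend rather than mock data; the dataset prefixes and iteration counts are placeholders.

```python
from bridge.recipes.llama.llama3 import llama31_8b_pretrain_config

# Placeholder dataset prefixes; providing data_paths replaces the mock data.
cfg = llama31_8b_pretrain_config(
    dir="/results/llama31_8b",
    data_paths=[
        "/data/corpus_a_text_document",
        "/data/corpus_b_text_document",
    ],
    tensor_model_parallel_size=2,
    context_parallel_size=2,
    train_iters=300_000,
    lr_decay_iters=300_000,
)
```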
- bridge.recipes.llama.llama3.llama32_1b_finetune_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.llama.llama3.Llama3FinetuneKwargs],
Return a finetuning config for Llama 3.2 1B.
Default configuration: 1 node, 8 GPUs, LoRA
LoRA (default): TP=1, PP=1, LR=1e-4, dim=8, alpha=16
DoRA: TP=1, PP=1, LR=1e-4, dim=8, alpha=16
Full SFT (peft=None): TP=1, PP=1, LR=5e-6
- bridge.recipes.llama.llama3.llama32_3b_finetune_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.llama.llama3.Llama3FinetuneKwargs],
Return a finetuning config for Llama 3.2 3B.
Default configuration: 1 node, 8 GPUs, LoRA
LoRA (default): TP=1, PP=1, LR=1e-4, dim=8, alpha=16
DoRA: TP=1, PP=1, LR=1e-4, dim=8, alpha=16
Full SFT (peft=None): TP=1, PP=1, LR=5e-6
- bridge.recipes.llama.llama3.llama3_8b_finetune_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.llama.llama3.Llama3FinetuneKwargs],
Return a finetuning config for Llama 3 8B.
Default configuration: 1 node, 8 GPUs, LoRA
LoRA (default): TP=1, PP=1, LR=1e-4, dim=8, alpha=16
DoRA: TP=1, PP=1, LR=1e-4, dim=8, alpha=16
Full SFT (peft=None): TP=2, PP=1, LR=5e-6
- bridge.recipes.llama.llama3.llama31_8b_finetune_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.llama.llama3.Llama3FinetuneKwargs],
Return a finetuning config for Llama 3.1 8B.
Default configuration: 1 node, 8 GPUs, LoRA
LoRA (default): TP=1, PP=1, LR=1e-4, dim=8, alpha=16
DoRA: TP=1, PP=1, LR=1e-4, dim=8, alpha=16
Full SFT (peft=None): TP=2, PP=1, LR=5e-6
- bridge.recipes.llama.llama3.llama3_70b_finetune_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.llama.llama3.Llama3FinetuneKwargs],
Return a finetuning config for Llama 3 70B.
Default configuration: 1 node, 8 GPUs, LoRA
LoRA (default): TP=8, PP=1, LR=1e-4, dim=16, alpha=32
Full SFT (peft=None): TP=8, PP=4, VPP=5, LR=5e-6 (requires 4 nodes)
- bridge.recipes.llama.llama3.llama31_70b_finetune_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.llama.llama3.Llama3FinetuneKwargs],
Return a finetuning config for Llama 3.1 70B.
Default configuration: 1 node, 8 GPUs, LoRA
LoRA (default): TP=8, PP=1, LR=1e-4, dim=16, alpha=32
DoRA: TP=8, PP=1, LR=1e-4, dim=16, alpha=32
Full SFT (peft=None): TP=8, PP=4, VPP=5, LR=5e-6 (requires 4 nodes)
- bridge.recipes.llama.llama3.llama31_405b_finetune_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.llama.llama3.Llama3FinetuneKwargs],
Return a finetuning config for Llama 3.1 405B.
Default configuration: 4 nodes (LoRA) or 16 nodes (Full SFT), 8 GPUs per node
LoRA (default): TP=4, PP=8, VPP=8, CP=1, LR=1e-4, dim=16, alpha=32, GBS=32, SP=True. Total: 32 GPUs (4 nodes). Note: 128 effective layers ÷ 8 = 16 layers/rank; VPP=8 splits these into 2 layers per virtual stage.
DoRA: TP=4, PP=8, VPP=8, CP=1, LR=1e-4, dim=16, alpha=32, GBS=32, SP=True. Total: 32 GPUs (4 nodes).
Full SFT (peft=None): TP=8, PP=16, VPP=None, CP=1, LR=5e-6, GBS=6, SP=True. Total: 128 GPUs (16 nodes). Note: 128 effective layers ÷ 16 = 8 layers/rank.
- bridge.recipes.llama.llama3._llama3_finetune_common(
- hf_path: str,
- dir: str | None = None,
- name: str = 'default',
- pretrained_checkpoint: str | None = None,
- packed_sequence: bool = False,
- train_iters: int = 1000,
- global_batch_size: int | None = None,
- micro_batch_size: int = 1,
- seq_length: int | None = None,
- eval_interval: int = 30,
- save_interval: int = 50,
- finetune_lr: float = 0.0001,
- min_lr: float = 0.0,
- lr_warmup_iters: int = 50,
- lr_decay_iters: int | None = None,
- wandb_project: str | None = None,
- wandb_entity: str | None = None,
- wandb_exp_name: str | None = None,
- precision_config: megatron.bridge.training.mixed_precision.MixedPrecisionConfig | str | None = 'bf16_mixed',
Minimal common finetuning configuration.
This function provides only the basic setup. Individual model configs handle parallelism settings depending on PEFT or full SFT.
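To illustrate how these minimal common settings surface through a per-model helper, the sketch below enables packed sequences with a custom schedule. The checkpoint path and project name are placeholders, and the values shown are illustrative rather than recommended.

```python
from bridge.recipes.llama.llama3 import llama32_1b_finetune_config

# The keyword arguments here map onto the common finetuning parameters above.
cfg = llama32_1b_finetune_config(
    pretrained_checkpoint="/checkpoints/llama32_1b",  # placeholder path
    packed_sequence=True,             # pack multiple samples per sequence
    seq_length=4096,
    finetune_lr=1e-4,
    lr_warmup_iters=50,
    save_interval=200,
    wandb_project="llama3-finetune",  # optional experiment tracking
)
```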