bridge.recipes.llama.llama3_70b_16k#
Module Contents#
Functions#
- model_config() – Configure the Llama3 70B model for 16k sequence length training.
- pretrain_config() – Create a pre-training configuration for the Llama3 70B model with 16k sequence length.
Data#
API#
- bridge.recipes.llama.llama3_70b_16k.SEQUENCE_LENGTH_16K#
16384
- bridge.recipes.llama.llama3_70b_16k.model_config(
- tensor_parallelism: int = 8,
- pipeline_parallelism: int = 2,
- pipeline_parallelism_dtype: Optional[torch.dtype] = torch.bfloat16,
- virtual_pipeline_parallelism: Optional[int] = None,
- context_parallelism: int = 2,
- sequence_parallelism: bool = True,
- )
Configure the Llama3 70B model for 16k sequence length training.
- Parameters:
tensor_parallelism (int) – Degree of tensor model parallelism. Default optimized for 70B with 16k sequences.
pipeline_parallelism (int) – Degree of pipeline model parallelism. Default optimized for 70B with 16k sequences.
pipeline_parallelism_dtype (Optional[torch.dtype]) – Data type for pipeline parallelism. Default optimized for 70B with 16k sequences.
virtual_pipeline_parallelism (Optional[int]) – Size of virtual pipeline parallelism. Default optimized for 70B with 16k sequences.
context_parallelism (int) – Degree of context parallelism. Default optimized for 70B with 16k sequences.
sequence_parallelism (bool) – Whether to use sequence parallelism. Default optimized for 70B with 16k sequences.
- Returns:
Configuration for the Llama3 70B model optimized for 16k sequences.
- Return type:
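Below is a minimal usage sketch. It assumes the module is importable as megatron.bridge.recipes.llama.llama3_70b_16k (the full package prefix is not shown on this page), and the override values are illustrative only.

```python
# Minimal sketch; the import path and override values are assumptions for illustration.
from megatron.bridge.recipes.llama.llama3_70b_16k import model_config

# Defaults documented above: TP=8, PP=2, CP=2, sequence parallelism enabled.
provider = model_config()

# Override the parallelism layout, e.g. to add virtual pipeline stages.
provider = model_config(
    tensor_parallelism=8,
    pipeline_parallelism=4,
    virtual_pipeline_parallelism=5,
    context_parallelism=2,
    sequence_parallelism=True,
)
```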
- bridge.recipes.llama.llama3_70b_16k.pretrain_config(
- dir: Optional[str] = None,
- name: str = 'default',
- data_paths: Optional[List[str]] = None,
- data_args_path: Optional[str] = None,
- train_data_path: Optional[List[str]] = None,
- valid_data_path: Optional[List[str]] = None,
- test_data_path: Optional[List[str]] = None,
- per_split_data_args_path: Optional[str] = None,
- mock: bool = False,
- tensor_parallelism: int = 8,
- pipeline_parallelism: int = 2,
- pipeline_parallelism_dtype: Optional[torch.dtype] = torch.bfloat16,
- virtual_pipeline_parallelism: Optional[int] = None,
- context_parallelism: int = 2,
- sequence_parallelism: bool = True,
- train_iters: int = 1168251,
- global_batch_size: int = 512,
- micro_batch_size: int = 1,
- lr: float = 0.0003,
- min_lr: float = 3e-05,
- lr_warmup_iters: int = 2000,
- precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]] = 'bf16_mixed',
- )
Create a pre-training configuration for the Llama3 70B model with 16k sequence length.
This function builds on llama3_70b.pretrain_config() and overrides specific parameters optimized for 16k sequence length training.
- Parameters:
dir (Optional[str]) – Base directory for saving logs and checkpoints.
name (str) – Name of the pre-training run.
data_paths (Optional[List[str]]) – List of paths to dataset files. If None, mock data will be used.
data_args_path (Optional[str]) – Path to file containing data arguments.
train_data_path (Optional[List[str]]) – List of training data paths.
valid_data_path (Optional[List[str]]) – List of validation data paths.
test_data_path (Optional[List[str]]) – List of test data paths.
per_split_data_args_path (Optional[str]) – Path to JSON file with per-split data configuration.
mock (bool) – Whether to use mock data. If True, ignores data_paths.
tensor_parallelism (int) – Degree of tensor model parallelism. Default optimized for 70B with 16k sequences.
pipeline_parallelism (int) – Degree of pipeline model parallelism. Default optimized for 70B with 16k sequences.
pipeline_parallelism_dtype (Optional[torch.dtype]) – Data type for pipeline parallelism. Default optimized for 70B with 16k sequences.
virtual_pipeline_parallelism (Optional[int]) – Size of virtual pipeline parallelism. Default optimized for 70B with 16k sequences.
context_parallelism (int) – Degree of context parallelism. Default optimized for 70B with 16k sequences.
sequence_parallelism (bool) – Whether to use sequence parallelism. Default optimized for 70B with 16k sequences.
train_iters (int) – Total number of training iterations.
global_batch_size (int) – Global batch size for training.
micro_batch_size (int) – Micro batch size for training.
lr (float) – Learning rate.
min_lr (float) – Minimum learning rate for cosine decay.
lr_warmup_iters (int) – Number of warmup iterations for the learning rate.
precision_config (Optional[Union[MixedPrecisionConfig, str]]) – Precision configuration for the model.
- Returns:
Configuration for pre-training.
- Return type:
Note: Sequence length is set to SEQUENCE_LENGTH_16K (16384) for extended sequence training. Default parallelism settings are tuned to train the 70B model efficiently at 16k sequence length.
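The sketch below shows two ways this recipe might be invoked. The import path, run names, output directory, and dataset prefixes are placeholders; how the returned configuration is handed to a training entry point is outside the scope of this page.

```python
# Minimal sketch; paths, names, and the import prefix are assumptions for illustration.
from megatron.bridge.recipes.llama.llama3_70b_16k import pretrain_config

# Smoke test with mock data: data_paths are ignored when mock=True.
cfg = pretrain_config(
    name="llama3_70b_16k_smoke",
    mock=True,
    train_iters=100,
    global_batch_size=32,
)

# Real run: dataset prefixes plus a base directory for logs and checkpoints.
cfg = pretrain_config(
    dir="/results/llama3_70b_16k",
    name="llama3_70b_16k_pretrain",
    data_paths=["/data/corpus_a_text_document", "/data/corpus_b_text_document"],
    lr=3e-4,
    min_lr=3e-5,
    lr_warmup_iters=2000,
    precision_config="bf16_mixed",
)
```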