bridge.recipes.llama.llama3_8b_64k#

Module Contents#

Functions#

model_config

Configure the Llama3 8B model for 64k sequence length training.

pretrain_config

Create a pre-training configuration for Llama3 8B model with 64k sequence length.

Data#

API#

bridge.recipes.llama.llama3_8b_64k.SEQUENCE_LENGTH_64K: int = 65536#

bridge.recipes.llama.llama3_8b_64k.model_config(
tensor_parallelism: int = 4,
pipeline_parallelism: int = 2,
pipeline_parallelism_dtype: Optional[torch.dtype] = torch.bfloat16,
virtual_pipeline_parallelism: Optional[int] = None,
context_parallelism: int = 4,
sequence_parallelism: bool = True,
) → megatron.bridge.models.llama.Llama3ModelProvider8B#

Configure the Llama3 8B model for 64k sequence length training.

Parameters:
  • tensor_parallelism (int) – Degree of tensor model parallelism. Default optimized for 64k sequences.

  • pipeline_parallelism (int) – Degree of pipeline model parallelism. Default optimized for 64k sequences.

  • pipeline_parallelism_dtype (Optional[torch.dtype]) – Data type for pipeline parallelism. Default optimized for 64k sequences.

  • virtual_pipeline_parallelism (Optional[int]) – Size of virtual pipeline parallelism.

  • context_parallelism (int) – Degree of context parallelism. Default optimized for 64k sequences.

  • sequence_parallelism (bool) – Whether to use sequence parallelism. Default optimized for 64k sequences.

Returns:

Configuration for the Llama3 8B model optimized for 64k sequences.

Return type:

Llama3ModelProvider8B
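
Example (a minimal, hedged usage sketch; the import path is inferred from the module name above and the parallelism override is an assumed-valid combination, so adjust both to your environment):

```python
# Hypothetical usage sketch; import path assumed from the module name above.
from megatron.bridge.recipes.llama.llama3_8b_64k import model_config

# Default parallelism (TP=4, PP=2, CP=4, sequence parallelism on),
# tuned for 64k-token sequences.
provider = model_config()

# Example override: drop pipeline parallelism in favor of more context
# parallelism (assumed to be a valid layout for the target cluster).
provider_cp8 = model_config(
    pipeline_parallelism=1,
    pipeline_parallelism_dtype=None,
    context_parallelism=8,
)
```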

bridge.recipes.llama.llama3_8b_64k.pretrain_config(
dir: Optional[str] = None,
name: str = 'default',
data_paths: Optional[List[str]] = None,
data_args_path: Optional[str] = None,
train_data_path: Optional[List[str]] = None,
valid_data_path: Optional[List[str]] = None,
test_data_path: Optional[List[str]] = None,
per_split_data_args_path: Optional[str] = None,
mock: bool = False,
tensor_parallelism: int = 4,
pipeline_parallelism: int = 2,
pipeline_parallelism_dtype: Optional[torch.dtype] = torch.bfloat16,
virtual_pipeline_parallelism: Optional[int] = None,
context_parallelism: int = 4,
sequence_parallelism: bool = True,
train_iters: int = 1168251,
global_batch_size: int = 512,
micro_batch_size: int = 1,
lr: float = 0.0003,
min_lr: float = 3e-05,
lr_warmup_iters: int = 2000,
precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]] = 'bf16_mixed',
) → megatron.bridge.training.config.ConfigContainer#

Create a pre-training configuration for Llama3 8B model with 64k sequence length.

Parameters:
  • dir (Optional[str]) – Base directory for saving logs and checkpoints.

  • name (str) – Name of the pre-training run.

  • data_paths (Optional[List[str]]) – List of paths to dataset files. If None, mock data will be used.

  • data_args_path (Optional[str]) – Path to file containing data arguments.

  • train_data_path (Optional[List[str]]) – List of training data paths.

  • valid_data_path (Optional[List[str]]) – List of validation data paths.

  • test_data_path (Optional[List[str]]) – List of test data paths.

  • per_split_data_args_path (Optional[str]) – Path to JSON file with per-split data configuration.

  • mock (bool) – Whether to use mock data. If True, ignores data_paths.

  • tensor_parallelism (int) – Degree of tensor model parallelism. Default optimized for 64k sequences.

  • pipeline_parallelism (int) – Degree of pipeline model parallelism. Default optimized for 64k sequences.

  • pipeline_parallelism_dtype (Optional[torch.dtype]) – Data type for pipeline parallelism. Default optimized for 64k sequences.

  • virtual_pipeline_parallelism (Optional[int]) – Size of virtual pipeline parallelism.

  • context_parallelism (int) – Degree of context parallelism. Default optimized for 64k sequences.

  • sequence_parallelism (bool) – Whether to use sequence parallelism. Default optimized for 64k sequences.

  • train_iters (int) – Total number of training iterations.

  • global_batch_size (int) – Global batch size for training.

  • micro_batch_size (int) – Micro batch size for training.

  • lr (float) – Learning rate.

  • min_lr (float) – Minimum learning rate for cosine decay.

  • lr_warmup_iters (int) – Number of iterations for learning rate warmup.

  • precision_config (Optional[Union[MixedPrecisionConfig, str]]) – Precision recipe for the model.

Returns:

Configuration for pre-training.

Return type:

ConfigContainer

Note

Sequence length is hardcoded to 65536 (64k) for long sequence training. Default parallelism settings are optimized for handling 64k sequences efficiently.
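
Example (a hedged sketch of building a pre-training configuration; the import path is inferred from the module name, and the directory and dataset paths are placeholders, not confirmed defaults):

```python
# Hypothetical usage sketch; the import path is an assumption based on the
# module layout documented above.
from megatron.bridge.recipes.llama.llama3_8b_64k import pretrain_config

# Smoke-test configuration on mock data with a short schedule.
cfg = pretrain_config(
    dir="/tmp/llama3_8b_64k",       # example base directory for logs/checkpoints
    name="llama3_8b_64k_smoke",
    mock=True,                      # ignore data_paths and use mock data
    train_iters=100,
    lr_warmup_iters=10,
    global_batch_size=32,
)

# For a real run, point at preprocessed dataset prefixes instead of mock data
# and keep the recipe's default bf16 mixed-precision setting.
cfg_real = pretrain_config(
    data_paths=["/data/llama3/tokenized_text_document"],  # example path
    precision_config="bf16_mixed",
)
```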