bridge.recipes.mamba.mamba2#
Module Contents#
Classes#
| Mamba2CommonKwargs | Typed options accepted by Mamba2 recipe helper functions. |
Functions#
| mamba2_130m_pretrain_config | Return a pre-training config for Mamba2 130M. |
| mamba2_370m_pretrain_config | Return a pre-training config for Mamba2 370M. |
| mamba2_780m_pretrain_config | Return a pre-training config for Mamba2 780M. |
| mamba2_1p3b_pretrain_config | Return a pre-training config for Mamba2 1.3B. |
| mamba2_2p7b_pretrain_config | Return a pre-training config for Mamba2 2.7B. |
| mamba2_8b_pretrain_config | Return a pre-training config for Mamba2 8B. |
| mamba2_hybrid_8b_pretrain_config | Return a pre-training config for Mamba2 Hybrid 8B. |
| _mamba2_common | Create a pre-training configuration for Mamba 2.x models. |
Data#
| __all__ | |
API#
- class bridge.recipes.mamba.mamba2.Mamba2CommonKwargs#
Bases: typing_extensions.TypedDict
Typed options accepted by Mamba2 recipe helper functions.
- model_provider: type[megatron.bridge.models.mamba.MambaModelProvider130M] | type[megatron.bridge.models.mamba.MambaModelProvider370M] | type[megatron.bridge.models.mamba.MambaModelProvider780M] | type[megatron.bridge.models.mamba.MambaModelProvider1P3B] | type[megatron.bridge.models.mamba.MambaModelProvider2P7B] | type[megatron.bridge.models.mamba.NVIDIAMambaModelProvider8B] | type[megatron.bridge.models.mamba.NVIDIAMambaHybridProvider8B]#
None
- tokenizer_model: str | None#
None
- dir: str | None#
None
- name: str#
None
- data_paths: list[str] | None#
None
- data_args_path: str | None#
None
- train_data_path: list[str] | None#
None
- valid_data_path: list[str] | None#
None
- test_data_path: list[str] | None#
None
- per_split_data_args_path: str | None#
None
- mock: bool#
None
- tensor_parallelism: int#
None
- pipeline_parallelism: int#
None
- pipeline_parallelism_dtype: torch.dtype | None#
None
- virtual_pipeline_parallelism: int | None#
None
- context_parallelism: int#
None
- sequence_parallelism: bool#
None
- train_iters: int#
None
- global_batch_size: int#
None
- micro_batch_size: int#
None
- seq_length: int#
None
- lr: float#
None
- min_lr: float#
None
- lr_warmup_iters: int#
None
- lr_decay_iters: int | None#
None
- use_null_tokenizer: bool#
None
- precision_config: megatron.bridge.training.mixed_precision.MixedPrecisionConfig | str | None#
None
- comm_overlap_config: megatron.bridge.training.comm_overlap.CommOverlapConfig | None#
None
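Because Mamba2CommonKwargs is a TypedDict, the keys above can be collected in an ordinary dict and splatted into any of the recipe helpers. A minimal sketch, assuming the module is importable as megatron.bridge.recipes.mamba.mamba2 (consistent with the megatron.bridge paths in the annotations above) and that partial key sets are accepted:

```python
from megatron.bridge.recipes.mamba.mamba2 import mamba2_130m_pretrain_config

# Override bundle built only from keys documented above. mock=True selects
# synthetic data, so no data_paths are needed for a quick smoke test.
overrides = {
    "name": "mamba2_130m_debug",
    "mock": True,
    "train_iters": 100,
    "global_batch_size": 8,
    "micro_batch_size": 1,
    "seq_length": 4096,
}

cfg = mamba2_130m_pretrain_config(**overrides)
```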
- bridge.recipes.mamba.mamba2.mamba2_130m_pretrain_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.mamba.mamba2.Mamba2CommonKwargs])#
Return a pre-training config for Mamba2 130M.
- bridge.recipes.mamba.mamba2.mamba2_370m_pretrain_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.mamba.mamba2.Mamba2CommonKwargs])#
Return a pre-training config for Mamba2 370M.
- bridge.recipes.mamba.mamba2.mamba2_780m_pretrain_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.mamba.mamba2.Mamba2CommonKwargs])#
Return a pre-training config for Mamba2 780M.
- bridge.recipes.mamba.mamba2.mamba2_1p3b_pretrain_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.mamba.mamba2.Mamba2CommonKwargs])#
Return a pre-training config for Mamba2 1.3B.
- bridge.recipes.mamba.mamba2.mamba2_2p7b_pretrain_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.mamba.mamba2.Mamba2CommonKwargs])#
Return a pre-training config for Mamba2 2.7B.
- bridge.recipes.mamba.mamba2.mamba2_8b_pretrain_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.mamba.mamba2.Mamba2CommonKwargs])#
Return a pre-training config for Mamba2 8B.
- bridge.recipes.mamba.mamba2.mamba2_hybrid_8b_pretrain_config(**user_kwargs: typing_extensions.Unpack[bridge.recipes.mamba.mamba2.Mamba2CommonKwargs])#
Return a pre-training config for Mamba2 Hybrid 8B.
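All of the size variants share this keyword surface, so scaling up means picking a different helper and overriding the parallelism keys. A hedged sketch for the 8B recipe; the import path, file paths, and parallelism degrees below are illustrative placeholders, not tuned recommendations:

```python
from megatron.bridge.recipes.mamba.mamba2 import mamba2_8b_pretrain_config

cfg = mamba2_8b_pretrain_config(
    name="mamba2_8b_pretrain",
    tokenizer_model="/path/to/tokenizer.model",  # placeholder path
    data_paths=["/path/to/dataset_prefix"],      # placeholder path
    tensor_parallelism=4,       # illustrative degree, not a tuned layout
    sequence_parallelism=True,  # typically paired with tensor parallelism
    precision_config="bf16_mixed",
)
```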
- bridge.recipes.mamba.mamba2._mamba2_common(
    model_provider: type[megatron.bridge.models.mamba.MambaModelProvider130M] | type[megatron.bridge.models.mamba.MambaModelProvider370M] | type[megatron.bridge.models.mamba.MambaModelProvider780M] | type[megatron.bridge.models.mamba.MambaModelProvider1P3B] | type[megatron.bridge.models.mamba.MambaModelProvider2P7B] | type[megatron.bridge.models.mamba.NVIDIAMambaModelProvider8B] | type[megatron.bridge.models.mamba.NVIDIAMambaHybridProvider8B],
    tokenizer_model: str | None = None,
    dir: str | None = None,
    name: str = 'default',
    data_paths: list[str] | None = None,
    data_args_path: str | None = None,
    train_data_path: list[str] | None = None,
    valid_data_path: list[str] | None = None,
    test_data_path: list[str] | None = None,
    per_split_data_args_path: str | None = None,
    mock: bool = False,
    tensor_parallelism: int = 1,
    pipeline_parallelism: int = 1,
    pipeline_parallelism_dtype: torch.dtype | None = None,
    virtual_pipeline_parallelism: int | None = None,
    context_parallelism: int = 1,
    sequence_parallelism: bool = False,
    train_iters: int = 1168251,
    global_batch_size: int = 8,
    micro_batch_size: int = 1,
    seq_length: int = 4096,
    lr: float = 0.0003,
    min_lr: float = 3e-05,
    lr_warmup_iters: int = 2000,
    lr_decay_iters: int | None = None,
    use_null_tokenizer: bool = False,
    precision_config: megatron.bridge.training.mixed_precision.MixedPrecisionConfig | str | None = 'bf16_mixed',
    comm_overlap_config: megatron.bridge.training.comm_overlap.CommOverlapConfig | None = None,
)#
Create a pre-training configuration for Mamba 2.x models.
Args mirror the individual recipe helpers; see those functions for recommended defaults.
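The public helpers above differ only in the provider class they bind before deferring to this common builder. A plausible sketch of that delegation, for illustration only; the actual wiring lives in the module source:

```python
from megatron.bridge.models.mamba import MambaModelProvider130M

def mamba2_130m_pretrain_config_sketch(**user_kwargs):
    # Hypothetical re-derivation of the 130M helper: bind the provider
    # class and forward every other documented keyword unchanged.
    # _mamba2_common is private, so this only works from within the module.
    return _mamba2_common(model_provider=MambaModelProvider130M, **user_kwargs)
```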
- bridge.recipes.mamba.mamba2.__all__#
['mamba2_130m_pretrain_config', 'mamba2_370m_pretrain_config', 'mamba2_780m_pretrain_config', 'mamba…