bridge.recipes.nemotronh.nemotronh#
Module Contents#
Classes#
- NemotronHCommonKwargs – Typed options accepted by NemotronH recipe helper functions.
- NemotronHFinetuneKwargs – Typed options accepted by NemotronH finetuning recipe helper functions.
Functions#
- nemotronh_4b_pretrain_config – Return a pre-training config for NemotronH 4B.
- nemotronh_8b_pretrain_config – Return a pre-training config for NemotronH 8B.
- nemotronh_47b_pretrain_config – Return a pre-training config for NemotronH 47B.
- nemotronh_56b_pretrain_config – Return a pre-training config for NemotronH 56B.
- _nemotronh_common – Create a pre-training configuration for NemotronH and Nemotron Nano v2 models.
- nemotronh_4b_finetune_config – Return a finetuning config for NemotronH 4B.
- nemotronh_8b_finetune_config – Return a finetuning config for NemotronH 8B.
- nemotronh_47b_finetune_config – Return a finetuning config for NemotronH 47B.
- nemotronh_56b_finetune_config – Return a finetuning config for NemotronH 56B.
- _nemotronh_finetune_common – Common finetuning configuration for NemotronH and Nemotron Nano v2 models.
Data#
- __all__
API#
- class bridge.recipes.nemotronh.nemotronh.NemotronHCommonKwargs#
Bases: typing_extensions.TypedDict

Typed options accepted by NemotronH recipe helper functions.
Initialization
Initialize self. See help(type(self)) for accurate signature.
- model_provider: type[megatron.bridge.models.NemotronHModelProvider]#
None
- tokenizer_model: str | None#
None
- dir: str | None#
None
- name: str#
None
- data_paths: list[str] | None#
None
- data_args_path: str | None#
None
- train_data_path: list[str] | None#
None
- valid_data_path: list[str] | None#
None
- test_data_path: list[str] | None#
None
- per_split_data_args_path: str | None#
None
- mock: bool#
None
- tensor_model_parallel_size: int#
None
- pipeline_model_parallel_size: int#
None
- pipeline_dtype: torch.dtype | None#
None
- virtual_pipeline_model_parallel_size: int | None#
None
- context_parallel_size: int#
None
- sequence_parallel: bool#
None
- train_iters: int#
None
- global_batch_size: int#
None
- micro_batch_size: int#
None
- seq_length: int#
None
- lr: float#
None
- min_lr: float#
None
- lr_warmup_iters: int#
None
- lr_decay_iters: int | None#
None
- use_null_tokenizer: bool#
None
- precision_config: megatron.bridge.training.mixed_precision.MixedPrecisionConfig | str | None#
None
- comm_overlap_config: megatron.bridge.training.comm_overlap.CommOverlapConfig | None#
None
- enable_default_comm_overlap: bool#
None
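The keys above map one-to-one to the keyword arguments of the recipe helpers documented below, so a static type checker can validate overrides before they reach a recipe. A minimal sketch, assuming the module is importable as megatron.bridge.recipes.nemotronh.nemotronh (inferred from the qualified names on this page) and that the TypedDict declares its keys as optional:

```python
from megatron.bridge.recipes.nemotronh.nemotronh import (
    NemotronHCommonKwargs,
    nemotronh_4b_pretrain_config,
    nemotronh_8b_pretrain_config,
)

# Shared overrides typed against the TypedDict; at runtime this is a plain dict,
# but a static checker can flag unknown keys or mistyped values.
shared_overrides: NemotronHCommonKwargs = {
    "name": "nemotronh_ablation",   # hypothetical run name
    "mock": True,                   # use mock data; data_paths is ignored
    "train_iters": 200,
    "global_batch_size": 16,
    "seq_length": 8192,
}

cfg_4b = nemotronh_4b_pretrain_config(**shared_overrides)
cfg_8b = nemotronh_8b_pretrain_config(**shared_overrides)
```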
- class bridge.recipes.nemotronh.nemotronh.NemotronHFinetuneKwargs#
Bases: bridge.recipes.nemotronh.nemotronh.NemotronHCommonKwargs

Typed options accepted by NemotronH finetuning recipe helper functions.
Initialization
Initialize self. See help(type(self)) for accurate signature.
- pretrained_checkpoint: str | None#
None
- peft: str | megatron.bridge.peft.base.PEFT | None#
None
- packed_sequence: bool#
None
- finetune_lr: float#
None
- wandb_project: str | None#
None
- wandb_entity: str | None#
None
- wandb_exp_name: str | None#
None
- bridge.recipes.nemotronh.nemotronh.nemotronh_4b_pretrain_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.nemotronh.nemotronh.NemotronHCommonKwargs],
Return a pre-training config for NemotronH 4B.
This recipe is designed for single-node training (1 node). Default parallelism: TP=1, PP=1, SP=False.
See _nemotronh_common for the full list of parameters.
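A hypothetical call with real data instead of mock data, under the same assumed import path; the dataset prefix and tokenizer name below are placeholders, not shipped assets:

```python
from megatron.bridge.recipes.nemotronh.nemotronh import nemotronh_4b_pretrain_config

config = nemotronh_4b_pretrain_config(
    name="nemotronh_4b_pretrain",
    dir="/results/nemotronh_4b",                   # base dir for logs and checkpoints
    data_paths=["/data/my_corpus_text_document"],  # placeholder preprocessed-dataset prefix
    use_null_tokenizer=False,                      # switch from NullTokenizer to a HF tokenizer
    tokenizer_model="<hf-tokenizer-name>",         # placeholder HuggingFace tokenizer name
    train_iters=100_000,
    global_batch_size=768,
)
```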
- bridge.recipes.nemotronh.nemotronh.nemotronh_8b_pretrain_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.nemotronh.nemotronh.NemotronHCommonKwargs],
Return a pre-training config for NemotronH 8B.
This recipe is designed for single-node training (1 node). Default parallelism: TP=2, PP=1, SP=True.
See _nemotronh_common for the full list of parameters.
- bridge.recipes.nemotronh.nemotronh.nemotronh_47b_pretrain_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.nemotronh.nemotronh.NemotronHCommonKwargs],
Return a pre-training config for NemotronH 47B.
This recipe is designed for single-node training (1 node with 8 GPUs). Default parallelism: TP=8, PP=1, SP=True.
Note: Uses FP8 precision by default. Communication overlap is disabled by default due to known issues with FP8 current scaling.
See _nemotronh_common for the full list of parameters.
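FP8 can be swapped out through the same override mechanism; precision_config accepts the string form used as the common recipe's default ('bf16_mixed'). A sketch under the assumed import path (whether the size-specific recipe re-enables communication overlap when asked is an assumption):

```python
from megatron.bridge.recipes.nemotronh.nemotronh import nemotronh_47b_pretrain_config

config = nemotronh_47b_pretrain_config(
    name="nemotronh_47b_bf16",
    precision_config="bf16_mixed",      # override the default FP8 recipe precision
    enable_default_comm_overlap=True,   # assumption: the recipe forwards this flag to _nemotronh_common
)
```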
- bridge.recipes.nemotronh.nemotronh.nemotronh_56b_pretrain_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.nemotronh.nemotronh.NemotronHCommonKwargs],
Return a pre-training config for NemotronH 56B.
This recipe is designed for single-node training (1 node with 8 GPUs). Default parallelism: TP=8, PP=1, SP=True.
Note: Uses FP8 precision by default. Communication overlap is disabled by default due to known issues with FP8 current scaling.
See _nemotronh_common for the full list of parameters.
- bridge.recipes.nemotronh.nemotronh._nemotronh_common(
- model_provider: type[megatron.bridge.models.NemotronHModelProvider],
- tokenizer_model: str | None = None,
- dir: str | None = None,
- name: str = 'default',
- data_paths: list[str] | None = None,
- data_args_path: str | None = None,
- train_data_path: list[str] | None = None,
- valid_data_path: list[str] | None = None,
- test_data_path: list[str] | None = None,
- per_split_data_args_path: str | None = None,
- mock: bool = False,
- tensor_model_parallel_size: int = 1,
- pipeline_model_parallel_size: int = 1,
- pipeline_dtype: torch.dtype | None = torch.bfloat16,
- virtual_pipeline_model_parallel_size: int | None = None,
- context_parallel_size: int = 1,
- sequence_parallel: bool = False,
- train_iters: int = 1168251,
- global_batch_size: int = 768,
- micro_batch_size: int = 1,
- seq_length: int = 8192,
- lr: float = 0.0003,
- min_lr: float = 3e-05,
- lr_warmup_iters: int = 2000,
- lr_decay_iters: int | None = None,
- use_null_tokenizer: bool = True,
- precision_config: megatron.bridge.training.mixed_precision.MixedPrecisionConfig | str | None = 'bf16_mixed',
- comm_overlap_config: megatron.bridge.training.comm_overlap.CommOverlapConfig | None = None,
- enable_default_comm_overlap: bool = True,
Create a pre-training configuration for NemotronH and Nemotron Nano v2 models.
- Parameters:
model_provider – The model provider class for the specific NemotronH or Nemotron Nano v2 variant.
tokenizer_model – HuggingFace tokenizer model name (only used when use_null_tokenizer=False).
dir – Base directory for saving logs and checkpoints.
name – Name of the pre-training run.
data_paths – List of paths to dataset files. If None, mock data will be used.
data_args_path – Path to file containing data arguments.
train_data_path – List of training data paths.
valid_data_path – List of validation data paths.
test_data_path – List of test data paths.
per_split_data_args_path – Path to JSON file with per-split data configuration.
mock – Whether to use mock data. If True, ignores data_paths.
tensor_model_parallel_size – Degree of tensor model parallelism.
pipeline_model_parallel_size – Degree of pipeline model parallelism.
pipeline_dtype – Data type for pipeline parallelism.
virtual_pipeline_model_parallel_size – Size of virtual pipeline parallelism.
context_parallel_size – Degree of context parallelism to be passed to model_config.
sequence_parallel – Whether to use sequence parallelism.
train_iters – Total number of training iterations.
global_batch_size – Global batch size for training.
micro_batch_size – Micro batch size for training.
seq_length – Sequence length for training data.
lr – Learning rate.
min_lr – Minimum learning rate for cosine decay.
lr_warmup_iters – Number of warmup iterations for the learning rate.
lr_decay_iters – Number of iterations for learning rate decay.
use_null_tokenizer – Whether to use NullTokenizer instead of HuggingFaceTokenizer.
precision_config – Precision configuration for the model.
comm_overlap_config – Communication overlap configuration for the model.
enable_default_comm_overlap – Whether to enable the default comm overlap config if none is provided.
- Returns:
Configuration for pre-training.
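The parallelism-related keyword arguments above can be overridden through any of the public size-specific helpers. A sketch with illustrative (not validated) values, under the assumed import path:

```python
import torch

from megatron.bridge.recipes.nemotronh.nemotronh import nemotronh_8b_pretrain_config

config = nemotronh_8b_pretrain_config(
    name="nemotronh_8b_tp4",
    tensor_model_parallel_size=4,
    pipeline_model_parallel_size=1,
    context_parallel_size=1,
    sequence_parallel=True,          # typically paired with tensor parallelism > 1
    pipeline_dtype=torch.bfloat16,
)
```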
- bridge.recipes.nemotronh.nemotronh.nemotronh_4b_finetune_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.nemotronh.nemotronh.NemotronHFinetuneKwargs],
Return a finetuning config for NemotronH 4B.
Default configuration:
- LoRA/DoRA: TP=1, PP=1, LR=1e-4
- Full SFT: TP=1, PP=1, LR=5e-6
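A sketch contrasting the two modes listed above, under the assumed import path; the checkpoint path is a placeholder:

```python
from megatron.bridge.recipes.nemotronh.nemotronh import nemotronh_4b_finetune_config

# LoRA (the default PEFT scheme); the defaults above list LR=1e-4 for LoRA/DoRA.
lora_cfg = nemotronh_4b_finetune_config(
    name="nemotronh_4b_lora",
    pretrained_checkpoint="/checkpoints/nemotronh_4b",  # placeholder path
    peft="lora",
)

# Full SFT: disable PEFT; LR=5e-6 is the full-SFT default listed above,
# passed explicitly here for clarity.
sft_cfg = nemotronh_4b_finetune_config(
    name="nemotronh_4b_sft",
    pretrained_checkpoint="/checkpoints/nemotronh_4b",  # placeholder path
    peft=None,
    finetune_lr=5e-6,
)
```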
- bridge.recipes.nemotronh.nemotronh.nemotronh_8b_finetune_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.nemotronh.nemotronh.NemotronHFinetuneKwargs],
Return a finetuning config for NemotronH 8B.
Default configuration:
- LoRA/DoRA: TP=1, PP=1, LR=1e-4
- Full SFT: TP=2, PP=1, LR=5e-6
- bridge.recipes.nemotronh.nemotronh.nemotronh_47b_finetune_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.nemotronh.nemotronh.NemotronHFinetuneKwargs],
Return a finetuning config for NemotronH 47B.
Default configuration:
- LoRA/DoRA: TP=4, PP=1, LR=1e-4
- Full SFT: TP=8, PP=1, LR=5e-6
Note: Uses FP8 precision by default. Communication overlap is disabled by default.
- bridge.recipes.nemotronh.nemotronh.nemotronh_56b_finetune_config(
- **user_kwargs: typing_extensions.Unpack[bridge.recipes.nemotronh.nemotronh.NemotronHFinetuneKwargs],
Return a finetuning config for NemotronH 56B.
Default configuration:
- LoRA/DoRA: TP=4, PP=1, LR=1e-4
- Full SFT: TP=8, PP=1, LR=5e-6
Note: Uses FP8 precision by default. Communication overlap is disabled by default.
- bridge.recipes.nemotronh.nemotronh._nemotronh_finetune_common(
- model_provider: type[megatron.bridge.models.NemotronHModelProvider],
- tokenizer_model: str | None = None,
- dir: str | None = None,
- name: str = 'default',
- tensor_parallelism: int = 1,
- pipeline_parallelism: int = 1,
- pipeline_parallelism_dtype: torch.dtype | None = torch.bfloat16,
- virtual_pipeline_parallelism: int | None = None,
- context_parallelism: int = 1,
- sequence_parallelism: bool = False,
- pretrained_checkpoint: str | None = None,
- peft: str | megatron.bridge.peft.base.PEFT | None = 'lora',
- packed_sequence: bool = False,
- train_iters: int = 1000,
- global_batch_size: int = 128,
- micro_batch_size: int = 1,
- seq_length: int = 8192,
- eval_interval: int = 50,
- save_interval: int = 50,
- finetune_lr: float = 0.0001,
- min_lr: float = 1e-05,
- lr_warmup_iters: int = 50,
- lr_decay_iters: int | None = None,
- wandb_project: str | None = None,
- wandb_entity: str | None = None,
- wandb_exp_name: str | None = None,
- precision_config: megatron.bridge.training.mixed_precision.MixedPrecisionConfig | str | None = 'bf16_mixed',
- comm_overlap_config: megatron.bridge.training.comm_overlap.CommOverlapConfig | None = None,
- hf_tokenizer_kwargs: dict | None = None,
Common finetuning configuration for NemotronH and Nemotron Nano v2 models.
- Parameters:
model_provider – The model provider class for the specific NemotronH or Nemotron Nano v2 variant.
tokenizer_model – HuggingFace tokenizer model name.
dir – Base directory for saving logs and checkpoints.
name – Name of the finetuning run.
tensor_parallelism – Degree of tensor model parallelism.
pipeline_parallelism – Degree of pipeline model parallelism. Default: 1.
pipeline_parallelism_dtype – Data type for pipeline parallelism. Default: torch.bfloat16.
virtual_pipeline_parallelism – Size of virtual pipeline parallelism.
context_parallelism – Degree of context parallelism. Default: 1.
sequence_parallelism – Whether to use sequence parallelism.
pretrained_checkpoint – Path to pretrained checkpoint to load from.
peft – PEFT configuration (e.g., 'lora', 'dora') or a PEFT object. None for full SFT. Default: 'lora'.
packed_sequence – Whether to use packed sequences. Default: False.
train_iters – Total number of training iterations. Default: 1000.
global_batch_size – Global batch size. Default: 128.
micro_batch_size – Micro batch size. Default: 1.
seq_length – Sequence length. Default: 8192.
eval_interval – Evaluation interval in iterations. Default: 50.
save_interval – Checkpoint save interval in iterations. Default: 50.
finetune_lr – Learning rate for finetuning. Default: 1e-4.
min_lr – Minimum learning rate. Default: 1e-5.
lr_warmup_iters – Number of warmup iterations. Default: 50.
lr_decay_iters – Number of LR decay iterations.
wandb_project – Weights & Biases project name.
wandb_entity – Weights & Biases entity name.
wandb_exp_name – Weights & Biases experiment name.
precision_config – Precision configuration.
comm_overlap_config – Communication overlap configuration.
hf_tokenizer_kwargs – Additional kwargs for the HuggingFace tokenizer (e.g., {"eos_token": "<SPECIAL_12>"}).
- Returns:
Configuration for finetuning.
Note:
- 4B model: TP=1, SP=False, BF16 mixed precision
- 8B model: TP=2 (full SFT) or TP=1 (LoRA), SP=True (full SFT), BF16 mixed precision
- 9B Nano v2: TP=2 (full SFT) or TP=1 (LoRA), SP=True (full SFT), BF16 mixed precision
- 12B Nano v2: TP=4 (full SFT) or TP=1 (LoRA), SP=True (full SFT), BF16 mixed precision
- 47B model: TP=8 (full SFT) or TP=4 (LoRA), SP=True (full SFT), FP8 precision
- 56B model: TP=8 (full SFT) or TP=4 (LoRA), SP=True (full SFT), FP8 precision
- Uses the SQuAD dataset format for finetuning.
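The Weights & Biases and packed-sequence options surface through the public finetuning helpers via NemotronHFinetuneKwargs. A sketch under the assumed import path, with placeholder project, entity, and checkpoint values:

```python
from megatron.bridge.recipes.nemotronh.nemotronh import nemotronh_8b_finetune_config

config = nemotronh_8b_finetune_config(
    name="nemotronh_8b_lora_squad",
    pretrained_checkpoint="/checkpoints/nemotronh_8b",  # placeholder path
    peft="lora",
    packed_sequence=True,                # pack multiple samples per sequence
    wandb_project="nemotronh-finetune",  # placeholder W&B project
    wandb_entity="my-team",              # placeholder W&B entity
    wandb_exp_name="8b-lora-squad",
)
```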
- bridge.recipes.nemotronh.nemotronh.__all__#
['nemotronh_4b_pretrain_config', 'nemotronh_8b_pretrain_config', 'nemotronh_47b_pretrain_config', 'n…