bridge.recipes.nemotronh.nemotronh#

Module Contents#

Classes#

NemotronHCommonKwargs

Typed options accepted by NemotronH recipe helper functions.

NemotronHFinetuneKwargs

Typed options accepted by NemotronH finetuning recipe helper functions.

Functions#

nemotronh_4b_pretrain_config

Return a pre-training config for NemotronH 4B.

nemotronh_8b_pretrain_config

Return a pre-training config for NemotronH 8B.

nemotronh_47b_pretrain_config

Return a pre-training config for NemotronH 47B.

nemotronh_56b_pretrain_config

Return a pre-training config for NemotronH 56B.

_nemotronh_common

Create a pre-training configuration for NemotronH and Nemotron Nano v2 models.

nemotronh_4b_finetune_config

Return a finetuning config for NemotronH 4B.

nemotronh_8b_finetune_config

Return a finetuning config for NemotronH 8B.

nemotronh_47b_finetune_config

Return a finetuning config for NemotronH 47B.

nemotronh_56b_finetune_config

Return a finetuning config for NemotronH 56B.

_nemotronh_finetune_common

Common finetuning configuration for NemotronH and Nemotron Nano v2 models.

Data#

API#

class bridge.recipes.nemotronh.nemotronh.NemotronHCommonKwargs#

Bases: typing_extensions.TypedDict

Typed options accepted by NemotronH recipe helper functions.

Initialization

Initialize self. See help(type(self)) for accurate signature.

model_provider: type[megatron.bridge.models.NemotronHModelProvider]#

None

tokenizer_model: str | None#

None

dir: str | None#

None

name: str#

None

data_paths: list[str] | None#

None

data_args_path: str | None#

None

train_data_path: list[str] | None#

None

valid_data_path: list[str] | None#

None

test_data_path: list[str] | None#

None

per_split_data_args_path: str | None#

None

mock: bool#

None

tensor_model_parallel_size: int#

None

pipeline_model_parallel_size: int#

None

pipeline_dtype: torch.dtype | None#

None

virtual_pipeline_model_parallel_size: int | None#

None

context_parallel_size: int#

None

sequence_parallel: bool#

None

train_iters: int#

None

global_batch_size: int#

None

micro_batch_size: int#

None

seq_length: int#

None

lr: float#

None

min_lr: float#

None

lr_warmup_iters: int#

None

lr_decay_iters: int | None#

None

use_null_tokenizer: bool#

None

precision_config: megatron.bridge.training.mixed_precision.MixedPrecisionConfig | str | None#

None

comm_overlap_config: megatron.bridge.training.comm_overlap.CommOverlapConfig | None#

None

enable_default_comm_overlap: bool#

None
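
The fields above map one-to-one onto the keyword arguments accepted by the *_pretrain_config helpers. A minimal usage sketch follows; the import path is inferred from this page's module name, and the dict-literal style assumes the TypedDict is declared with total=False, as is usual for Unpack-style kwargs.

from bridge.recipes.nemotronh.nemotronh import (  # import path assumed from this page
    NemotronHCommonKwargs,
    nemotronh_4b_pretrain_config,
)

# Collect overrides in a NemotronHCommonKwargs-typed dict so a static type
# checker can flag misspelled keys; omitted fields keep the recipe defaults.
overrides: NemotronHCommonKwargs = {
    "name": "nemotronh_4b_smoke_test",
    "mock": True,             # use mock data instead of data_paths
    "train_iters": 100,
    "global_batch_size": 32,
    "micro_batch_size": 1,
}
cfg = nemotronh_4b_pretrain_config(**overrides)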

class bridge.recipes.nemotronh.nemotronh.NemotronHFinetuneKwargs#

Bases: bridge.recipes.nemotronh.nemotronh.NemotronHCommonKwargs

Typed options accepted by NemotronH finetuning recipe helper functions.

Initialization

Initialize self. See help(type(self)) for accurate signature.

pretrained_checkpoint: str | None#

None

peft: str | megatron.bridge.peft.base.PEFT | None#

None

packed_sequence: bool#

None

finetune_lr: float#

None

wandb_project: str | None#

None

wandb_entity: str | None#

None

wandb_exp_name: str | None#

None

bridge.recipes.nemotronh.nemotronh.nemotronh_4b_pretrain_config(
**user_kwargs: typing_extensions.Unpack[bridge.recipes.nemotronh.nemotronh.NemotronHCommonKwargs],
) → megatron.bridge.training.config.ConfigContainer#

Return a pre-training config for NemotronH 4B.

This recipe is designed for single-node training (1 node). Default parallelism: TP=1, PP=1, SP=False.

See _nemotronh_common for the full list of parameters.

bridge.recipes.nemotronh.nemotronh.nemotronh_8b_pretrain_config(
**user_kwargs: typing_extensions.Unpack[bridge.recipes.nemotronh.nemotronh.NemotronHCommonKwargs],
) → megatron.bridge.training.config.ConfigContainer#

Return a pre-training config for NemotronH 8B.

This recipe is designed for single-node training (1 node). Default parallelism: TP=2, PP=1, SP=True.

See _nemotronh_common for the full list of parameters.

bridge.recipes.nemotronh.nemotronh.nemotronh_47b_pretrain_config(
**user_kwargs: typing_extensions.Unpack[bridge.recipes.nemotronh.nemotronh.NemotronHCommonKwargs],
) → megatron.bridge.training.config.ConfigContainer#

Return a pre-training config for NemotronH 47B.

This recipe is designed for single-node training (1 node with 8 GPUs). Default parallelism: TP=8, PP=1, SP=True.

Note: Uses FP8 precision by default. Communication overlap is disabled by default due to known issues with FP8 current scaling.

See _nemotronh_common for the full list of parameters.
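
For instance, the FP8 default can be replaced through the shared precision_config parameter; a hedged sketch, with the import path assumed from this page's module name:

from bridge.recipes.nemotronh.nemotronh import nemotronh_47b_pretrain_config  # import path assumed

# Swap the default FP8 recipe for BF16 mixed precision; the string form is
# accepted alongside a full MixedPrecisionConfig object.
cfg = nemotronh_47b_pretrain_config(
    precision_config="bf16_mixed",
    tensor_model_parallel_size=8,   # documented default parallelism for 47B
    sequence_parallel=True,
    mock=True,                      # smoke test on mock data
)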

bridge.recipes.nemotronh.nemotronh.nemotronh_56b_pretrain_config(
**user_kwargs: typing_extensions.Unpack[bridge.recipes.nemotronh.nemotronh.NemotronHCommonKwargs],
) → megatron.bridge.training.config.ConfigContainer#

Return a pre-training config for NemotronH 56B.

This recipe is designed for single-node training (1 node with 8 GPUs). Default parallelism: TP=8, PP=1, SP=True.

Note: Uses FP8 precision by default. Communication overlap is disabled by default due to known issues with FP8 current scaling.

See _nemotronh_common for the full list of parameters.

bridge.recipes.nemotronh.nemotronh._nemotronh_common(
model_provider: type[megatron.bridge.models.NemotronHModelProvider],
tokenizer_model: str | None = None,
dir: str | None = None,
name: str = 'default',
data_paths: list[str] | None = None,
data_args_path: str | None = None,
train_data_path: list[str] | None = None,
valid_data_path: list[str] | None = None,
test_data_path: list[str] | None = None,
per_split_data_args_path: str | None = None,
mock: bool = False,
tensor_model_parallel_size: int = 1,
pipeline_model_parallel_size: int = 1,
pipeline_dtype: torch.dtype | None = torch.bfloat16,
virtual_pipeline_model_parallel_size: int | None = None,
context_parallel_size: int = 1,
sequence_parallel: bool = False,
train_iters: int = 1168251,
global_batch_size: int = 768,
micro_batch_size: int = 1,
seq_length: int = 8192,
lr: float = 0.0003,
min_lr: float = 3e-05,
lr_warmup_iters: int = 2000,
lr_decay_iters: int | None = None,
use_null_tokenizer: bool = True,
precision_config: megatron.bridge.training.mixed_precision.MixedPrecisionConfig | str | None = 'bf16_mixed',
comm_overlap_config: megatron.bridge.training.comm_overlap.CommOverlapConfig | None = None,
enable_default_comm_overlap: bool = True,
) → megatron.bridge.training.config.ConfigContainer#

Create a pre-training configuration for NemotronH and Nemotron Nano v2 models.

Parameters:
  • model_provider – The model provider class for the specific NemotronH or Nemotron Nano v2 variant.

  • tokenizer_model – HuggingFace tokenizer model name (only used when use_null_tokenizer=False).

  • dir – Base directory for saving logs and checkpoints.

  • name – Name of the pre-training run.

  • data_paths – List of paths to dataset files. If None, mock data will be used.

  • data_args_path – Path to file containing data arguments.

  • train_data_path – List of training data paths.

  • valid_data_path – List of validation data paths.

  • test_data_path – List of test data paths.

  • per_split_data_args_path – Path to JSON file with per-split data configuration.

  • mock – Whether to use mock data. If True, ignores data_paths.

  • tensor_model_parallel_size – Degree of tensor model parallelism.

  • pipeline_model_parallel_size – Degree of pipeline model parallelism.

  • pipeline_dtype – Data type for pipeline parallelism.

  • virtual_pipeline_model_parallel_size – Size of virtual pipeline parallelism.

  • context_parallel_size – Degree of context parallelism to be passed to model_config.

  • sequence_parallel – Whether to use sequence parallelism.

  • train_iters – Total number of training iterations.

  • global_batch_size – Global batch size for training.

  • micro_batch_size – Micro batch size for training.

  • seq_length – Sequence length for training data.

  • lr – Learning rate.

  • min_lr – Minimum learning rate for cosine decay.

  • lr_warmup_iters – Number of warmup iterations for the learning rate.

  • lr_decay_iters – Number of iterations for learning rate decay.

  • use_null_tokenizer – Whether to use NullTokenizer instead of HuggingFaceTokenizer.

  • precision_config – Precision configuration for the model.

  • comm_overlap_config – Communication overlap configuration for the model.

  • enable_default_comm_overlap – Whether to enable default comm overlap config if none is provided.

Returns:

Configuration for pre-training.

Return type:

ConfigContainer
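
Because the public *_pretrain_config helpers forward their keyword arguments here, the data and tokenizer options can be set directly on any of them. A sketch with placeholder dataset paths and an assumed HuggingFace tokenizer id:

from bridge.recipes.nemotronh.nemotronh import nemotronh_8b_pretrain_config  # import path assumed

# Blended dataset given as a list of path prefixes (placeholder paths); the
# NullTokenizer default is replaced with a real HuggingFace tokenizer.
cfg = nemotronh_8b_pretrain_config(
    data_paths=["/data/corpus_a_text_document", "/data/corpus_b_text_document"],
    use_null_tokenizer=False,
    tokenizer_model="nvidia/Nemotron-H-8B-Base-8K",  # assumed HF id; substitute your own
    seq_length=8192,
    global_batch_size=768,
)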

bridge.recipes.nemotronh.nemotronh.nemotronh_4b_finetune_config(
**user_kwargs: typing_extensions.Unpack[bridge.recipes.nemotronh.nemotronh.NemotronHFinetuneKwargs],
) → megatron.bridge.training.config.ConfigContainer#

Return a finetuning config for NemotronH 4B.

Default configuration:

  • LoRA/DoRA: TP=1, PP=1, LR=1e-4

  • Full SFT: TP=1, PP=1, LR=5e-6
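
A hedged sketch of both modes; the import path is assumed from this page's module name and the checkpoint path is a placeholder:

from bridge.recipes.nemotronh.nemotronh import nemotronh_4b_finetune_config  # import path assumed

# LoRA finetuning (peft="lora" is the documented default) at the listed LR of 1e-4.
lora_cfg = nemotronh_4b_finetune_config(
    pretrained_checkpoint="/checkpoints/nemotronh_4b",  # placeholder path
    peft="lora",
    finetune_lr=1e-4,
)

# Full SFT: disable PEFT and drop to the listed LR of 5e-6.
sft_cfg = nemotronh_4b_finetune_config(
    pretrained_checkpoint="/checkpoints/nemotronh_4b",  # placeholder path
    peft=None,
    finetune_lr=5e-6,
)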

bridge.recipes.nemotronh.nemotronh.nemotronh_8b_finetune_config(
**user_kwargs: typing_extensions.Unpack[bridge.recipes.nemotronh.nemotronh.NemotronHFinetuneKwargs],
) → megatron.bridge.training.config.ConfigContainer#

Return a finetuning config for NemotronH 8B.

Default configuration:

  • LoRA/DoRA: TP=1, PP=1, LR=1e-4

  • Full SFT: TP=2, PP=1, LR=5e-6

bridge.recipes.nemotronh.nemotronh.nemotronh_47b_finetune_config(
**user_kwargs: typing_extensions.Unpack[bridge.recipes.nemotronh.nemotronh.NemotronHFinetuneKwargs],
) → megatron.bridge.training.config.ConfigContainer#

Return a finetuning config for NemotronH 47B.

Default configuration:

  • LoRA/DoRA: TP=4, PP=1, LR=1e-4

  • Full SFT: TP=8, PP=1, LR=5e-6

Note: Uses FP8 precision by default. Communication overlap is disabled by default.

bridge.recipes.nemotronh.nemotronh.nemotronh_56b_finetune_config(
**user_kwargs: typing_extensions.Unpack[bridge.recipes.nemotronh.nemotronh.NemotronHFinetuneKwargs],
) → megatron.bridge.training.config.ConfigContainer#

Return a finetuning config for NemotronH 56B.

Default configuration:

  • LoRA/DoRA: TP=4, PP=1, LR=1e-4

  • Full SFT: TP=8, PP=1, LR=5e-6

Note: Uses FP8 precision by default. Communication overlap is disabled by default.
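
A sketch of a DoRA run with Weights & Biases logging; the import path is assumed from this page's module name, and the paths and names are placeholders:

from bridge.recipes.nemotronh.nemotronh import nemotronh_56b_finetune_config  # import path assumed

# DoRA finetuning (documented alongside LoRA for this recipe) with W&B logging.
cfg = nemotronh_56b_finetune_config(
    pretrained_checkpoint="/checkpoints/nemotronh_56b",  # placeholder path
    peft="dora",
    wandb_project="nemotronh-finetune",   # placeholder project
    wandb_exp_name="nemotronh_56b_dora",  # placeholder run name
)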

bridge.recipes.nemotronh.nemotronh._nemotronh_finetune_common(
model_provider: type[megatron.bridge.models.NemotronHModelProvider],
tokenizer_model: str | None = None,
dir: str | None = None,
name: str = 'default',
tensor_parallelism: int = 1,
pipeline_parallelism: int = 1,
pipeline_parallelism_dtype: torch.dtype | None = torch.bfloat16,
virtual_pipeline_parallelism: int | None = None,
context_parallelism: int = 1,
sequence_parallelism: bool = False,
pretrained_checkpoint: str | None = None,
peft: str | megatron.bridge.peft.base.PEFT | None = 'lora',
packed_sequence: bool = False,
train_iters: int = 1000,
global_batch_size: int = 128,
micro_batch_size: int = 1,
seq_length: int = 8192,
eval_interval: int = 50,
save_interval: int = 50,
finetune_lr: float = 0.0001,
min_lr: float = 1e-05,
lr_warmup_iters: int = 50,
lr_decay_iters: int | None = None,
wandb_project: str | None = None,
wandb_entity: str | None = None,
wandb_exp_name: str | None = None,
precision_config: megatron.bridge.training.mixed_precision.MixedPrecisionConfig | str | None = 'bf16_mixed',
comm_overlap_config: megatron.bridge.training.comm_overlap.CommOverlapConfig | None = None,
hf_tokenizer_kwargs: dict | None = None,
) → megatron.bridge.training.config.ConfigContainer#

Common finetuning configuration for NemotronH and Nemotron Nano v2 models.

Parameters:
  • model_provider – The model provider class for the specific NemotronH or Nemotron Nano v2 variant.

  • tokenizer_model – HuggingFace tokenizer model name.

  • dir – Base directory for saving logs and checkpoints.

  • name – Name of the finetuning run.

  • tensor_parallelism – Degree of tensor model parallelism.

  • pipeline_parallelism – Degree of pipeline model parallelism. Default: 1.

  • pipeline_parallelism_dtype – Data type for pipeline parallelism. Default: torch.bfloat16.

  • virtual_pipeline_parallelism – Size of virtual pipeline parallelism.

  • context_parallelism – Degree of context parallelism. Default: 1.

  • sequence_parallelism – Whether to use sequence parallelism.

  • pretrained_checkpoint – Path to pretrained checkpoint to load from.

  • peft – PEFT configuration (e.g., "lora", "dora") or PEFT object. None for full SFT. Default: "lora".

  • packed_sequence – Whether to use packed sequences. Default: False.

  • train_iters – Total number of training iterations. Default: 1000.

  • global_batch_size – Global batch size. Default: 128.

  • micro_batch_size – Micro batch size. Default: 1.

  • seq_length – Sequence length. Default: 8192.

  • eval_interval – Evaluation interval in iterations. Default: 50.

  • save_interval – Checkpoint save interval in iterations. Default: 50.

  • finetune_lr – Learning rate for finetuning. Default: 1e-4.

  • min_lr – Minimum learning rate. Default: 1e-5.

  • lr_warmup_iters – Number of warmup iterations. Default: 50.

  • lr_decay_iters – Number of LR decay iterations.

  • wandb_project – Weights & Biases project name.

  • wandb_entity – Weights & Biases entity name.

  • wandb_exp_name – Weights & Biases experiment name.

  • precision_config – Precision configuration.

  • comm_overlap_config – Communication overlap configuration.

  • hf_tokenizer_kwargs – Additional kwargs for HuggingFace tokenizer (e.g., {"eos_token": "<SPECIAL_12>"}).

Returns:

Configuration for finetuning.

Return type:

ConfigContainer

Note:

  • 4B model: TP=1, SP=False, BF16 mixed precision

  • 8B model: TP=2 (full SFT) or TP=1 (LoRA), SP=True (full SFT), BF16 mixed precision

  • 9B Nano v2: TP=2 (full SFT) or TP=1 (LoRA), SP=True (full SFT), BF16 mixed precision

  • 12B Nano v2: TP=4 (full SFT) or TP=1 (LoRA), SP=True (full SFT), BF16 mixed precision

  • 47B model: TP=8 (full SFT) or TP=4 (LoRA), SP=True (full SFT), FP8 precision

  • 56B model: TP=8 (full SFT) or TP=4 (LoRA), SP=True (full SFT), FP8 precision

  • Uses SQuAD dataset format for finetuning
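
The public *_finetune_config wrappers do not expose every parameter above (for example hf_tokenizer_kwargs), so a custom recipe variant can call this helper directly. A heavily hedged sketch: the provider passed below is the documented base class and stands in for the concrete size-specific provider, and the paths and tokenizer id are placeholders.

from megatron.bridge.models import NemotronHModelProvider  # documented provider base type
from bridge.recipes.nemotronh.nemotronh import _nemotronh_finetune_common  # import path assumed

# Full SFT with a custom end-of-sequence token, as in the parameter list above.
cfg = _nemotronh_finetune_common(
    model_provider=NemotronHModelProvider,               # stand-in; use the variant's provider class
    tokenizer_model="nvidia/Nemotron-H-8B-Base-8K",      # assumed HF id; substitute your own
    pretrained_checkpoint="/checkpoints/nemotronh_8b",   # placeholder path
    peft=None,                                           # full SFT
    tensor_parallelism=2,                                # documented full-SFT default for 8B
    sequence_parallelism=True,
    hf_tokenizer_kwargs={"eos_token": "<SPECIAL_12>"},   # example from the parameter list above
)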

bridge.recipes.nemotronh.nemotronh.__all__#

['nemotronh_4b_pretrain_config', 'nemotronh_8b_pretrain_config', 'nemotronh_47b_pretrain_config', 'n…