bridge.recipes.nemotronh.nemotron_3_nano#

Module Contents#

Classes#

Nemotron3NanoCommonKwargs

Typed options accepted by Nemotron 3 Nano recipe helper functions.

Nemotron3NanoFinetuneKwargs

Typed options accepted by Nemotron 3 Nano finetune recipe helpers.

Functions#

nemotron_3_nano_pretrain_config

Return a pre-training config for Nemotron 3 Nano.

_nemotron_3_nano_common

Create a pre-training configuration for Nemotron 3 Nano model.

nemotron_3_nano_finetune_config

Return a finetuning config for Nemotron 3 Nano.

_nemotron_3_nano_finetune_common

Common finetuning configuration for Nemotron 3 Nano models.

API#

class bridge.recipes.nemotronh.nemotron_3_nano.Nemotron3NanoCommonKwargs#

Bases: typing_extensions.TypedDict

Typed options accepted by Nemotron 3 Nano recipe helper functions.

model_provider: megatron.bridge.models.nemotronh.Nemotron3NanoProvider#

None

dir: Optional[str]#

None

name: str#

None

data_paths: Optional[list[str]]#

None

data_args_path: Optional[str]#

None

train_data_path: Optional[list[str]]#

None

valid_data_path: Optional[list[str]]#

None

test_data_path: Optional[list[str]]#

None

per_split_data_args_path: Optional[str]#

None

path_to_cache: Optional[str]#

None

mock: bool#

None

tensor_model_parallel_size: int#

None

pipeline_model_parallel_size: int#

None

pipeline_parallelism_dtype: Optional[torch.dtype]#

None

virtual_pipeline_parallelism: Optional[int]#

None

context_parallel_size: int#

None

sequence_parallelism: bool#

None

expert_tensor_parallelism: int#

None

expert_model_parallelism: int#

None

train_iters: int#

None

global_batch_size: int#

None

micro_batch_size: int#

None

seq_length: int#

None

lr: float#

None

min_lr: float#

None

lr_warmup_iters: int#

None

lr_decay_iters: Optional[int]#

None

use_null_tokenizer: bool#

None

precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]]#

None

comm_overlap_config: Optional[megatron.bridge.training.comm_overlap.CommOverlapConfig]#

None

enable_deepep: bool#

None
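
Because nemotron_3_nano_pretrain_config accepts these options through typing_extensions.Unpack, the same keys can be collected in a dict typed as Nemotron3NanoCommonKwargs and splatted in. A minimal sketch, assuming the module is importable as megatron.bridge.recipes.nemotronh.nemotron_3_nano (the path mirrors this page and may differ in your installation) and that the TypedDict is declared total=False, as its role as a bag of optional overrides suggests:

```python
from megatron.bridge.recipes.nemotronh.nemotron_3_nano import (
    Nemotron3NanoCommonKwargs,
    nemotron_3_nano_pretrain_config,
)

# Collect overrides once, then unpack them into the recipe helper; every key
# here is a documented Nemotron3NanoCommonKwargs field. The partial dict
# assumes the TypedDict is non-total.
overrides: Nemotron3NanoCommonKwargs = {
    "name": "nano_tp8",              # hypothetical run name
    "mock": True,                    # use mock data; data_paths is ignored
    "tensor_model_parallel_size": 8,
    "sequence_parallelism": True,
}
config = nemotron_3_nano_pretrain_config(**overrides)
```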

bridge.recipes.nemotronh.nemotron_3_nano.nemotron_3_nano_pretrain_config(
**user_kwargs: typing_extensions.Unpack[bridge.recipes.nemotronh.nemotron_3_nano.Nemotron3NanoCommonKwargs],
) → megatron.bridge.training.config.ConfigContainer#

Return a pre-training config for Nemotron 3 Nano.

This recipe is designed for multi-node training. Default parallelism: TP=4, PP=1, EP=8, SP=True, with DeepEP enabled.

See _nemotron_3_nano_common for the full list of parameters.
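
A minimal usage sketch, passing the documented keys directly as keyword arguments; the output directory and run name are placeholders:

```python
from megatron.bridge.recipes.nemotronh.nemotron_3_nano import (
    nemotron_3_nano_pretrain_config,
)

# Build a pre-training ConfigContainer with mock data and the documented
# parallelism defaults (TP=4, PP=1, SP=True, DeepEP enabled), shortened
# for a smoke test.
config = nemotron_3_nano_pretrain_config(
    dir="/results/nemotron_3_nano",  # placeholder output directory
    name="pretrain_smoke_test",
    mock=True,           # generate mock data instead of reading data_paths
    train_iters=100,     # short run; the documented default is 39735
    global_batch_size=512,
    seq_length=8192,     # documented default sequence length
)
```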

bridge.recipes.nemotronh.nemotron_3_nano._nemotron_3_nano_common(
model_provider: type[megatron.bridge.models.nemotronh.Nemotron3NanoProvider],
dir: Optional[str] = None,
name: str = 'default',
data_paths: Optional[list[str]] = None,
data_args_path: Optional[str] = None,
train_data_path: Optional[list[str]] = None,
valid_data_path: Optional[list[str]] = None,
test_data_path: Optional[list[str]] = None,
per_split_data_args_path: Optional[str] = None,
path_to_cache: Optional[str] = None,
mock: bool = False,
tensor_model_parallel_size: int = 4,
pipeline_model_parallel_size: int = 1,
pipeline_parallelism_dtype: Optional[torch.dtype] = torch.bfloat16,
virtual_pipeline_parallelism: Optional[int] = None,
context_parallel_size: int = 1,
sequence_parallelism: bool = True,
expert_tensor_parallelism: int = 1,
expert_model_parallelism: int = 8,
train_iters: int = 39735,
global_batch_size: int = 3072,
micro_batch_size: int = 2,
seq_length: int = 8192,
eval_interval: int = 1000,
save_interval: int = 200,
lr: float = 0.0016,
min_lr: float = 1.6e-05,
lr_warmup_iters: int = 333,
lr_decay_iters: Optional[int] = None,
use_null_tokenizer: bool = False,
precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]] = 'bf16_mixed',
comm_overlap_config: Optional[megatron.bridge.training.comm_overlap.CommOverlapConfig] = None,
enable_deepep: bool = True,
wandb_project: Optional[str] = None,
wandb_entity: Optional[str] = None,
wandb_exp_name: Optional[str] = None,
) → megatron.bridge.training.config.ConfigContainer#

Create a pre-training configuration for Nemotron 3 Nano model.

Parameters:
  • model_provider – The model provider class for the Nemotron 3 Nano variant.

  • dir – Base directory for saving logs and checkpoints.

  • name – Name of the pre-training run.

  • data_paths – List of paths to dataset files. If None, mock data will be used.

  • data_args_path – Path to file containing data arguments.

  • train_data_path – List of training data paths.

  • valid_data_path – List of validation data paths.

  • test_data_path – List of test data paths.

  • per_split_data_args_path – Path to JSON file with per-split data configuration.

  • path_to_cache – Path to cache directory.

  • mock – Whether to use mock data. If True, ignores data_paths.

  • tensor_model_parallel_size – Degree of tensor model parallelism.

  • pipeline_model_parallel_size – Degree of pipeline model parallelism.

  • pipeline_parallelism_dtype – Data type for pipeline parallelism.

  • virtual_pipeline_parallelism – Size of virtual pipeline parallelism.

  • context_parallel_size – Degree of context parallelism to be passed to model_config.

  • sequence_parallelism – Whether to use sequence parallelism.

  • expert_tensor_parallelism – Degree of expert tensor parallelism.

  • expert_model_parallelism – Degree of expert model parallelism.

  • train_iters – Total number of training iterations.

  • global_batch_size – Global batch size for training.

  • micro_batch_size – Micro batch size for training.

  • seq_length – Sequence length for training data.

  • eval_interval – Interval (in iterations) between evaluations.

  • save_interval – Interval (in iterations) between checkpoints.

  • lr – Learning rate.

  • min_lr – Minimum learning rate for cosine decay.

  • lr_warmup_iters – Number of warmup iterations for the learning rate.

  • lr_decay_iters – Number of iterations for learning rate decay.

  • use_null_tokenizer – Whether to use NullTokenizer instead of HuggingFaceTokenizer.

  • precision_config – Precision configuration for the model.

  • comm_overlap_config – Communication overlap configuration for the model.

  • enable_deepep – Whether to enable DeepEP for MoE.

  • wandb_project – Weights & Biases project name.

  • wandb_entity – Weights & Biases entity name.

  • wandb_exp_name – Weights & Biases experiment name.

Returns:

Configuration for pre-training.

Return type:

ConfigContainer
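
The data options above are forwarded unchanged from the public wrapper, so a data-backed run is configured the same way. A sketch with placeholder dataset prefixes and an explicit learning-rate schedule; the paths are illustrative stand-ins for Megatron-style preprocessed data:

```python
from megatron.bridge.recipes.nemotronh.nemotron_3_nano import (
    nemotron_3_nano_pretrain_config,
)

# Per-split data paths instead of mock data; prefixes and the cache path
# are placeholders.
config = nemotron_3_nano_pretrain_config(
    name="pretrain_blend",
    train_data_path=["/data/nano_train_text_document"],  # placeholder prefix
    valid_data_path=["/data/nano_valid_text_document"],
    test_data_path=["/data/nano_test_text_document"],
    path_to_cache="/data/index_cache",  # placeholder index cache directory
    lr=1.6e-3,             # documented default learning rate
    min_lr=1.6e-5,         # documented cosine-decay floor
    lr_warmup_iters=333,   # documented default warmup
    lr_decay_iters=39735,  # illustrative: decay over the full default run
)
```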

class bridge.recipes.nemotronh.nemotron_3_nano.Nemotron3NanoFinetuneKwargs#

Bases: typing_extensions.TypedDict

Typed options accepted by Nemotron 3 Nano finetune recipe helpers.

model_provider: megatron.bridge.models.nemotronh.Nemotron3NanoProvider#

None

dir: Optional[str]#

None

name: str#

None

tensor_model_parallel_size: int#

None

pipeline_model_parallel_size: int#

None

pipeline_parallelism_dtype: Optional[torch.dtype]#

None

virtual_pipeline_parallelism: Optional[int]#

None

context_parallel_size: int#

None

sequence_parallelism: bool#

None

expert_tensor_parallelism: int#

None

expert_model_parallelism: int#

None

pretrained_checkpoint: Optional[str]#

None

peft: Optional[Union[str, megatron.bridge.peft.base.PEFT]]#

None

packed_sequence: bool#

None

train_iters: int#

None

global_batch_size: Optional[int]#

None

micro_batch_size: int#

None

seq_length: int#

None

finetune_lr: float#

None

min_lr: float#

None

lr_warmup_iters: int#

None

lr_decay_iters: Optional[int]#

None

eval_interval: int#

None

save_interval: int#

None

precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]]#

None

comm_overlap_config: Optional[megatron.bridge.training.comm_overlap.CommOverlapConfig]#

None

enable_deepep: bool#

None

wandb_project: Optional[str]#

None

wandb_entity: Optional[str]#

None

wandb_exp_name: Optional[str]#

None

bridge.recipes.nemotronh.nemotron_3_nano.nemotron_3_nano_finetune_config(
**user_kwargs: typing_extensions.Unpack[bridge.recipes.nemotronh.nemotron_3_nano.Nemotron3NanoFinetuneKwargs],
) → megatron.bridge.training.config.ConfigContainer#

Return a finetuning config for Nemotron 3 Nano.

Default configuration:

  • LoRA/DoRA: TP=1, PP=1, EP=1, LR=1e-4

  • Full SFT: TP=1, PP=1, EP=8, lower LR (5e-6)
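
A minimal LoRA sketch following those defaults; the checkpoint path and run name are placeholders, and the import path mirrors this page:

```python
from megatron.bridge.recipes.nemotronh.nemotron_3_nano import (
    nemotron_3_nano_finetune_config,
)

# LoRA finetuning with the documented defaults (TP=1, PP=1, EP=1, LR=1e-4).
config = nemotron_3_nano_finetune_config(
    name="nano_lora",
    pretrained_checkpoint="/checkpoints/nemotron_3_nano",  # placeholder path
    peft="lora",         # documented default; "dora" selects DoRA instead
    finetune_lr=1e-4,    # documented LoRA/DoRA learning rate
    seq_length=2048,     # documented default finetuning sequence length
    micro_batch_size=1,
)
```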

bridge.recipes.nemotronh.nemotron_3_nano._nemotron_3_nano_finetune_common(
model_provider: type[megatron.bridge.models.nemotronh.Nemotron3NanoProvider],
dir: Optional[str] = None,
name: str = 'default',
tensor_model_parallel_size: int = 1,
pipeline_model_parallel_size: int = 1,
pipeline_parallelism_dtype: Optional[torch.dtype] = torch.bfloat16,
virtual_pipeline_parallelism: Optional[int] = None,
context_parallel_size: int = 1,
sequence_parallelism: bool = True,
expert_tensor_parallelism: int = 1,
expert_model_parallelism: int = 1,
pretrained_checkpoint: Optional[str] = None,
peft: Optional[Union[str, megatron.bridge.peft.base.PEFT]] = 'lora',
packed_sequence: bool = False,
train_iters: int = 1000,
global_batch_size: int = 128,
micro_batch_size: int = 1,
seq_length: int = 2048,
eval_interval: int = 500,
save_interval: int = 200,
finetune_lr: float = 0.0001,
min_lr: float = 0.0,
lr_warmup_iters: int = 50,
lr_decay_iters: Optional[int] = None,
precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]] = 'bf16_mixed',
comm_overlap_config: Optional[megatron.bridge.training.comm_overlap.CommOverlapConfig] = None,
enable_deepep: bool = True,
wandb_project: Optional[str] = None,
wandb_entity: Optional[str] = None,
wandb_exp_name: Optional[str] = None,
) → megatron.bridge.training.config.ConfigContainer#

Common finetuning configuration for Nemotron 3 Nano models.

Parameters:
  • model_provider – The model provider class for the Nemotron 3 Nano variant.

  • dir – Base directory for saving logs and checkpoints.

  • name – Name of the finetuning run.

  • tensor_model_parallel_size – Degree of tensor model parallelism.

  • pipeline_model_parallel_size – Degree of pipeline model parallelism.

  • pipeline_parallelism_dtype – Data type for pipeline parallelism.

  • virtual_pipeline_parallelism – Size of virtual pipeline parallelism.

  • context_parallel_size – Degree of context parallelism.

  • sequence_parallelism – Whether to use sequence parallelism.

  • expert_tensor_parallelism – Degree of expert tensor parallelism.

  • expert_model_parallelism – Degree of expert model parallelism.

  • pretrained_checkpoint – Path to the pretrained checkpoint.

  • peft – PEFT configuration (e.g., "lora", "dora", "none", or a PEFT object).

  • packed_sequence – Whether to use packed sequences.

  • train_iters – Total number of training iterations.

  • global_batch_size – Global batch size for training.

  • micro_batch_size – Micro batch size for training.

  • seq_length – Sequence length for training data.

  • eval_interval – Interval (in iterations) between evaluations.

  • save_interval – Interval (in iterations) between checkpoints.

  • finetune_lr – Learning rate for finetuning.

  • min_lr – Minimum learning rate.

  • lr_warmup_iters – Number of warmup iterations for the learning rate.

  • lr_decay_iters – Number of iterations for learning rate decay.

  • precision_config – Precision configuration for the model.

  • comm_overlap_config – Communication overlap configuration for the model.

  • enable_deepep – Whether to enable DeepEP for MoE.

  • wandb_project – Weights & Biases project name.

  • wandb_entity – Weights & Biases entity name.

  • wandb_exp_name – Weights & Biases experiment name.

Returns:

Configuration for finetuning.

Return type:

ConfigContainer
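
For full SFT, the same helper is driven through the public wrapper with PEFT disabled. A sketch per the documented full-SFT defaults (EP=8, LR=5e-6); the checkpoint path is a placeholder:

```python
from megatron.bridge.recipes.nemotronh.nemotron_3_nano import (
    nemotron_3_nano_finetune_config,
)

# Full supervised finetuning: disable PEFT and raise expert parallelism.
config = nemotron_3_nano_finetune_config(
    name="nano_full_sft",
    pretrained_checkpoint="/checkpoints/nemotron_3_nano",  # placeholder path
    peft="none",                  # full-parameter finetuning, no adapter
    expert_model_parallelism=8,   # documented full-SFT default EP=8
    finetune_lr=5e-6,             # documented lower LR for full SFT
    global_batch_size=128,        # documented default global batch size
)
```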