bridge.recipes.gpt_oss.gpt_oss#

Module Contents#

Classes#

GPTOSSCommonKwargs

Typed options accepted by GPT-OSS recipe helpers.

Functions#

_gpt_oss_common

Create a pre-training configuration for GPT-OSS family models from a given Hugging Face model path. Mirrors the structure used in the Llama recipes for consistency.

gpt_oss_20b_pretrain_config

Return a pre-training config for the GPT-OSS 20B variant.

gpt_oss_120b_pretrain_config

Return a pre-training config for the GPT-OSS 120B variant.

API#

class bridge.recipes.gpt_oss.gpt_oss.GPTOSSCommonKwargs#

Bases: typing_extensions.TypedDict

Typed options accepted by GPT-OSS recipe helpers. A usage sketch follows the attribute list below.


hf_path: str#

None

dir: Optional[str]#

None

name: str#

None

data_paths: Optional[List[str]]#

None

data_args_path: Optional[str]#

None

train_data_path: Optional[List[str]]#

None

valid_data_path: Optional[List[str]]#

None

test_data_path: Optional[List[str]]#

None

per_split_data_args_path: Optional[str]#

None

mock: bool#

None

dataset: Optional[Union[megatron.bridge.training.config.GPTDatasetConfig, megatron.bridge.training.config.FinetuningDatasetConfig, megatron.bridge.training.config.DatasetProvider]]#

None

num_layers: int#

None

tensor_model_parallel_size: int#

None

pipeline_model_parallel_size: int#

None

pipeline_parallelism_dtype: Optional[torch.dtype]#

None

virtual_pipeline_model_parallel_size: Optional[int]#

None

context_parallel_size: int#

None

expert_model_parallel_size: Optional[int]#

None

sequence_parallelism: bool#

None

use_megatron_fsdp: bool#

None

account_for_embedding_in_pipeline_split: bool#

None

account_for_loss_in_pipeline_split: bool#

None

cp_comm_type: Optional[str]#

None

train_iters: int#

None

global_batch_size: int#

None

micro_batch_size: int#

None

seq_length: int#

None

lr: float#

None

min_lr: float#

None

lr_warmup_iters: int#

None

lr_decay_iters: Optional[int]#

None

eval_interval: int#

None

save_interval: int#

None

use_null_tokenizer: bool#

None

precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]]#

None

comm_overlap_config: Optional[megatron.bridge.training.comm_overlap.CommOverlapConfig]#

None

pretrained_checkpoint: Optional[str]#

None
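A minimal usage sketch: collecting recipe overrides in a `GPTOSSCommonKwargs`-typed dict and unpacking them into a recipe helper. This assumes the module is importable as `megatron.bridge.recipes.gpt_oss.gpt_oss` (matching the type paths above) and that the TypedDict is declared with `total=False`, as the helper defaults suggest; the path and run name are placeholders.

```python
from megatron.bridge.recipes.gpt_oss.gpt_oss import (
    GPTOSSCommonKwargs,
    gpt_oss_20b_pretrain_config,
)

# Any subset of keys may be supplied; unspecified keys fall back to the
# helper defaults documented below.
overrides: GPTOSSCommonKwargs = {
    "dir": "/results/gpt_oss_20b",  # hypothetical checkpoint/log root
    "name": "gpt_oss_20b_run1",
    "mock": True,                   # mock data instead of real data_paths
    "train_iters": 1000,
    "global_batch_size": 256,
}

config = gpt_oss_20b_pretrain_config(**overrides)
```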

bridge.recipes.gpt_oss.gpt_oss._gpt_oss_common(
hf_path: str,
dir: Optional[str] = None,
name: str = 'default',
data_paths: Optional[List[str]] = None,
data_args_path: Optional[str] = None,
train_data_path: Optional[List[str]] = None,
valid_data_path: Optional[List[str]] = None,
test_data_path: Optional[List[str]] = None,
per_split_data_args_path: Optional[str] = None,
mock: bool = False,
dataset: Optional[Union[megatron.bridge.training.config.GPTDatasetConfig, megatron.bridge.training.config.FinetuningDatasetConfig, megatron.bridge.training.config.DatasetProvider]] = None,
num_layers: Optional[int] = None,
tensor_model_parallel_size: int = 1,
pipeline_model_parallel_size: int = 1,
pipeline_parallelism_dtype: Optional[torch.dtype] = None,
virtual_pipeline_model_parallel_size: Optional[int] = None,
context_parallel_size: int = 1,
expert_model_parallel_size: int = 1,
sequence_parallelism: bool = False,
use_megatron_fsdp: bool = False,
account_for_embedding_in_pipeline_split: bool = False,
account_for_loss_in_pipeline_split: bool = False,
cp_comm_type: Optional[str] = None,
train_iters: int = 1000000,
global_batch_size: int = 512,
micro_batch_size: int = 1,
seq_length: int = 4096,
lr: float = 0.0003,
min_lr: float = 3e-05,
lr_warmup_iters: int = 2000,
lr_decay_iters: Optional[int] = None,
eval_interval: int = 2000,
save_interval: int = 500,
use_null_tokenizer: bool = True,
precision_config: Optional[Union[megatron.bridge.training.mixed_precision.MixedPrecisionConfig, str]] = 'bf16_mixed',
comm_overlap_config: Optional[megatron.bridge.training.comm_overlap.CommOverlapConfig] = None,
pretrained_checkpoint: Optional[str] = None,
) → megatron.bridge.training.config.ConfigContainer#

Create a pre-training configuration for GPT-OSS family models from a given Hugging Face model path. Mirrors the structure used in the Llama recipes for consistency.
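The public variant helpers below presumably delegate to this private function with a fixed `hf_path`. A minimal sketch of such a wrapper, assuming the import path above; the repo id shown is an assumption, not confirmed by this page:

```python
from megatron.bridge.recipes.gpt_oss.gpt_oss import _gpt_oss_common

def gpt_oss_pretrain_config_sketch(**user_kwargs):
    # Pin the variant's Hugging Face path unless the caller overrides it.
    # "openai/gpt-oss-20b" is an assumed repo id.
    user_kwargs.setdefault("hf_path", "openai/gpt-oss-20b")
    return _gpt_oss_common(**user_kwargs)
```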

bridge.recipes.gpt_oss.gpt_oss.gpt_oss_20b_pretrain_config(
**user_kwargs: typing_extensions.Unpack[bridge.recipes.gpt_oss.gpt_oss.GPTOSSCommonKwargs],
) → megatron.bridge.training.config.ConfigContainer#

Return a pre-training config for the GPT-OSS 20B variant.
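A hedged usage sketch: build a 20B pre-training config with a few overrides. All keyword names come from `GPTOSSCommonKwargs` above; the data path is a placeholder.

```python
from megatron.bridge.recipes.gpt_oss.gpt_oss import gpt_oss_20b_pretrain_config

config = gpt_oss_20b_pretrain_config(
    data_paths=["/data/my_corpus_text_document"],  # placeholder blend path
    seq_length=4096,
    micro_batch_size=1,
    global_batch_size=512,
    lr=3e-4,
    min_lr=3e-5,
)
```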

bridge.recipes.gpt_oss.gpt_oss.gpt_oss_120b_pretrain_config(
**user_kwargs: typing_extensions.Unpack[bridge.recipes.gpt_oss.gpt_oss.GPTOSSCommonKwargs],
) → megatron.bridge.training.config.ConfigContainer#

Return a pre-training config for the GPT-OSS 120B variant.
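A hedged sketch for the 120B variant: a model of this size typically needs tensor, pipeline, and expert parallelism. The parallelism sizes below are illustrative assumptions, not recommended settings; `mock=True` keeps the example self-contained.

```python
import torch

from megatron.bridge.recipes.gpt_oss.gpt_oss import gpt_oss_120b_pretrain_config

config = gpt_oss_120b_pretrain_config(
    tensor_model_parallel_size=4,
    pipeline_model_parallel_size=4,
    pipeline_parallelism_dtype=torch.bfloat16,
    expert_model_parallel_size=8,   # GPT-OSS is a mixture-of-experts model
    sequence_parallelism=True,
    mock=True,  # mock data for a smoke test; pass data_paths for real runs
)
```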