Important
NeMo 2.0 is an experimental feature and is currently released only in the dev container: nvcr.io/nvidia/nemo:dev. Please refer to the NeMo 2.0 overview for information on getting started.
Training with Predefined Configurations
Run Training
To run Falcon training, update conf/config.yaml:
defaults:
  - training: falcon/falcon_7b

stages:
  - training
Specify falcon and the desired model size for the training configuration, in the form falcon/falcon_<model_size>.
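For example, to launch a different predefined size, point the training entry at the corresponding file under conf/training/falcon. A minimal sketch, assuming a falcon_40b configuration ships with your container:

defaults:
  - training: falcon/falcon_40b # assumed file name; use any size present in conf/training/falcon

stages:
  - training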
Execute the launcher pipeline: python3 main.py.
Configuration
Default configurations for model-size-specific training can be found in the folder conf/training/falcon.
The configuration is divided into four sections: run, trainer, exp_manager, and model.
run:
  name: falcon_7b
  results_dir: ${base_results_dir}/${.name}
  time_limit: "0-04:00:00"
  dependency: "singleton"
Set the number of nodes and devices for training:
trainer:
  num_nodes: 16
  devices: 8
  max_steps: 300000 # consumed_samples = global_step * global_batch_size
  max_time: "05:23:30:00" # days:hours:minutes:seconds
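The comment on max_steps shows how total data consumption follows from the step count. A rough worked example, assuming a global batch size of 2048 (the actual value is set by model.global_batch_size in the size-specific configuration file):

# illustration only, assuming model.global_batch_size: 2048
# consumed_samples = global_step * global_batch_size
#                  = 300000 * 2048
#                  = 614,400,000 samples over the full run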
Set configurations for creating a checkpoint:
exp_manager:
  create_checkpoint_callback: True
  checkpoint_callback_params:
    monitor: val_loss
    save_top_k: 10
    mode: min
    always_save_nemo: False # saves nemo file during validation, not implemented for model parallel
    save_nemo_on_train_end: False # not recommended when training large models on clusters with short time limits
    filename: 'megatron_falcon--{val_loss:.2f}-{step}-{consumed_samples}'
    model_parallel_size: ${multiply:${training.model.tensor_model_parallel_size}, ${training.model.pipeline_model_parallel_size}}
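model_parallel_size uses the launcher's multiply resolver to combine the tensor and pipeline parallel sizes, giving the checkpoint callback the number of model-parallel ranks a checkpoint is split across. A small illustration with assumed values (not the 7B defaults):

# assuming training.model.tensor_model_parallel_size: 2
# and      training.model.pipeline_model_parallel_size: 4
# model_parallel_size resolves to 2 * 4 = 8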
Set wandb configurations:
exp_manager:
  create_wandb_logger: True
  wandb_logger_kwargs:
    project: nemo_falcon
    name: ${training.run.name}
Set tensor parallel and pipeline parallel size:
model:
  tensor_model_parallel_size: 1
  pipeline_model_parallel_size: 1
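With both values set to 1, the 7B model is trained without model parallelism. Larger Falcon variants typically need their weights split across several GPUs; a hypothetical sketch (the values below are illustrative assumptions, not shipped defaults):

model:
  tensor_model_parallel_size: 8   # assumed: shard each layer across 8 GPUs
  pipeline_model_parallel_size: 4 # assumed: split the layer stack into 4 pipeline stages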
Set data distribution configuration:
model:
  data:
    data_prefix:
      - .0333
      - ${data_dir}/my-falcon_00_text_document
      - .0333
      - ${data_dir}/my-falcon_00_text_document
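data_prefix is a flat list of alternating sampling weights and preprocessed dataset prefixes (the path of a .bin/.idx pair, given without the extension). A sketch of a blend over two shards, with placeholder file names and weights:

model:
  data:
    data_prefix:
      - .5
      - ${data_dir}/my-falcon_shard_a_text_document # placeholder prefix; a matching .bin/.idx pair must exist
      - .5
      - ${data_dir}/my-falcon_shard_b_text_document # placeholder prefix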