Checkpoint Conversion

NVIDIA provides a tool to convert checkpoints from the .ckpt format to the .nemo format. The resulting .nemo checkpoint is used for evaluation and inference.

To run checkpoint conversion, update conf/config.yaml:


defaults:
  - conversion: mixtral/convert_mixtral

stages:
  - conversion

Then execute the launcher pipeline: python3 main.py
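Since the launcher is Hydra-based, the same selection can also be made from the command line instead of editing conf/config.yaml. A minimal sketch, assuming the launcher accepts standard Hydra-style overrides:

# Hedged example: select the conversion stage and config via command-line
# overrides rather than editing conf/config.yaml.
python3 main.py \
    stages='[conversion]' \
    conversion=mixtral/convert_mixtral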

Configuration

Default configurations for conversion can be found in the file conf/conversion/mixtral/convert_mixtral.yaml.


run:
  name: convert_${conversion.run.model_train_name}
  nodes: ${divide_ceil:${conversion.model.model_parallel_size}, 8} # 8 gpus per node
  time_limit: "2:00:00"
  ntasks_per_node: ${divide_ceil:${conversion.model.model_parallel_size}, ${.nodes}}
  convert_name: convert_nemo
  model_train_name: Mixtral-8x7b
  train_dir: ${base_results_dir}/${.model_train_name}
  results_dir: ${.train_dir}/${.convert_name}
  output_path: ${.train_dir}/${.convert_name}
  nemo_file_name: megatron_Mixtral.nemo # name of nemo checkpoint; must be .nemo file
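For reference, with the default model settings shown further below (tensor_model_parallel_size: 8 and pipeline_model_parallel_size: 1), model_parallel_size resolves to 8, so nodes resolves to ceil(8 / 8) = 1 and ntasks_per_node to ceil(8 / 1) = 8, i.e. one node running eight tasks.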

nemo_file_name sets the output filename of the converted .nemo checkpoint.

output_path sets the output location of the converted .nemo checkpoint.
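With the defaults above, the converted checkpoint therefore resolves to the following location (assuming base_results_dir is left unchanged):

${base_results_dir}/Mixtral-8x7b/convert_nemo/megatron_Mixtral.nemo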


model:
  model_type: gpt
  checkpoint_folder: ${conversion.run.train_dir}/results/checkpoints
  checkpoint_name: latest # latest OR name pattern of a checkpoint (e.g. megatron_gpt-*last.ckpt)
  hparams_file: ${conversion.run.train_dir}/results/hparams.yaml
  tensor_model_parallel_size: 8
  pipeline_model_parallel_size: 1
  model_parallel_size: ${multiply:${.tensor_model_parallel_size}, ${.pipeline_model_parallel_size}}
  tokenizer_model: ${data_dir}/mixtral/mixtral_tokenizer.model

checkpoint_folder sets the input checkpoint folder to be used for conversion.

checkpoint_name sets the input checkpoint filename to be used for conversion.
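These fields can also be overridden from the command line. A minimal sketch; the checkpoint filename below is hypothetical and should be replaced by a name or pattern that matches a file in your checkpoint_folder, and the parallel sizes should match those used during training:

# Hedged example: convert a specific checkpoint instead of "latest".
# "megatron_gpt--step=10000-last.ckpt" is a hypothetical filename.
python3 main.py \
    stages='[conversion]' \
    conversion=mixtral/convert_mixtral \
    conversion.model.checkpoint_name='megatron_gpt--step=10000-last.ckpt' \
    conversion.model.tensor_model_parallel_size=8 \
    conversion.model.pipeline_model_parallel_size=1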
