PEFT Training and Inference

User Guide (Latest Version)

Below is an example of how to use the training script for adapter tuning. The TRAIN_FILEs (and VALIDATION_FILEs) follow the same format as in SFT:

python /opt/NeMo/examples/nlp/language_modeling/tuning/megatron_t5_finetuning.py \
    model.language_model_path=<BASE_T5_MODEL> \
    model.data.train_ds=[<TRAIN_FILE1>,<TRAIN_FILE2>,...] \
    model.data.validation_ds=[<VALIDATION_FILE1>,<VALIDATION_FILE2>,...]
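The training files listed in train_ds and validation_ds are JSONL files. As a sketch of the SFT-style format, assuming each line is a JSON object with "input" and "output" fields (the example records and file name below are hypothetical; check your dataset configuration for the exact field names your task uses):

```python
import json

# Hypothetical training examples in an assumed "input"/"output" JSONL layout.
examples = [
    {"input": "Translate to French: Hello", "output": "Bonjour"},
    {"input": "Translate to French: Goodbye", "output": "Au revoir"},
]

# Write one JSON object per line -- this file would be passed as a TRAIN_FILE.
with open("train_1.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Sanity check: the file round-trips, one record per line.
with open("train_1.jsonl") as f:
    rows = [json.loads(line) for line in f]
print(rows[0]["output"])  # -> Bonjour
```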

At the end of tuning, a ‘.nemo’ file is generated containing the parameters of the PEFT model. The PEFT framework also provides an inference script:

python /opt/NeMo/examples/nlp/language_modeling/tuning/megatron_t5_generate.py \
    data.test_ds=[<TEST_FILE>] \
    language_model_path=<BASE_T5_MODEL> \
    adapter_model_file=<PEFT_MODEL> \
    pred_file_path=<OUTPUT_FILE>
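The inference script writes the model's predictions to the file given by pred_file_path. As a sketch of downstream post-processing, assuming the output file contains one plain-text prediction per line (the helper name and demo file below are hypothetical; verify the actual output format for your NeMo version):

```python
# Hypothetical post-processing of the pred_file_path output.
# Assumes one plain-text prediction per line.
def load_predictions(path):
    with open(path) as f:
        return [line.rstrip("\n") for line in f]

# Demo with a stand-in file; the real file is produced by
# megatron_t5_generate.py via pred_file_path=<OUTPUT_FILE>.
with open("demo_preds.txt", "w") as f:
    f.write("Bonjour\nAu revoir\n")

preds = load_predictions("demo_preds.txt")
print(preds)  # -> ['Bonjour', 'Au revoir']
```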

Last updated on Jun 24, 2024.