Important
NeMo 2.0 is an experimental feature and is currently released only in the dev container: nvcr.io/nvidia/nemo:dev. Please refer to the NeMo 2.0 overview for information on getting started.
SFT and PEFT Examples
We offer many examples that show how to run Supervised Fine-Tuning (SFT) and Parameter-Efficient Fine-Tuning (PEFT) methods across a variety of models, presented as playbooks or NeMo Framework Launcher commands.
Most of the PEFT examples use LoRA, since it is the most common PEFT method. However, you can easily switch to other PEFT methods by changing model.peft.peft_scheme to ptuning, ia3, or adapter, as shown in the sketch below. You can also switch to SFT by setting model.peft.peft_scheme to null (along with other changes, such as the learning rate, if applicable).
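As a minimal sketch, the snippet below shows how this override is typically passed to NeMo's unified fine-tuning script (megatron_gpt_finetuning.py) via Hydra; the paths, dataset files, and GPU count are illustrative placeholders, not values from this page.

    # Minimal sketch: LoRA PEFT with NeMo's unified fine-tuning script.
    # Change model.peft.peft_scheme to ptuning, ia3, or adapter to switch
    # PEFT methods, or set it to null (with an adjusted learning rate) for SFT.
    # All paths and the GPU count are placeholders.
    torchrun --nproc_per_node=8 \
        /opt/NeMo/examples/nlp/language_modeling/tuning/megatron_gpt_finetuning.py \
        trainer.devices=8 \
        model.restore_from_path=/path/to/base_model.nemo \
        model.data.train_ds.file_names="[/path/to/train.jsonl]" \
        model.data.validation_ds.file_names="[/path/to/val.jsonl]" \
        model.peft.peft_scheme=lora

Because only the model.peft.peft_scheme value changes between methods, the same command serves every PEFT variant listed above.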
For QLoRA, please refer to the NeMo QLoRA Guide.
Nemotron
Gemma and CodeGemma
Starcoder2
Mistral
Mixtral
Llama
Falcon
Baichuan2
T5
PEFT Launcher Command