SFT and PEFT Examples

We offer many examples that show how to run Supervised Fine-Tuning (SFT) and Parameter-Efficient Fine-Tuning (PEFT) on a variety of models, presented as playbooks or NeMo Framework Launcher commands.

Most of the PEFT examples use LoRA, since it is the most common PEFT method. However, you can easily switch to another PEFT method by changing model.peft.peft_scheme to ptuning, ia3, or adapter. You can also switch to full-parameter SFT by setting model.peft.peft_scheme to null (along with other changes, such as the learning rate, where applicable).

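Because these examples share the same Hydra-style configuration, switching methods usually comes down to a single override. Below is a minimal sketch assuming NeMo's megatron_gpt_finetuning.py tuning script; the script path and placeholder arguments are illustrative, and the exact invocation for each model is given in the playbooks and launcher commands below.

```bash
# Illustrative sketch only: the script path and placeholder arguments are
# assumptions; copy the exact invocation from the playbook or launcher
# command for your model. Only the model.peft.peft_scheme override and its
# values (lora, ptuning, ia3, adapter, null) come from this page.
python examples/nlp/language_modeling/tuning/megatron_gpt_finetuning.py \
    model.restore_from_path=<base_model>.nemo \
    model.peft.peft_scheme=lora  # or: ptuning, ia3, adapter; null for SFT
```
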
For QLoRA, please refer to the NeMo QLoRA Guide.

Nemotron

  • PEFT Playbook

  • PEFT Launcher Command

  • SFT Playbook

Gemma and CodeGemma

  • PEFT Playbook and Launcher Command

  • SFT Playbook and Launcher Command

Starcoder2

  • PEFT Playbook and Launcher Command

  • SFT Playbook and Launcher Command

Mistral

  • PEFT Playbook

  • PEFT Launcher Command

  • SFT Playbook

Mixtral

  • PEFT Playbook

  • PEFT Launcher Command

  • SFT Playbook

Llama

  • PEFT Playbook

  • PEFT Launcher Command

  • SFT Playbook

Falcon

  • PEFT Launcher Command

Baichuan2

  • PEFT Launcher Command

T5

  • PEFT Launcher Command

  • SFT Launcher Command