We provide many examples that demonstrate how to run PEFT with a variety of models, in the form of playbooks and NeMo Launcher commands.
Most of the examples use LoRA, since it is the most common PEFT method. However, you can easily switch to another PEFT method by setting `model.peft.peft_scheme` to `ptuning`, `ia3`, or `adapter`.
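Switching methods via that config key can be sketched as follows. The script name and other arguments are placeholders, not confirmed by this document; only the `model.peft.peft_scheme` key and the values `lora`, `ptuning`, `ia3`, and `adapter` come from the text above.

```shell
# Hypothetical launch command: the script name is illustrative.
# Change the peft_scheme override to pick a different PEFT method,
# e.g. lora, ptuning, ia3, or adapter.
python your_peft_tuning_script.py \
    model.peft.peft_scheme=ia3
```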
- Playbook: NeMo Framework PEFT
  Launcher: Parameter Efficient Fine-Tuning (PEFT)
- Playbook: peft-with-mistral
  Launcher: Parameter Efficient Fine-Tuning (PEFT)
- Playbook: NeMo Framework PEFT
  Launcher: Parameter Efficient Fine-Tuning (PEFT)
- Playbook and Launcher: Parameter Efficient Fine-Tuning (PEFT)
- Playbook and Launcher: Parameter Efficient Fine-Tuning (PEFT)
- Launcher: Parameter Efficient Fine-Tuning (PEFT)
- Launcher: PEFT Training and Inference