Parameter Efficient Fine-Tuning (PEFT)

User Guide (Latest Version)

We have curated three configurations with suggested hyperparameters specifically for the NVIDIA DGX SuperPOD, whose nodes are each equipped with 8 NVIDIA A100 80GB GPUs. The configurations for the curated models can be found in the conf/peft/neva directory. You can access and modify the parameters to adjust the hyperparameters for your specific training runs, tailoring the model's performance and training efficiency to your needs.

| Language Model | Vision Encoder | Multimodal Connector Type | PEFT Scheme | Tensor Model Parallel Size | Pipeline Model Parallel Size | Batch Size per GPU | Accumulated Global Batch Size | Precision | AMP Level | Total Training Samples Seen |
|---|---|---|---|---|---|---|---|---|---|---|
| LLaMA-2-7B-Chat (frozen) | CLIP-L-336px (frozen) | MLP Layers (trainable) | LoRA | 4 | 1 | 4 | 128 | BF16 | O2 | 150K |
| LLaMA-2-13B-Chat (frozen) | CLIP-L-336px (frozen) | MLP Layers (trainable) | LoRA | 8 | 1 | 4 | 32 | BF16 | O2 | 150K |
| LLaMA-2-70B-Chat (frozen) | CLIP-L-336px (frozen) | MLP Layers (trainable) | LoRA | 8 | 1 | 1 | 128 | BF16 | O2 | 150K |
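The batch-size columns above are linked by the parallelism settings: the accumulated global batch size equals the per-GPU micro-batch size times the number of data-parallel replicas times the gradient-accumulation steps. A minimal sketch of that arithmetic, assuming a single 8-GPU node (the node count is an assumption, not stated per-row in the table):

```python
# Sketch: how the table's parallelism settings relate to global batch size.
# Assumes all GPUs sit on one 8-GPU node; adjust num_gpus for multi-node runs.
def grad_accum_steps(global_batch: int, micro_batch: int,
                     num_gpus: int, tp: int, pp: int) -> int:
    """Gradient-accumulation steps implied by a (TP, PP, batch) configuration."""
    dp = num_gpus // (tp * pp)  # data-parallel replicas
    assert global_batch % (micro_batch * dp) == 0, "global batch must divide evenly"
    return global_batch // (micro_batch * dp)

# 7B row: TP=4, PP=1, micro-batch 4, global batch 128 on one 8-GPU node
print(grad_accum_steps(128, 4, 8, 4, 1))  # → 16
```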

To enable the fine-tuning stage with a NeVA model, update the configuration files as follows:

  1. In the defaults section of conf/config.yaml, update the peft field to point to the desired NeVA configuration file. For example, if you want to fine-tune a pretrained NeVA model based on LLaMA-2-7B-Chat (i.e. llama2_7b_chat) configuration, change the peft field to neva/llama2_7b_chat.


    defaults:
      - peft: neva/llama2_7b_chat
      ...

  2. In the stages field of conf/config.yaml, make sure the peft stage is included. For example,


    stages:
      - peft
      ...

  3. Execute the launcher pipeline: python3
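Putting the three steps together, a launch might look like the following sketch. The entry-point script name (main.py) and the command-line override syntax are assumptions, not taken from this guide; check your launcher repository for the actual entry point:

```shell
# Hypothetical invocation; "main.py" and the Hydra-style overrides are
# assumptions -- verify against your launcher version before running.
python3 main.py \
    peft=neva/llama2_7b_chat \
    stages=[peft]
```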


  1. Prior to initiating your PEFT run, ensure you have prepared all necessary datasets and checkpoints.

  2. To load a pretrained checkpoint for PEFT, set the restore_from_path field in the model section to the path of the pretrained checkpoint in .nemo format. By default, this field links to the .nemo format checkpoint located in the training checkpoints folder.

  3. PEFT-tuned checkpoints save only the LoRA weights rather than the entire model. For subsequent inference and evaluation, both the base model checkpoint and the LoRA weights are required.

  4. If you are training with Vicuna v1.5 language model checkpoints, you can use the same model-size configuration as LLaMA-2 Chat, since the two are structurally identical. For instance, when using the Vicuna v1.5 7B model, simply opt for the llama2_7b_chat configuration. You only need to set peft.model.mm_cfg.llm.model_type=v1 and …
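The remarks above concern a handful of fields in the selected NeVA PEFT configuration. A hypothetical fragment of such a file is sketched below; the nesting follows the field paths named in the notes (model.restore_from_path, model.mm_cfg.llm.model_type) and has not been verified against the shipped conf/peft/neva files:

```yaml
# Hypothetical fragment of a conf/peft/neva/*.yaml configuration.
# Field names follow the remarks above; verify against your launcher version.
model:
  restore_from_path: /path/to/pretrained_neva.nemo  # pretrained checkpoint in .nemo format
  mm_cfg:
    llm:
      model_type: v1  # set only when fine-tuning Vicuna v1.5 checkpoints
```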

Last updated on Jun 19, 2024.