Training with Predefined Configurations

NVIDIA provides three configurations with suggested hyperparameters, tuned for the NVIDIA DGX SuperPOD, whose nodes are each equipped with 8 NVIDIA A100 80GB GPUs. The configurations for the curated models can be found in the conf/training/neva directory. You can access and modify the parameters to adjust the hyperparameters for your specific training runs. By customizing these settings, you can tailor the model's performance and training efficiency to your needs and requirements.

| Language Model | Vision Encoder | Multimodal Connector Type | Tensor Model Parallel Size | Pipeline Model Parallel Size | Batch Size per GPU | Accumulated Global Batch Size | Precision | AMP Level | Total Training Samples Seen |
|---|---|---|---|---|---|---|---|---|---|
| LLaMA-2-7B-Chat (frozen) | CLIP-L-336px (frozen) | MLP Layers (trainable) | 4 | 1 | 32 | 256 | BF16 | O2 | 550K |
| LLaMA-2-13B-Chat (frozen) | CLIP-L-336px (frozen) | MLP Layers (trainable) | 8 | 1 | 32 | 256 | BF16 | O2 | 550K |
| LLaMA-2-70B-Chat (frozen) | CLIP-L-336px (frozen) | MLP Layers (trainable) | 8 | 1 | 8 | 256 | BF16 | O2 | 550K |
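As a rough illustration of how the table maps onto the YAML files, the 7B row might correspond to settings along the following lines. The key names below follow common NeMo launcher conventions and are assumptions, not verbatim contents; consult the shipped conf/training/neva/llama2_7b_chat.yaml for the authoritative fields.

```yaml
# Illustrative sketch only -- key names are assumed, not copied from
# the actual conf/training/neva/llama2_7b_chat.yaml.
model:
  precision: bf16                   # Precision: BF16
  megatron_amp_O2: true             # AMP Level: O2
  tensor_model_parallel_size: 4     # Tensor Model Parallel Size
  pipeline_model_parallel_size: 1   # Pipeline Model Parallel Size
  micro_batch_size: 32              # Batch Size per GPU
  global_batch_size: 256            # Accumulated Global Batch Size
```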

To enable the training stage using a NeVA model, follow these configuration steps:

  1. In the defaults section of conf/config.yaml, update the training field to reference the desired NeVA configuration file. For instance, to use the LLaMA-2-7B-Chat (i.e., llama2_7b_chat) configuration, set the training field to neva/llama2_7b_chat.


    defaults:
      - _self_
      - cluster: bcm
      - data_preparation: null
      - training: neva/llama2_7b_chat
      ...

  2. Within the stages field of conf/config.yaml, ensure the training stage is listed.


    stages:
      - training
      ...

  3. Execute the launcher pipeline: python3 main.py
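Because the launcher is Hydra-based, steps 1 and 2 can also be expressed as command-line overrides instead of editing conf/config.yaml. A sketch of such an invocation (the override syntax is standard Hydra; the group and field names are taken from the steps above):

```shell
python3 main.py training=neva/llama2_7b_chat 'stages=[training]'
```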

Remarks:

  1. Before initiating training, make sure you have prepared all necessary datasets and checkpoints.

  2. Before starting the training, set the correct dataset and checkpoint paths in neva/llama2_{model_size}_chat.yaml.

  3. If you are training with Vicuna v1.5 language model checkpoints, you can reuse the Llama2 Chat configuration of the same model size, since the two are structurally identical. For instance, for the Vicuna v1.5 7B model, simply select the llama2_7b_chat configuration. You only need to additionally set: training.model.mm_cfg.llm.model_type=v1
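For example, following remark 3, a Vicuna v1.5 7B run reuses the 7B configuration with a single extra override. This invocation is a sketch assembled from the steps and overrides stated above:

```shell
# Vicuna v1.5 7B: reuse the llama2_7b_chat configuration and
# switch the language model type to v1, per remark 3.
python3 main.py \
    training=neva/llama2_7b_chat \
    training.model.mm_cfg.llm.model_type=v1
```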

© Copyright 2023-2024, NVIDIA. Last updated on Feb 22, 2024.