Training with Predefined Configurations


NVIDIA provides three curated configurations, each with recommended hyperparameters, designed for the NVIDIA DGX SuperPOD, whose nodes are each equipped with eight NVIDIA A100 80GB GPUs. The configuration files for these curated models are located in the conf/training/video_neva directory.
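
The three configurations map to YAML files under that directory. Based on the configuration names used later in this section, the layout is roughly as follows (file names are inferred, so verify them in your checkout):

    conf/training/video_neva/
      llama2_7b_chat.yaml
      llama2_13b_chat.yaml
      llama2_70b_chat.yaml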

You can access, modify, and fine-tune the hyperparameters for your specific training runs. By customizing these settings, you can optimize the model’s performance and training efficiency to better align with your requirements.

Language Model            | Vision Encoder        | Multimodal Connector Type | Tensor Model Parallel Size | Pipeline Model Parallel Size | Batch Size per GPU | Accumulated Global Batch Size | Precision | AMP Level | Total Training Samples Seen
LLaMA-2-7B-Chat (frozen)  | CLIP-L-336px (frozen) | MLP Layers (trainable)    | 4                          | 1                            | 8                  | 256                           | BF16      | O2        | 550K
LLaMA-2-13B-Chat (frozen) | CLIP-L-336px (frozen) | MLP Layers (trainable)    | 8                          | 1                            | 8                  | 256                           | BF16      | O2        | 550K
LLaMA-2-70B-Chat (frozen) | CLIP-L-336px (frozen) | MLP Layers (trainable)    | 8                          | 1                            | 2                  | 256                           | BF16      | O2        | 550K
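
For example, the values in the table correspond to fields in conf/training/video_neva/llama2_7b_chat.yaml that you can adjust for your own runs. The excerpt below is only an illustrative sketch following the usual NeMo Framework Launcher training layout; check the exact field names and values against the shipped configuration file.

    # Illustrative sketch only - verify field names against the shipped YAML.
    trainer:
      precision: bf16                   # BF16, as in the table above
    model:
      micro_batch_size: 8               # batch size per GPU
      global_batch_size: 256            # accumulated global batch size
      tensor_model_parallel_size: 4     # 4 for the 7B configuration
      pipeline_model_parallel_size: 1   # pipeline model parallel size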

To enable the training stage using a VideoNeVA model, follow these configuration steps.

  1. Navigate to the defaults section in conf/config.yaml.

  2. Update the training field to reference the desired Video NeVA configuration file. For instance, to use the LLaMA-2-7B-Chat (i.e., llama2_7b_chat) configuration, set the training field to video_neva/llama2_7b_chat.

    defaults:
      - _self_
      - cluster: bcm
      - data_preparation: null
      - training: video_neva/llama2_7b_chat
      ...

  3. Within the stages field of conf/config.yaml, ensure the training stage is listed.

    stages:
      - training
      ...

  4. Execute the launcher pipeline: python3 main.py

Remarks:

  1. Before initiating training, ensure that all required datasets and checkpoints have been prepared.

  2. Before starting the training, set the correct paths for the dataset and checkpoints in video_neva/llama2_{model_size}_chat.yaml (see the sketch after this list).

  3. If you are training with Vicuna v1.5 language model checkpoints, you can reuse the Llama2 Chat configuration of the same model size, since the two share a similar structure. For example, for the Vicuna v1.5 7B model, choose the llama2_7b_chat configuration; the only adjustment needed is to set training.model.mm_cfg.llm.model_type=v1 (also shown in the sketch after this list).
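
As a rough illustration of remarks 2 and 3, the dataset and checkpoint paths, as well as the Vicuna model type, are set under the model section of the training YAML. The field names below follow a NeVA-style configuration layout and the paths are placeholders; verify both against the shipped video_neva/llama2_{model_size}_chat.yaml before use.

    # Assumed NeVA-style layout; all paths below are placeholders.
    model:
      mm_cfg:
        llm:
          from_pretrained: /path/to/llama2-7b-chat.nemo   # language model checkpoint
          model_type: llama_2                             # set to v1 for Vicuna v1.5
        vision_encoder:
          from_pretrained: /path/to/clip-vit-l-336px      # vision encoder checkpoint
      data:
        data_path: /path/to/train_annotations.json        # training annotations
        video_folder: /path/to/videos                     # directory containing the videos

These values can also be overridden on the launcher command line using Hydra dotted syntax, as remark 3 does with training.model.mm_cfg.llm.model_type=v1.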
