Important

NeMo 2.0 is an experimental feature and currently released in the dev container only: nvcr.io/nvidia/nemo:dev. Please refer to NeMo 2.0 overview for information on getting started.

Model Evaluation

NVIDIA provides a simple tool to help evaluate trained checkpoints. You can evaluate the capabilities of the Falcon model on the following zero-shot downstream evaluation tasks:

  • lambada, boolq, race, piqa, hellaswag, winogrande, wikitext2, wikitext103

Fine-tuned Falcon models can be evaluated on the following tasks:

  • squad

Run Evaluation

To run evaluation, update conf/config.yaml:

defaults:
  - evaluation: falcon/evaluate_all.yaml

stages:
  - evaluation

Execute the launcher pipeline: python3 main.py.

Configuration

Default configurations for evaluation can be found in conf/evaluation/falcon/evaluate_all.yaml.

run:
    name: ${.eval_name}_${.model_train_name}
    time_limit: "4:00:00"
    nodes: ${divide_ceil:${evaluation.model.model_parallel_size}, 8} # 8 gpus per node
    ntasks_per_node: ${divide_ceil:${evaluation.model.model_parallel_size}, ${.nodes}}
    eval_name: eval_all
    model_train_name: falcon_7b
    train_dir: ${base_results_dir}/${.model_train_name}
    tasks: all_tasks
    results_dir: ${base_results_dir}/${.model_train_name}/${.eval_name}

tasks sets the evaluation task to execute. Supported tasks include: lambada, boolq, race, piqa, hellaswag, winogrande, wikitext2, wikitext103, all_tasks. all_tasks executes all supported evaluation tasks.
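The semantics of the tasks key can be sketched in Python. This is an illustrative model of the behavior described above, not the launcher's actual code:

```python
# Zero-shot tasks supported by the Falcon evaluation config (from the list above).
SUPPORTED_TASKS = [
    "lambada", "boolq", "race", "piqa",
    "hellaswag", "winogrande", "wikitext2", "wikitext103",
]

def expand_tasks(tasks: str) -> list:
    """Expand the tasks setting into the list of tasks to run."""
    if tasks == "all_tasks":
        return SUPPORTED_TASKS
    if tasks not in SUPPORTED_TASKS:
        raise ValueError("unsupported task: %s" % tasks)
    return [tasks]

print(expand_tasks("lambada"))    # ["lambada"]
print(expand_tasks("all_tasks"))  # all eight supported tasks
```
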

model:
    model_type: nemo-falcon
    nemo_model: null # path to the .nemo file produced when converting interleaved checkpoints
    tensor_model_parallel_size: 1
    pipeline_model_parallel_size: 1
    model_parallel_size: ${multiply:${.tensor_model_parallel_size}, ${.pipeline_model_parallel_size}}
    precision: bf16 # must match training precision - 32, 16 or bf16
    eval_batch_size: 4

nemo_model sets the path to the .nemo checkpoint on which to run evaluation.
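The nodes and ntasks_per_node values in the run section are computed from this model_parallel_size via the config's multiply and divide_ceil interpolations. A minimal sketch of the same arithmetic, assuming 8 GPUs per node as the config comment states:

```python
import math

def plan_resources(tensor_mp, pipeline_mp, gpus_per_node=8):
    """Mirror the config's multiply/divide_ceil interpolations."""
    model_parallel_size = tensor_mp * pipeline_mp           # ${multiply:...}
    nodes = math.ceil(model_parallel_size / gpus_per_node)  # ${divide_ceil:..., 8}
    ntasks_per_node = math.ceil(model_parallel_size / nodes)
    return model_parallel_size, nodes, ntasks_per_node

# Defaults from evaluate_all.yaml: TP=1, PP=1 -> one task on one node.
print(plan_resources(1, 1))  # (1, 1, 1)
# A larger parallel layout, e.g. TP=4, PP=2 -> 8-way parallel, one full node.
print(plan_resources(4, 2))  # (8, 1, 8)
```
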

Run Evaluation on PEFT Falcon models

To run evaluation on PEFT Falcon models, update conf/config.yaml:

defaults:
  - evaluation: peft_falcon/squad.yaml

stages:
  - evaluation

Execute the launcher pipeline: python3 main.py.

Configuration

Default configurations for PEFT Falcon evaluation can be found in conf/evaluation/peft_falcon/squad.yaml.

run:
  name: eval_${.task_name}_${.model_train_name}
  time_limit: "01:00:00"
  dependency: "singleton"
  convert_name: convert_nemo
  model_train_name: falcon_7b
  task_name: "squad"  # SQuAD v1.1
  convert_dir: ${base_results_dir}/${.model_train_name}/${.convert_name}
  fine_tuning_dir: ${base_results_dir}/${.model_train_name}/peft_${.task_name}
  results_dir: ${base_results_dir}/${.model_train_name}/peft_${.task_name}_eval

Set PEFT specific configurations:

peft:
  peft_scheme: "lora"  # can be either lora or ptuning
  restore_from_path: ${evaluation.run.fine_tuning_dir}/${.peft_scheme}/megatron_falcon_peft_tuning-${.peft_scheme}/checkpoints/megatron_falcon_peft_tuning-${.peft_scheme}.nemo

peft_scheme sets the scheme used during fine-tuning.

restore_from_path sets the path to the PEFT checkpoint on which to run evaluation.
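With the default values above, the interpolations in restore_from_path resolve as in this sketch. Plain Python string formatting stands in for the config's interpolation, and base_results_dir is assumed to be /results purely for illustration:

```python
base_results_dir = "/results"  # assumed value for illustration
model_train_name = "falcon_7b"
task_name = "squad"
peft_scheme = "lora"           # or "ptuning"

# ${base_results_dir}/${.model_train_name}/peft_${.task_name}
fine_tuning_dir = "%s/%s/peft_%s" % (base_results_dir, model_train_name, task_name)

# ${evaluation.run.fine_tuning_dir}/${.peft_scheme}/megatron_falcon_peft_tuning-...
restore_from_path = (
    "%s/%s/megatron_falcon_peft_tuning-%s/checkpoints/"
    "megatron_falcon_peft_tuning-%s.nemo"
    % (fine_tuning_dir, peft_scheme, peft_scheme, peft_scheme)
)
print(restore_from_path)
# /results/falcon_7b/peft_squad/lora/megatron_falcon_peft_tuning-lora/checkpoints/megatron_falcon_peft_tuning-lora.nemo
```
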