Important
You are viewing the NeMo 2.0 documentation. This release introduces significant changes to the API and a new library, NeMo Run. We are currently porting all features from NeMo 1.0 to 2.0. For documentation on previous versions or features not yet available in 2.0, please refer to the NeMo 24.07 documentation.
Evaluation
For the Vision Transformer, our evaluation script processes the ImageNet 1K validation folder and computes the final validation accuracy.
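Conceptually, the top-1 validation accuracy the script reports is just the fraction of images whose highest-scoring class matches the ground-truth label. A minimal illustrative sketch (the function below is for explanation only and is not the actual NeMo evaluation code):

```python
def top1_accuracy(logits, labels):
    """Fraction of samples whose argmax prediction matches the label."""
    correct = sum(
        1 for scores, label in zip(logits, labels)
        if max(range(len(scores)), key=scores.__getitem__) == label
    )
    return correct / len(labels)

# Toy example: 3 samples, 4 classes; the first two predictions are correct.
logits = [
    [0.1, 0.7, 0.1, 0.1],  # argmax = class 1
    [0.9, 0.0, 0.0, 0.1],  # argmax = class 0
    [0.2, 0.2, 0.5, 0.1],  # argmax = class 2
]
labels = [1, 0, 3]
print(top1_accuracy(logits, labels))  # 2 of 3 correct -> 0.666...
```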
To enable the evaluation stage for a ViT model, update the configuration files as follows:

1. In the `defaults` section of `conf/config.yaml`, update the `evaluation` field to point to the desired ViT configuration file. For example, to use the `vit/imagenet_val` configuration, change the `evaluation` field to `vit/imagenet_val`:

   ```yaml
   defaults:
     - evaluation: vit/imagenet_val
     ...
   ```

2. In the `stages` field of `conf/config.yaml`, make sure the `evaluation` stage is included. For example:

   ```yaml
   stages:
     - evaluation
     ...
   ```

3. Set the `imagenet_val` field of `conf/evaluation/vit/imagenet_val.yaml` to the path of the ImageNet 1K validation folder.

4. Execute the launcher pipeline: `python3 main.py`.
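Before launching, it can be worth sanity-checking that the path you set for `imagenet_val` actually looks like an ImageNet validation folder, i.e. one subdirectory per class. The helper below is an illustrative standalone check, not part of the launcher:

```python
import os

def looks_like_imagenet_val(path, expected_classes=1000):
    """Heuristic check: an ImageNet-style validation folder should
    contain one subdirectory per class (1000 for ImageNet 1K)."""
    if not os.path.isdir(path):
        return False
    class_dirs = [d for d in os.listdir(path)
                  if os.path.isdir(os.path.join(path, d))]
    return len(class_dirs) == expected_classes
```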
Remarks:

- To load a pretrained checkpoint for inference, set the `restore_from_path` field in the `model` section of `conf/evaluation/vit/imagenet_val.yaml` to the path of the pretrained checkpoint in `.nemo` format. By default, this field points to the `.nemo` checkpoint located in the ImageNet 1K fine-tuning checkpoints folder.
- We highly recommend using the same precision (`trainer.precision`) for evaluation as was used during training.
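As a concrete illustration, the relevant fields in `conf/evaluation/vit/imagenet_val.yaml` would be filled in along these lines. The paths below are placeholders, and the exact schema may differ between launcher versions:

```yaml
model:
  # Placeholder path: point this at your fine-tuned .nemo checkpoint.
  restore_from_path: /path/to/checkpoints/vit_finetuned.nemo

trainer:
  # Match the precision used during training.
  precision: bf16

# Placeholder path: the ImageNet 1K validation folder.
imagenet_val: /path/to/imagenet_1k/val
```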