Important
You are viewing the NeMo 2.0 documentation. This release introduces significant changes to the API and a new library, NeMo Run. We are currently porting all features from NeMo 1.0 to 2.0. For documentation on previous versions or features not yet available in 2.0, please refer to the NeMo 24.07 documentation.
Evaluation#
For the Vision Transformer, our evaluation script processes the ImageNet 1K validation folder and computes the final validation accuracy.
To enable the evaluation stage for a ViT model, update the configuration files as follows:
In the `defaults` section of `conf/config.yaml`, update the `evaluation` field to point to the desired ViT configuration file. For example, to use the `vit/imagenet_val` configuration, change the `evaluation` field to `vit/imagenet_val`:

```yaml
defaults:
  - evaluation: vit/imagenet_val
  ...
```
In the `stages` field of `conf/config.yaml`, make sure the `evaluation` stage is included. For example:

```yaml
stages:
  - evaluation
  ...
```
Configure the `imagenet_val` field of `conf/evaluation/vit/imagenet_val.yaml` to point to the ImageNet 1K validation folder.

Execute the launcher pipeline:

```shell
python3 main.py
```
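As a minimal sketch, the relevant line in `conf/evaluation/vit/imagenet_val.yaml` might look like the following; the directory shown is a placeholder, not a real default, so substitute your own path:

```yaml
# Hypothetical example: replace with the actual path to your
# ImageNet 1K validation folder on the local filesystem.
imagenet_val: /data/imagenet_1k/val
```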
Remarks:
To load a pretrained checkpoint for inference, set the `restore_from_path` field in the `model` section of `conf/evaluation/vit/imagenet_val.yaml` to the path of the pretrained checkpoint in `.nemo` format. By default, this field points to the `.nemo` checkpoint located in the ImageNet 1K fine-tuning checkpoints folder.

We highly recommend using the same precision for evaluation (i.e., `trainer.precision`) as was used during training.
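A minimal sketch of these two settings in `conf/evaluation/vit/imagenet_val.yaml`; the checkpoint path and precision value below are illustrative assumptions, not launcher defaults:

```yaml
model:
  # Hypothetical path to a fine-tuned ViT checkpoint in .nemo format
  restore_from_path: /results/vit/checkpoints/vit_finetuned.nemo

trainer:
  # Match the precision that was used during training
  precision: bf16-mixed
```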