About Evaluating#
NVIDIA NeMo Evaluator supports evaluation of LLMs through academic benchmarks, custom automated evaluations, and LLM-as-a-Judge. Beyond LLM evaluation, NeMo Evaluator also supports evaluation of Retriever and RAG pipelines.
Typical NeMo Evaluator Workflow#
A typical NeMo Evaluator workflow looks like the following (hedged code sketches for the main steps appear after the list):
Note: NeMo Evaluator depends on NVIDIA NIM for LLMs and NeMo Data Store.
1. (Optional) If you are using a custom dataset for evaluation, upload it to NeMo Data Store before you run an evaluation.
2. Create an evaluation target in NeMo Evaluator.
3. Create an evaluation configuration in NeMo Evaluator.
4. Run an evaluation job by submitting a request to NeMo Evaluator.
   - NeMo Evaluator downloads custom data, if any, from NeMo Data Store.
   - NeMo Evaluator runs inference with NIM for LLMs, Embeddings, and Reranking, depending on the model being evaluated.
   - NeMo Evaluator writes the results, including generations, logs, and metrics, to NeMo Data Store.
   - NeMo Evaluator returns the results.
5. Get your results.
For more information, see Run and Manage Evaluation Jobs.
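For the optional custom-dataset step, the following is a minimal sketch that assumes your NeMo Data Store deployment exposes a Hugging Face-compatible endpoint under `/v1/hf`. The URL, token, repository name, and folder path are illustrative placeholders; substitute the values for your own deployment and dataset.

```python
from huggingface_hub import HfApi

DATASTORE_URL = "http://nemo-data-store.example.com/v1/hf"  # assumed HF-compatible endpoint
REPO_ID = "default/my-custom-eval-dataset"                  # illustrative namespace/name

# Some Data Store deployments do not validate the token; use real credentials if yours does.
api = HfApi(endpoint=DATASTORE_URL, token="mock-token")

# Create the dataset repository if it does not exist yet.
api.create_repo(repo_id=REPO_ID, repo_type="dataset", exist_ok=True)

# Upload a local folder containing the evaluation files
# (for example, input prompts and reference answers).
api.upload_folder(
    repo_id=REPO_ID,
    repo_type="dataset",
    folder_path="./my_custom_eval_data",
)
```

After the upload, reference the dataset repository from your evaluation configuration so that NeMo Evaluator can download the files from NeMo Data Store when the job runs.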
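For creating the target and the configuration, here is a hedged sketch against the Evaluator REST API. The base URL, endpoint paths, and payload fields (the NIM endpoint, model name, benchmark identifier, and parameters) are illustrative assumptions; consult the NeMo Evaluator API reference for the exact schema.

```python
import requests

EVALUATOR_URL = "http://nemo-evaluator.example.com"  # assumed base URL for your deployment

# Create an evaluation target that points at the NIM for LLMs endpoint serving the model.
target = requests.post(
    f"{EVALUATOR_URL}/v1/evaluation/targets",
    json={
        "type": "model",
        "model": {
            "api_endpoint": {
                "url": "http://nim-llm.example.com/v1/completions",  # illustrative NIM endpoint
                "model_id": "meta/llama-3.1-8b-instruct",            # illustrative model name
            }
        },
    },
).json()

# Create an evaluation configuration, for example an academic benchmark with a sample limit.
config = requests.post(
    f"{EVALUATOR_URL}/v1/evaluation/configs",
    json={
        "type": "gsm8k",                   # illustrative benchmark identifier
        "params": {"limit_samples": 100},  # illustrative parameter
    },
).json()

print(target["id"], config["id"])
```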
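Continuing the same sketch, a job pairs the target with the configuration. The job endpoint, status values, polling loop, and results path are again assumptions to verify against the API reference; the IDs are placeholders for the values returned by the previous requests.

```python
import time
import requests

EVALUATOR_URL = "http://nemo-evaluator.example.com"  # assumed base URL for your deployment
TARGET_ID = "eval-target-123"   # placeholder: ID returned when you created the target
CONFIG_ID = "eval-config-456"   # placeholder: ID returned when you created the configuration

# Submit an evaluation job that pairs the target with the configuration.
job = requests.post(
    f"{EVALUATOR_URL}/v1/evaluation/jobs",
    json={"target": TARGET_ID, "config": CONFIG_ID},
).json()

# Poll until the job reaches a terminal state; the status values are illustrative.
while job.get("status") not in ("completed", "failed"):
    time.sleep(30)
    job = requests.get(f"{EVALUATOR_URL}/v1/evaluation/jobs/{job['id']}").json()

# Retrieve the results: metrics plus pointers to the generations and logs
# that NeMo Evaluator wrote to NeMo Data Store.
results = requests.get(f"{EVALUATOR_URL}/v1/evaluation/jobs/{job['id']}/results").json()
print(results)
```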
Task Guides#
The following guides provide detailed information on how to perform common NeMo Evaluator tasks.
- Create targets for evaluations.
- Create configurations for evaluations.
- Create and run evaluation jobs.
- Get the results of your evaluation jobs.
Tutorials#
The following tutorials provide step-by-step instructions to complete specific evaluation goals.
- Learn how to run an evaluation.
- Learn how to evaluate a fine-tuned model.
Reference#
The following documentation provides detailed information about the Evaluator API.
- View the NeMo Evaluator API reference.
- Troubleshoot issues that arise when you work with NeMo Evaluator.