nat.plugins.eval.exporters.file_eval_callback#
File-based eval callback that writes evaluation output to local files.
Attributes#
Classes#
FileEvalCallback | Eval callback that persists evaluation artifacts to the local filesystem. |
Module Contents#
- logger#
- class FileEvalCallback#
Eval callback that persists evaluation artifacts to the local filesystem.
This replaces the direct file I/O previously embedded in EvaluationRun, making file output opt-in and enabling eval as a clean Python API.
- workflow_output_file: pathlib.Path | None = None#
- atif_workflow_output_file: pathlib.Path | None = None#
- evaluator_output_files: list[pathlib.Path] = []#
- config_original_file: pathlib.Path | None = None#
- config_effective_file: pathlib.Path | None = None#
- config_metadata_file: pathlib.Path | None = None#
- on_eval_complete(result: nat.eval.eval_callbacks.EvalResult) → None#
Write evaluation artifacts to result.output_dir.
- _write_configuration(result: nat.eval.eval_callbacks.EvalResult, output_dir: pathlib.Path)#
Save original config, effective config, and run metadata.
- static _build_run_metadata(run_config: Any) → dict[str, Any]#
Assemble the metadata dict from an EvaluationRunConfig.
- _write_workflow_output(result: nat.eval.eval_callbacks.EvalResult, output_dir: pathlib.Path)#
Write the serialized workflow output JSON.
- _write_evaluator_outputs(result: nat.eval.eval_callbacks.EvalResult, output_dir: pathlib.Path)#
Write per-evaluator result files.