nat.plugins.eval.runtime_evaluator.register#
Classes#
- `AverageLLMLatencyConfig`: Mean difference between connected LLM_START and LLM_END events (same UUID).
- `AverageWorkflowRuntimeConfig`: Average workflow runtime per item (max timestamp - min timestamp).
- `AverageNumberOfLLMCallsConfig`: Average number of LLM calls per item (count of LLM_END).
- `AverageTokensPerLLMEndConfig`: Average total tokens per LLM_END event (prompt + completion if available).
Functions#
- `register_avg_llm_latency_evaluator`
- `register_avg_workflow_runtime_evaluator`
- `register_avg_num_llm_calls_evaluator`
- `register_avg_tokens_per_llm_end_evaluator`
Module Contents#
- class AverageLLMLatencyConfig#

  Bases: `nat.data_models.evaluator.EvaluatorBaseConfig`

  Mean difference between connected LLM_START and LLM_END events (same UUID).
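The metric behind this config can be illustrated with a small self-contained sketch: pair each LLM_START with the LLM_END that shares its UUID, then average the time deltas. The tuple-based event shape below is hypothetical, chosen only to demonstrate the pairing logic, not the toolkit's actual event model.

```python
from statistics import mean

def mean_llm_latency(events):
    """Average (LLM_END - LLM_START) delta over events paired by UUID.

    `events` is a list of (event_type, uuid, timestamp) tuples; this
    shape is illustrative, not the toolkit's real event representation.
    """
    # Record the start timestamp for each UUID.
    starts = {u: t for kind, u, t in events if kind == "LLM_START"}
    # Compute a delta only for LLM_END events with a matching start.
    deltas = [t - starts[u] for kind, u, t in events
              if kind == "LLM_END" and u in starts]
    return mean(deltas) if deltas else 0.0

events = [
    ("LLM_START", "a", 0.0), ("LLM_END", "a", 1.5),
    ("LLM_START", "b", 2.0), ("LLM_END", "b", 4.5),
]
# Deltas are 1.5 and 2.5, so the mean latency is 2.0.
```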
- class AverageWorkflowRuntimeConfig#

  Bases: `nat.data_models.evaluator.EvaluatorBaseConfig`

  Average workflow runtime per item (max timestamp - min timestamp).
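A minimal sketch of this metric, assuming a hypothetical mapping of item id to event timestamps (not the toolkit's real data structures): each item's runtime is the span from its earliest to its latest timestamp, and the metric averages those spans.

```python
def avg_workflow_runtime(items):
    """Mean of per-item runtimes, where an item's runtime is
    max(timestamps) - min(timestamps).

    `items` maps item id -> list of event timestamps (illustrative shape).
    """
    runtimes = [max(ts) - min(ts) for ts in items.values() if ts]
    return sum(runtimes) / len(runtimes) if runtimes else 0.0

items = {
    "q1": [0.0, 1.0, 3.0],     # runtime 3.0
    "q2": [10.0, 10.5, 11.0],  # runtime 1.0
}
```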
- class AverageNumberOfLLMCallsConfig#

  Bases: `nat.data_models.evaluator.EvaluatorBaseConfig`

  Average number of LLM calls per item (count of LLM_END).
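This metric counts completed LLM calls (LLM_END events) per item and averages across items. A sketch under an assumed item-to-event-kinds mapping (illustrative only):

```python
def avg_llm_calls(items):
    """Average LLM_END count per item.

    `items` maps item id -> list of event type strings (illustrative shape).
    """
    counts = [sum(1 for kind in kinds if kind == "LLM_END")
              for kinds in items.values()]
    return sum(counts) / len(counts) if counts else 0.0

items = {
    "q1": ["LLM_START", "LLM_END", "LLM_START", "LLM_END"],  # 2 calls
    "q2": ["LLM_START", "LLM_END"],                           # 1 call
}
```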
- class AverageTokensPerLLMEndConfig#

  Bases: `nat.data_models.evaluator.EvaluatorBaseConfig`

  Average total tokens per LLM_END event (prompt + completion if available).
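The "if available" wording suggests missing token counts are tolerated. A sketch of that behavior, using hypothetical per-event dicts with `prompt_tokens`/`completion_tokens` keys (these names are assumptions, not the toolkit's schema): absent counts simply contribute zero.

```python
def avg_tokens_per_llm_end(llm_end_events):
    """Average of (prompt + completion) tokens over LLM_END events.

    Each event is a dict; missing token fields count as 0. The dict
    shape and key names are illustrative assumptions.
    """
    totals = [e.get("prompt_tokens", 0) + e.get("completion_tokens", 0)
              for e in llm_end_events]
    return sum(totals) / len(totals) if totals else 0.0

events = [
    {"prompt_tokens": 100, "completion_tokens": 20},  # 120 total
    {"prompt_tokens": 50},  # completion count unavailable -> 50 total
]
```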
- async register_avg_llm_latency_evaluator(config: AverageLLMLatencyConfig, builder: nat.builder.builder.EvalBuilder)
- async register_avg_workflow_runtime_evaluator(config: AverageWorkflowRuntimeConfig, builder: nat.builder.builder.EvalBuilder)
- async register_avg_num_llm_calls_evaluator(config: AverageNumberOfLLMCallsConfig, builder: nat.builder.builder.EvalBuilder)
- async register_avg_tokens_per_llm_end_evaluator(config: AverageTokensPerLLMEndConfig, builder: nat.builder.builder.EvalBuilder)