nat.plugins.langchain.eval.tunable_rag_evaluator#
Attributes#

- logger

Classes#

- TunableRagEvaluatorConfig: Configuration for tunable RAG evaluator.
- TunableRagEvaluator: Tunable RAG evaluator with customizable judge prompt.

Functions#

- evaluation_prompt: Generate a prompt for the judge LLM.
- runnable_with_retries: Wrap a runnable with retry controls.
- register_tunable_rag_evaluator: Register tunable RAG evaluator.
Module Contents#
- logger#
- class TunableRagEvaluatorConfig#
Bases: nat.data_models.evaluator.EvaluatorBaseConfig

Configuration for tunable RAG evaluator.
- llm_name: nat.data_models.component_ref.LLMRef = None#
- evaluation_prompt(
- judge_llm_prompt: str,
- question: str,
- answer_description: str,
- generated_answer: str,
- format_instructions: str,
- default_scoring: bool,
)#

Generate a prompt for the judge LLM.
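The documentation does not show the template that evaluation_prompt uses. As a rough, hypothetical sketch of how a judge prompt could be assembled from the documented parameters (the section texts and ordering below are assumptions, not the module's actual template):

```python
# Hypothetical sketch only: the real template used by evaluation_prompt()
# in nat.plugins.langchain.eval.tunable_rag_evaluator may differ.
def build_judge_prompt(judge_llm_prompt: str,
                       question: str,
                       answer_description: str,
                       generated_answer: str,
                       format_instructions: str,
                       default_scoring: bool) -> str:
    sections = [judge_llm_prompt]
    if default_scoring:
        # With default scoring enabled, ask the judge for per-criterion scores.
        sections.append("Score the answer on coverage, correctness and relevance.")
    sections += [
        f"Question: {question}",
        f"Expected answer description: {answer_description}",
        f"Generated answer: {generated_answer}",
        format_instructions,
    ]
    return "\n\n".join(sections)
```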
- runnable_with_retries(
- original_fn: collections.abc.Callable,
- llm_retry_control_params: dict | None = None,
)#

Wrap a runnable with retry controls.
- class TunableRagEvaluator(
- llm: langchain_core.language_models.BaseChatModel,
- judge_llm_prompt: str,
- llm_retry_control_params: dict | None,
- max_concurrency: int,
- default_scoring: bool,
- default_score_weights: dict,
)#

Bases: nat.plugins.eval.evaluator.base_evaluator.BaseEvaluator

Tunable RAG evaluator with customizable judge prompt.
- llm#
- judge_llm_prompt#
- llm_retry_control_params#
- default_scoring#
- default_score_weights#
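How default_score_weights combines per-criterion judge scores is not spelled out on this page. One plausible reading is a weighted average; the criterion key names below are assumptions for illustration only:

```python
# Hypothetical illustration of combining per-criterion judge scores with
# default_score_weights; the key names and the averaging scheme are
# assumptions, not taken from the module.
def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    total_weight = sum(weights.get(k, 0.0) for k in scores)
    if total_weight == 0.0:
        return 0.0
    return sum(s * weights.get(k, 0.0) for k, s in scores.items()) / total_weight

# Example weights and per-criterion scores (illustrative values).
default_score_weights = {"coverage": 0.5, "correctness": 0.3, "relevance": 0.2}
combined = weighted_score(
    {"coverage": 1.0, "correctness": 0.5, "relevance": 0.0},
    default_score_weights,
)
```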
- async _evaluate_item_core() → nat.plugins.eval.data_models.evaluator_io.EvalOutputItem#
- async evaluate_item() → nat.plugins.eval.data_models.evaluator_io.EvalOutputItem#
Each evaluator must implement this for item-level evaluation.
- async evaluate_atif_item() → nat.plugins.eval.data_models.evaluator_io.EvalOutputItem#
- async evaluate_atif_fn(
- atif_samples: nat.plugins.eval.evaluator.atif_evaluator.AtifEvalSampleList,
)#
- async register_tunable_rag_evaluator(
- config: TunableRagEvaluatorConfig,
- builder: nat.builder.builder.EvalBuilder,
)#

Register tunable RAG evaluator.