nat.eval.tunable_rag_evaluator.evaluate#
Attributes#

| logger | |

Classes#

| TunableRagEvaluator | Tunable RAG evaluator class with customizable LLM prompt for scoring. |

Functions#

| evaluation_prompt | This function generates a prompt for the judge LLM to evaluate the generated answer. |
Module Contents#
- logger#
- evaluation_prompt(
  judge_llm_prompt: str,
  question: str,
  answer_description: str,
  generated_answer: str,
  format_instructions: str,
  default_scoring: bool,
  )#

  This function generates a prompt for the judge LLM to evaluate the generated answer.
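The signature above suggests the judge prompt is assembled from the custom `judge_llm_prompt` plus the per-item fields. A minimal illustrative sketch of that assembly (an assumption about the structure, not the actual NAT implementation):

```python
def evaluation_prompt(judge_llm_prompt: str,
                      question: str,
                      answer_description: str,
                      generated_answer: str,
                      format_instructions: str,
                      default_scoring: bool) -> str:
    """Assemble a judge-LLM prompt from its parts (illustrative only)."""
    # Hypothetical default-scoring instruction; the real wording is library-defined.
    scoring_note = ("Score coverage, correctness, and relevance from 0.0 to 1.0."
                    if default_scoring else "")
    return "\n\n".join(part for part in (
        judge_llm_prompt,
        f"Question: {question}",
        f"Expected answer description: {answer_description}",
        f"Generated answer: {generated_answer}",
        scoring_note,
        format_instructions,
    ) if part)
```

When `default_scoring` is false, the sketch simply omits the built-in scoring note, leaving the criteria entirely to the caller's `judge_llm_prompt`.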
- class TunableRagEvaluator(
  llm: langchain_core.language_models.BaseChatModel,
  judge_llm_prompt: str,
  llm_retry_control_params: dict | None,
  max_concurrency: int,
  default_scoring: bool,
  default_score_weights: dict,
  )#

  Bases: nat.eval.evaluator.base_evaluator.BaseEvaluator

  Tunable RAG evaluator class with customizable LLM prompt for scoring.
- llm#
- judge_llm_prompt#
- llm_retry_control_params#
- default_scoring#
- default_score_weights#
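The `default_score_weights` attribute suggests that, under default scoring, per-criterion judge scores are combined into a single weighted score. A hedged sketch of that combination (the criterion names and weights here are illustrative, not taken from the library):

```python
def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-criterion judge scores (illustrative).

    Criteria missing from `weights` fall back to a weight of 1.0.
    """
    total = sum(weights.get(name, 1.0) for name in scores)
    return sum(weights.get(name, 1.0) * s for name, s in scores.items()) / total

scores = {"coverage": 0.8, "correctness": 1.0, "relevance": 0.6}
weights = {"coverage": 0.25, "correctness": 0.5, "relevance": 0.25}
weighted_score(scores, weights)  # → 0.85
```

Tuning the weights shifts the final score toward the criteria the evaluation cares about most, which is the "tunable" part of the evaluator's name.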
- async evaluate_item( ) → nat.eval.evaluator.evaluator_model.EvalOutputItem#

  Compute the RAG evaluation for an individual item and return an EvalOutputItem.
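Since the constructor takes `max_concurrency`, individual items are presumably judged concurrently under a bound. One common way to sketch that pattern with `asyncio` (the `judge` coroutine below is a stand-in for a judge-LLM call, not the library's API):

```python
import asyncio

async def judge(item: str) -> float:
    # Stand-in for an LLM judge call; returns a dummy score.
    await asyncio.sleep(0)
    return float(len(item))

async def evaluate_all(items: list[str], max_concurrency: int) -> list[float]:
    """Judge all items, allowing at most `max_concurrency` in flight at once."""
    sem = asyncio.Semaphore(max_concurrency)

    async def one(item: str) -> float:
        async with sem:
            return await judge(item)

    return list(await asyncio.gather(*(one(i) for i in items)))
```

The semaphore keeps the number of concurrent judge calls bounded while `asyncio.gather` preserves the input order of results.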