nat.plugins.ragas.rag_evaluator.llm_adapter#

Classes#

NatLangChainRagasLLMAdapter

Expose a NAT-managed LangChain LLM through ragas' native LLM contract.

Module Contents#

class NatLangChainRagasLLMAdapter(
langchain_llm: object,
llm_name: str | None = None,
)#

Bases: ragas.llms.base.InstructorBaseRagasLLM

Expose a NAT-managed LangChain LLM through ragas' native LLM contract.

Why this adapter exists instead of a new LLMFrameworkEnum entry:

  • Framework enums model agent/runtime ecosystems (LangChain, LlamaIndex, etc.).

  • ragas' InstructorBaseRagasLLM is a library-specific scoring interface, not a workflow framework.

  • Keeping the adaptation local avoids expanding the global builder/registry surface area for a ragas-only concern, while preserving the front-facing LLM configuration model.

_langchain_llm#
_llm_name = None#
_llm_context() → str#
static _coerce_output(
result: object,
response_model: type[ragas.llms.base.InstructorTypeVar],
) → ragas.llms.base.InstructorTypeVar#
_structured_llm(
response_model: type[ragas.llms.base.InstructorTypeVar],
) → object#
generate(
prompt: str,
response_model: type[ragas.llms.base.InstructorTypeVar],
) → ragas.llms.base.InstructorTypeVar#
async agenerate(
prompt: str,
response_model: type[ragas.llms.base.InstructorTypeVar],
) → ragas.llms.base.InstructorTypeVar#