nat.experimental.test_time_compute.editing.llm_as_a_judge_editor#
Attributes#

- logger

Classes#

- LLMAsAJudgeEditor: Given a list of PlanningItems, uses a feedback LLM to generate feedback on each plan, then edits the plan based on feedback.

Functions#

- register_llm_as_a_judge_editor: Register the LLMAsAJudgeEditor strategy with the provided configuration and builder.
Module Contents#
- logger#
- class LLMAsAJudgeEditor#

  Bases: nat.experimental.test_time_compute.models.strategy_base.StrategyBase

  Given a list of PlanningItems, uses a feedback LLM to generate feedback on each plan, then edits the plan based on feedback.
- feedback_llm = None#
- editing_llm = None#
- async build_components(builder: nat.builder.builder.Builder) → None#
Build the components required for the editor.
- supported_pipeline_types() → list[nat.experimental.test_time_compute.models.stage_enums.PipelineTypeEnum]#
Return the pipeline types supported by this editor.
- stage_type() → nat.experimental.test_time_compute.models.stage_enums.StageTypeEnum#
Return the stage type of this strategy.
- async generate_feedback(llm, template, context: str, prompt: str, item: nat.experimental.test_time_compute.models.ttc_item.TTCItem)#
Helper function to generate feedback for a given planning item using the provided prompt.
- async edit_plan(llm, template, context: str, prompt: str, item: nat.experimental.test_time_compute.models.ttc_item.TTCItem)#
Helper function to edit a plan based on feedback using the provided prompt.
- async ainvoke(items: list[nat.experimental.test_time_compute.models.ttc_item.TTCItem], original_prompt: str | None = None, agent_context: str | None = None, **kwargs) → list[nat.experimental.test_time_compute.models.ttc_item.TTCItem]#
Edit the provided planning items using a feedback LLM.
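The two-stage flow behind ainvoke (generate feedback for each item, then edit the plan using that feedback) can be sketched as below. This is a minimal, self-contained illustration of the pattern only: PlanItem, judge, and edit are hypothetical stand-ins for nat's TTCItem and the feedback/editing LLM calls, not the actual nat API.

```python
import asyncio
from dataclasses import dataclass

@dataclass
class PlanItem:
    # Hypothetical stand-in for nat's TTCItem; fields are illustrative.
    plan: str
    feedback: str = ""

async def judge(plan: str) -> str:
    # Stub feedback LLM: a real strategy would call a chat model here.
    return f"Feedback on: {plan}"

async def edit(plan: str, feedback: str) -> str:
    # Stub editing LLM: rewrites the plan using the judge's feedback.
    return f"{plan} (revised per: {feedback})"

async def ainvoke(items: list[PlanItem]) -> list[PlanItem]:
    # Stage 1: generate feedback for every plan concurrently.
    feedback = await asyncio.gather(*(judge(i.plan) for i in items))
    # Stage 2: edit each plan based on its feedback.
    revised = await asyncio.gather(
        *(edit(i.plan, fb) for i, fb in zip(items, feedback))
    )
    return [PlanItem(plan=p, feedback=fb) for p, fb in zip(revised, feedback)]

items = asyncio.run(ainvoke([PlanItem("step 1"), PlanItem("step 2")]))
print(items[0].plan)
```

Running the feedback calls with asyncio.gather keeps the per-item LLM requests concurrent, which is why both helper methods above are async.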
- async register_llm_as_a_judge_editor(config: nat.data_models.ttc_strategy.TTCStrategyBaseConfig, builder: nat.builder.builder.Builder)#
Register the LLMAsAJudgeEditor strategy with the provided configuration and builder.
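The registration function above pairs a strategy config type with a factory that builds the strategy from a builder. A generic sketch of that config-to-factory registry pattern is below; STRATEGY_REGISTRY, EditorConfig, and register_strategy are hypothetical names for illustration and do not reflect the real nat registration machinery.

```python
import asyncio

# Hypothetical registry mapping a config class to its factory coroutine.
STRATEGY_REGISTRY: dict[type, object] = {}

class EditorConfig:
    # Hypothetical config class standing in for TTCStrategyBaseConfig.
    name = "llm_as_a_judge_editor"

def register_strategy(config_cls):
    # Decorator that records the factory under its config type.
    def decorator(factory):
        STRATEGY_REGISTRY[config_cls] = factory
        return factory
    return decorator

@register_strategy(EditorConfig)
async def register_llm_as_a_judge_editor(config, builder):
    # A real implementation would construct the editor and build its
    # components (feedback and editing LLMs) from the builder.
    return {"strategy": config.name, "builder": builder}

factory = STRATEGY_REGISTRY[EditorConfig]
result = asyncio.run(factory(EditorConfig(), builder="builder"))
print(result["strategy"])
```

Keying the registry by config type lets a workflow file name a strategy config and have the framework look up and await the matching factory.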