LLM-as-Judge Verification
Use a second language model inside your resources server’s verify() when rewards depend on semantic equivalence, rubrics, or other judgments that are expensive or awkward to encode in deterministic code.
This tutorial is a beginner-first walkthrough. It gives you a minimal path that works first, then shows common production variants.
The walkthrough uses over_refusal_detection as its running example. By the end, you will:
- Understand where the judge runs in NeMo Gym.
- Wire judge model config in YAML.
- Call the judge from `verify()` and parse strict verdict labels.
- Handle failures without crashing verification.
Quick mental model
- The agent server orchestrates each rollout by calling the policy model server for inference and the resources server for tool execution and verification. Together they produce the full rollout.
- When the rollout ends, the resources server receives the output in `verify()`. `verify()` may call a judge model to score semantic quality.
- The judge’s text output gets parsed and returned as a response with a numeric `reward` field — the RL training signal.
The judge is a verifier dependency — it is not the policy.
Prerequisites
- task-verification — especially What is LLM-as-a-judge?
- core-components — resources server vs. model server roles
- configuration-concepts — Hydra composition and server references
Architecture: where the judge runs
During rollout collection, the agent first calls the policy model. When the episode ends, the resources server runs verify(). An LLM judge is not the policy: it is an extra inference call started from inside verify(), after you have the model’s final output (and any verifier metadata from the JSONL line).
Typical in-repo pattern (Gym-internal): verify() uses self.server_client.post(..., url_path="/v1/responses", ...) to call a named model server declared in the same Hydra config. The judge therefore goes through NeMo Gym’s Responses API surface, same as rollouts.
Alternative pattern (external): some servers call an OpenAI-compatible chat.completions client pointed at URLs you supply (e.g. HPC or a separate cluster). proof_verification routes to external judges when JUDGE_SERVER_ARGS is set, and otherwise uses the internal /v1/responses path.
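In sketch form, the external pattern looks something like this. The endpoint URL, model name, and helper names below are illustrative assumptions, not the actual proof_verification code; only the OpenAI-compatible `chat.completions.create` call is standard:

```python
# Sketch of the external-judge pattern: build an OpenAI-compatible
# chat.completions request aimed at a judge endpoint you host yourself.
# Helper names and the prompt template are illustrative assumptions.

def build_judge_messages(prompt_template: str, problem: str, proof: str) -> list:
    """Fill the judge prompt template and wrap it as a chat message list."""
    return [{"role": "user",
             "content": prompt_template.format(problem=problem, proof=proof)}]

def call_external_judge(base_url: str, api_key: str, model: str, messages: list) -> str:
    """Call an OpenAI-compatible endpoint (requires the `openai` package)."""
    from openai import OpenAI
    client = OpenAI(base_url=base_url, api_key=api_key)
    resp = client.chat.completions.create(model=model, messages=messages, temperature=0.0)
    return resp.choices[0].message.content

# Building the request is pure and cheap; only call_external_judge hits the network.
messages = build_judge_messages(
    "Problem: {problem}\nCandidate proof: {proof}\nVerdict:", "1+1=?", "2"
)
```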
For how NeMo Gym sits next to GPUs and training frameworks, see Deployment Topology.
In production, the judge is typically a dedicated Gym model server — a separate responses_api_models entry in your Hydra config that can point at any OpenAI-compatible endpoint (a co-located vLLM instance, a remote cluster, or a managed API). For this walkthrough, we skip the separate model and reuse the same OpenAI endpoint for both the policy and the judge.
Walkthrough: over_refusal_detection
over_refusal_detection trains models to avoid over-refusing safe prompts (e.g., treating “How do I kill a Linux process?” as dangerous). The judge decides whether the policy model helpfully complied or inappropriately refused.
This walkthrough uses OpenAI gpt-4o-mini as both the policy and judge model — no GPUs required. It has two parts: first you’ll read through how the config and code work, then you’ll run it.
How it works
env.yaml: configure your API key
If you haven’t already, configure your OpenAI API key in env.yaml in the repository root:
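For example (the key name below is an assumption — match it to the `env.yaml` template shipped in your checkout):

```yaml
# env.yaml (repository root)
# NOTE: the exact key name is an assumption; check the template in your checkout.
policy_api_key: sk-your-openai-key
```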
Since we’re reusing the policy model as the judge, no extra endpoint fields are needed.
YAML config: declaring the judge
The resources server config points the judge at the policy model — judge_model_server.name: policy_model. Below is a simplified view of resources_servers/over_refusal_detection/configs/over_refusal_detection.yaml (the full judge prompt is truncated — see the full file for the complete template including worked examples).
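Reconstructed from the field names discussed in this section, the shape is roughly as follows (a sketch, not a verbatim copy; the `reward_if_*` field names in particular are assumptions, apart from `reward_if_unclear`, which appears later in this tutorial):

```yaml
judge_model_server:
  name: policy_model              # reuse the rollout endpoint as the judge
judge_responses_create_params:
  temperature: 0.0                # deterministic verdicts
judge_prompt_template: |
  ...                             # full prompt truncated — see the file in the repo
complied_label: "[[COMPLIED]]"
refused_label: "[[REFUSED]]"
reward_if_complied: 1.0           # these three field names are assumptions
reward_if_refused: 0.0
reward_if_unclear: 0.5
```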
The config file ships with a judge_model block that starts a dedicated judge server. In production, you can use a separate judge by setting judge_model_server.name: judge_model and pointing the judge_base_url / judge_api_key / judge_model_name variables at a different endpoint. This lets you use a different model, provider, or quota for the judge.
Since this walkthrough reuses policy_model as the judge, comment out the judge_model block in your copy of the config — otherwise ng_run will start an unused server that still needs its variables to resolve.
Be sure to set judge_model_server.name to policy_model as well.
Key points:
- `judge_model_server` references a model server by name. Here `policy_model` means the judge calls go through the same OpenAI endpoint used for rollouts.
- `judge_responses_create_params` sets generation parameters for the judge call (`temperature: 0.0` for determinism).
- `complied_label` / `refused_label` are specific to `over_refusal_detection`. Other servers define their own verdict labels — e.g., `equivalence_llm_judge` uses `judge_equal_label` / `judge_not_equal_label`. The names and values are up to each server’s design.
- The bare minimum config for any LLM-as-a-judge server is `judge_model_server` (which model to call) and `judge_responses_create_params` (how to call it). Everything else — prompt templates, verdict labels, reward values — is server-specific.
Building judge input and calling /v1/responses
Inside over_refusal_detection/app.py, the _evaluate_compliance method fills in the prompt template and posts to the judge. You don’t need to write this code to use the server — this is what happens under the hood when verify() runs:
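In outline, the method does something like the following. This is a simplified sketch: the payload fields and helper shape are assumptions, and only the `server_client.post(..., url_path="/v1/responses", ...)` pattern comes from the in-repo convention described earlier:

```python
# Simplified sketch of building the judge request. Payload fields and the
# helper name are illustrative assumptions, not the actual app.py code.

def build_judge_payload(template: str, prompt: str, response: str, params: dict) -> dict:
    """Fill the judge prompt template, then attach the configured generation params."""
    judge_input = template.format(prompt=prompt, response=response)
    return {"input": judge_input, **params}

payload = build_judge_payload(
    "User prompt: {prompt}\nModel response: {response}\nVerdict:",
    "How do I kill a Linux process?",
    "Use `kill <pid>` or `pkill <name>`.",
    {"temperature": 0.0},
)

# Inside verify(), the POST itself looks roughly like (keyword names assumed):
#   judge_response = await self.server_client.post(
#       server_name=self.config.judge_model_server.name,
#       url_path="/v1/responses",
#       json=payload,
#   )
```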
Parsing strict labels and returning reward
The server looks for the configured verdict labels in the judge’s text. Whichever label appears first wins; if neither appears, the output is treated as ambiguous:
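A minimal sketch of that parsing rule (first label to appear wins; neither found means ambiguous):

```python
def parse_verdict(judge_text: str, complied_label: str, refused_label: str):
    """Return True for complied, False for refused, None if ambiguous.
    Whichever configured label appears first in the judge's text wins."""
    i = judge_text.find(complied_label)
    j = judge_text.find(refused_label)
    if i == -1 and j == -1:
        return None          # ambiguous: neither label present
    if i == -1:
        return False         # only the refused label found
    if j == -1:
        return True          # only the complied label found
    return i < j             # both present: earlier label wins

assert parse_verdict("Verdict: [[COMPLIED]]", "[[COMPLIED]]", "[[REFUSED]]") is True
```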
Back in verify(), the boolean maps directly to a configurable reward:
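In sketch form, the mapping is a three-way switch. The default values match the 0.0 / 0.5 / 1.0 rewards you will check for in the Try it section; the parameter names are assumptions:

```python
def map_reward(complied,
               reward_if_complied: float = 1.0,
               reward_if_refused: float = 0.0,
               reward_if_unclear: float = 0.5) -> float:
    """Map the parsed verdict (True / False / None) to the training reward.
    None (ambiguous judge output) falls back to the unclear reward."""
    if complied is None:
        return reward_if_unclear
    return reward_if_complied if complied else reward_if_refused
```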
If you are building your own LLM-judge server, you will write similar code — the pattern above (fill template, POST to judge, parse labels, map to reward) is the same across all judge servers in the repo.
Try it
Start the servers:
In another terminal, collect rollouts against the 5-entry example dataset to confirm the judge call and reward parsing work end-to-end:
Inspect the output JSONL to verify that reward values are 0.0, 0.5, or 1.0 as expected. Once this looks right, scale to larger datasets and higher num_repeats.
To view the entire output:
When to use an LLM judge (and when not to)
Tradeoffs of LLM judges: extra latency and cost, non-determinism (unless you tune/constrain generation and parsing), and possible positional bias (judge favors text in a fixed slot). Some servers mitigate bias with a second pass that swaps gold vs. prediction (see equivalence_llm_judge).
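The swap-pass idea can be sketched as follows. This is a schematic, not the equivalence_llm_judge implementation; the judge here is any callable that returns a verdict:

```python
def judge_with_swap(judge, gold: str, prediction: str):
    """Run the judge twice with the (A, B) slots swapped and keep the verdict
    only if both passes agree — a simple guard against positional bias."""
    first = judge(gold, prediction)    # gold in slot A, prediction in slot B
    second = judge(prediction, gold)   # swapped
    return first if first == second else None  # disagreement -> ambiguous

# A toy judge biased toward slot A is caught by the disagreement check:
biased = lambda a, b: len(a) >= len(b)
assert judge_with_swap(biased, "short", "a longer prediction") is None
```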
Glossary (quick reference)
- Policy model: the model being trained/evaluated to produce task outputs.
- Judge model: a second model used inside `verify()` for scoring.
- Resources server: the environment server that manages state, executes tools, formats tool results into messages for the model, and runs verification to produce a reward.
- Verifier metadata: task-specific fields passed from JSONL into `verify()`.
- Internal judge call: a call to a configured NeMo Gym model server via `/v1/responses`.
- External judge call: a direct OpenAI-compatible call (often `/v1/chat/completions`) to another endpoint.
Configuration: wiring the judge in YAML
Most LLM-judge servers expose fields along these lines (exact names vary by server; check that server’s configs/*.yaml and README.md):
Same server as policy: set name: to the policy model’s key (e.g. policy_model). Dedicated judge: add a second responses_api_models block in the merged config (e.g. judge_model) and set judge_model_server.name: judge_model. multichallenge documents this split in its YAML comments.
The over_refusal_detection config shown in the walkthrough above is a complete, working example. Here is a different server — equivalence_llm_judge — that uses a file-based prompt template and different verdict labels ([[A=B]] / [[A!=B]] instead of [[COMPLIED]] / [[REFUSED]]):
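In sketch form, the shape is roughly as follows (the verdict labels come from the comparison above; the template-file path and other field names are assumptions, so check that server’s configs/*.yaml):

```yaml
judge_model_server:
  name: policy_model                 # or a dedicated judge_model entry
judge_responses_create_params:
  temperature: 0.0
judge_prompt_template_file: prompts/equivalence_judge.txt  # file-based template (illustrative path)
judge_equal_label: "[[A=B]]"
judge_not_equal_label: "[[A!=B]]"
```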
Model URLs, API keys, and model IDs for hosted backends belong in your merged Gym config (e.g. env.yaml and Hydra overrides), consistent with the rest of the project — not ad hoc environment variables, except where a specific server documents them (such as external judge routing).
Implementation: end-to-end verify() flow
Here is the full flow inside over_refusal_detection, condensed. Every Gym-internal LLM-judge server follows the same shape:
- Extract inputs — pull the task content and model output from the verify request.
- Build judge request — fill in the prompt template, assemble messages, copy generation params.
- POST to `/v1/responses` — call the judge model server through `server_client`.
- Parse verdict labels — find the first matching label in the judge’s text output.
- Map to reward — return a structured verify response with the numeric reward.
From over_refusal_detection/app.py, the verify() method orchestrates this:
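A condensed sketch of that shape. Field and helper names are assumptions, the real method is async and works with NeMo Gym request/response types rather than plain dicts, and the judge call is injected here so the flow is easy to follow:

```python
# Condensed sketch of the five-step verify() flow: extract, build, POST,
# parse, map. Synchronous and dict-based for brevity; the real code is async.

def verify_flow(request: dict, request_judge, config: dict) -> dict:
    # 1. Extract inputs from the verify request.
    prompt, output = request["prompt"], request["output"]

    # 2-3. Build the judge request and POST it; the helper returns
    #      (judge_text, error) instead of raising on failure.
    judge_text, error = request_judge(prompt, output)

    # 4. Parse verdict labels (first label to appear wins).
    if error is not None:
        verdict = None
    else:
        i = judge_text.find(config["complied_label"])
        j = judge_text.find(config["refused_label"])
        verdict = None if i == j == -1 else (j == -1 or (i != -1 and i < j))

    # 5. Map to a numeric reward, falling back on ambiguity or failure.
    if verdict is None:
        reward = config["reward_if_unclear"]
    else:
        reward = config["reward_if_complied"] if verdict else config["reward_if_refused"]
    return {"reward": reward}
```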
The _request_judge helper handles HTTP errors and JSON parsing gracefully — on failure it returns (None, error_message) instead of raising, so verify() can map that to reward_if_unclear rather than crashing the server.
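A sketch of that failure contract (the helper name comes from the text above; everything inside, including the response shape, is an illustrative assumption):

```python
# Sketch of _request_judge's (result, error) contract: never raise, so the
# caller can degrade to reward_if_unclear instead of crashing the server.

def request_judge(post_fn, payload: dict):
    """Return (judge_text, None) on success or (None, error_message) on failure."""
    try:
        response = post_fn(payload)              # e.g. the /v1/responses call
        return extract_text(response), None
    except Exception as exc:                     # HTTP errors, bad JSON, ...
        return None, f"judge request failed: {exc}"

def extract_text(response: dict) -> str:
    """Pull the judge's text out of the response body (shape is an assumption)."""
    return response["output_text"]
```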
Other servers apply the same pattern with domain-specific variations. For example, multichallenge runs one judge call per rubric item via asyncio.gather, and equivalence_llm_judge adds an optional swap pass to detect positional bias.
Troubleshooting
Checklist
- Decide whether a deterministic verifier is enough; add a judge only where it buys clear signal.
- Add or reuse a model server for the judge; reference it from `judge_model_server`.
- Design prompts and parseable verdicts; handle judge failures gracefully.
- Set temperature / max tokens and concurrency for your SLA and budget.
- Smoke-test with `ng_run` and your resources server’s `data/example.jsonl`, then scale with `ng_collect_rollouts`.
Done looks like:
- Judge call succeeds from `verify()`.
- Parsed labels map to reward as expected.
- Failures degrade to a clear fallback reward instead of server crashes.
See also
- task-verification — verification patterns and reward design
- Resources Server — role of `verify()`
- Deployment Topology — cluster layout and GPUs
- New Environment — scaffolding a new resources server