# Test With `nat_test_llm` for NVIDIA NeMo Agent Toolkit
Use `nat_test_llm` to quickly validate workflows during development and CI. It returns deterministic, cycling responses and avoids real API calls. It is not intended for production use.
## Prerequisites
Install the testing plugin package:

```bash
uv pip install nvidia-nat-test
```
## Minimal YAML
The following YAML config defines a testing LLM and a simple `chat_completion` workflow that uses it:

```yaml
llms:
  main:
    _type: nat_test_llm
    response_seq: [alpha, beta, gamma]
    delay_ms: 0

workflow:
  _type: chat_completion
  llm_name: main
  system_prompt: "Say only the answer."
```

Save this as `config.yml`.
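If you also want to exercise latency or timeout handling, `delay_ms` looks like the knob to reach for. The variant below is a sketch that assumes `delay_ms` adds an artificial per-response delay in milliseconds; check the plugin reference for the exact semantics.

```yaml
llms:
  slow:
    _type: nat_test_llm
    response_seq: [alpha, beta, gamma]
    delay_ms: 250  # assumed: artificial delay per response, in milliseconds
```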
## Run from the CLI
```bash
nat run --config_file config.yml --input "What is 1 + 2?"
```

You should see a response corresponding to the first item in `response_seq` (for example, `alpha`). Repeated runs will cycle through the sequence (`alpha`, `beta`, `gamma`, then repeat).
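For example, repeating the command should walk through the seeded responses; the comments below are illustrative, and the exact output formatting depends on the workflow:

```bash
nat run --config_file config.yml --input "What is 1 + 2?"   # -> alpha
nat run --config_file config.yml --input "What is 2 + 2?"   # -> beta
nat run --config_file config.yml --input "What is 3 + 2?"   # -> gamma
```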
## Run programmatically
```python
import asyncio
from nat.runtime.loader import load_workflow

async def main():
    # Load the workflow from config.yml and send a single input through it.
    async with load_workflow("config.yml") as workflow:
        async with workflow.run("What is 1 + 2?") as runner:
            result = await runner.result()
            print(result)

asyncio.run(main())
```
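Because the responses are deterministic, this pattern drops naturally into CI. Below is a minimal sketch, assuming `pytest` and `pytest-asyncio` are installed; the test name and assertion are illustrative.

```python
import pytest
from nat.runtime.loader import load_workflow

@pytest.mark.asyncio
async def test_chat_completion_with_test_llm():
    # The testing LLM replays response_seq, so the run never hits a real API.
    async with load_workflow("config.yml") as workflow:
        async with workflow.run("What is 1 + 2?") as runner:
            result = await runner.result()
    assert result  # expect a non-empty response seeded from response_seq
```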
## Notes
- `nat_test_llm` is for development and CI only. Do not use it in production.
- To implement your own provider, see: Adding an LLM Provider.
- For more about configuring LLMs, see: LLMs.