Single-Step Environment


Build a complete environment end-to-end, from scaffolding to RL-ready rollouts.

Goal: Build a weather assistant environment with tool calling and verification.

Time: ~30 minutes | Cost: ~$0.05 (OpenAI API)

In this tutorial, you will:

  1. Scaffold a resource server and its paired agent configuration
  2. Prepare task data in JSONL format
  3. Implement a tool endpoint and verification logic (the reward function)
  4. Write unit tests for your tool and verify methods
  5. Run the servers, validate with a client, and collect rollouts

Prerequisites

Complete the Getting Started tutorial first.

Run all commands from the repository root directory (where pyproject.toml is located).


How It Works

NeMo Gym uses a decoupled three-component architecture: the Agent Server orchestrates the loop, the Model Server runs inference, and the Resources Server provides tools and verification. All three are async FastAPI servers communicating over HTTP, which allows many rollouts to run concurrently across episodes. See Environment Components for the full architecture and diagram.

In most cases, the Resources Server is where your changes go: define your tool endpoints and a verify() method that returns a reward. NeMo Gym ships several pre-built agent servers (simple_agent, swe_agents, etc.) and model servers (openai_model, vllm_model) that you can use as-is, or you can bring your own.


1. Scaffolding

Resource servers live in the resources_servers/ directory. Scaffold a weather server that provides weather information to models:

$ ng_init_resources_server +entrypoint=resources_servers/my_weather_tool

This generates the following structure along with a paired simple agent configuration:

resources_servers/my_weather_tool/
+-- app.py # Main server implementation
+-- configs/
| +-- my_weather_tool.yaml # Configuration files
+-- data/
| +-- .gitignore # Data directory for examples/datasets
+-- tests/
| +-- test_app.py # Unit tests
+-- requirements.txt # Python dependencies
+-- README.md # Documentation

2. Task Preparation

Understanding the task is the first step in designing the environment itself.

Every environment starts with task data — the scenarios your model will practice on. Task data is stored in JSONL format (one JSON object per line), where each line represents a single training example. To get started, it's common for a domain expert to hand-craft a few examples from scratch. Once the environment is developed and tested with these examples, you can scale up by collecting more data or generating synthetic data with libraries like NeMo Data Designer.

JSONL Format

Each line contains a responses_create_params object with the conversation messages, tool definitions, and any ground-truth metadata needed for verification:

{
  "responses_create_params": {
    "input": [
      {"role": "system", "content": "You are a helpful weather assistant."},
      {"role": "user", "content": "What's the weather in San Francisco?"}
    ],
    "tools": [
      {
        "type": "function",
        "name": "get_weather",
        "description": "Get weather for a city.",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string", "description": "City name"}},
          "required": ["city"],
          "additionalProperties": false
        },
        "strict": true
      }
    ],
    "parallel_tool_calls": false
  }
}
| Field | Description |
| --- | --- |
| responses_create_params | OpenAI Responses API-compatible input |
| responses_create_params.input | Conversation messages (system, user, assistant) |
| responses_create_params.tools | Available tools/functions for the agent |
| responses_create_params.parallel_tool_calls | Whether the model may call multiple tools simultaneously. Set to false to force sequential tool calls — useful when tool outputs depend on each other. |

Create Data

Create resources_servers/my_weather_tool/data/example.jsonl with five weather examples:

{"responses_create_params": {"input": [{"role": "user", "content": "What's the weather in San Francisco?"}], "tools": [{"type": "function", "name": "get_weather", "description": "Get weather for a city.", "parameters": {"type": "object", "properties": {"city": {"type": "string"}}, "required": ["city"], "additionalProperties": false}, "strict": true}]}}
{"responses_create_params": {"input": [{"role": "user", "content": "Tell me the weather in New York"}], "tools": [{"type": "function", "name": "get_weather", "description": "Get weather for a city.", "parameters": {"type": "object", "properties": {"city": {"type": "string"}}, "required": ["city"], "additionalProperties": false}, "strict": true}]}}
{"responses_create_params": {"input": [{"role": "user", "content": "How's the weather in Seattle?"}], "tools": [{"type": "function", "name": "get_weather", "description": "Get weather for a city.", "parameters": {"type": "object", "properties": {"city": {"type": "string"}}, "required": ["city"], "additionalProperties": false}, "strict": true}]}}
{"responses_create_params": {"input": [{"role": "user", "content": "What is the current weather in Boston?"}], "tools": [{"type": "function", "name": "get_weather", "description": "Get weather for a city.", "parameters": {"type": "object", "properties": {"city": {"type": "string"}}, "required": ["city"], "additionalProperties": false}, "strict": true}]}}
{"responses_create_params": {"input": [{"role": "user", "content": "Can you check the weather in Chicago?"}], "tools": [{"type": "function", "name": "get_weather", "description": "Get weather for a city.", "parameters": {"type": "object", "properties": {"city": {"type": "string"}}, "required": ["city"], "additionalProperties": false}, "strict": true}]}}
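Rather than typing these lines by hand, you can generate and sanity-check them with a short script. This is a minimal sketch using only the standard library; the tool schema matches the `get_weather` tool used throughout this tutorial:

```python
import json

# Tool schema shared by every example (same as in the JSONL above).
GET_WEATHER_TOOL = {
    "type": "function",
    "name": "get_weather",
    "description": "Get weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
        "additionalProperties": False,
    },
    "strict": True,
}

QUESTIONS = [
    "What's the weather in San Francisco?",
    "Tell me the weather in New York",
    "How's the weather in Seattle?",
    "What is the current weather in Boston?",
    "Can you check the weather in Chicago?",
]

def make_line(question: str) -> str:
    """Serialize one task as a single JSONL line."""
    return json.dumps({
        "responses_create_params": {
            "input": [{"role": "user", "content": question}],
            "tools": [GET_WEATHER_TOOL],
        }
    })

lines = [make_line(q) for q in QUESTIONS]

# Sanity-check: every line must round-trip and carry the required keys.
for line in lines:
    params = json.loads(line)["responses_create_params"]
    assert params["input"] and params["tools"][0]["name"] == "get_weather"

# To write the file:
# with open("resources_servers/my_weather_tool/data/example.jsonl", "w") as f:
#     f.write("\n".join(lines) + "\n")
```

The same pattern scales naturally once you have a larger pool of questions or move to synthetic data generation.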

3. Environment Design

This section covers the key pieces of the environment itself: choosing an Agent Server (built-in or custom), creating the Resources Server, and writing tool and verification logic.

3.1 Agent Server

Although this tutorial builds a single-step environment, it can still use the built-in simple_agent, which handles even multi-step tool calling out of the box. No custom agent code is needed. Here is simplified pseudocode showing the core flow (actual implementation):

# run() — episode lifecycle
async def run(self, request, body):
    await resources_server.seed_session(body)       # initialize env state
    response = await self.responses(body)           # multi-step agent loop
    return await resources_server.verify(response)  # compute reward

# responses() — multi-step tool loop
async def responses(self, body):
    while True:
        model_response = await model_server.responses(conversation)
        tool_calls = [o for o in model_response.output if o.type == "function_call"]

        if not tool_calls:  # model produced a final text response
            break

        for call in tool_calls:
            result = await resources_server.post(f"/{call.name}", call.arguments)
            conversation.append(result)

    return model_response

This tutorial uses simple_agent. For other patterns (multi-turn correction, custom orchestration), see the other agents in responses_api_agents/, or build your own by extending SimpleResponsesAPIAgent.

3.2 Resources Server

While the agent handles orchestration, the Resources Server is where you define what makes your environment unique. It is the backbone of tool-based interactions in NeMo Gym.

It provides:

  • Tool implementations — APIs that models can call
  • Verification logic — reward computation for RL
  • Session state — per-episode state management (for stateful environments)

Some agents may come with predefined tools, and you can use the Resources Server to supplement them with additional external tools. When building a new environment, prefer defining tools in the Resources Server rather than the Agent Server. This separation lets multiple agents share the same tool logic without duplicating it.

Open resources_servers/my_weather_tool/app.py and implement:

from fastapi import FastAPI
from pydantic import BaseModel

from nemo_gym.base_resources_server import (
    BaseResourcesServerConfig,
    BaseVerifyRequest,
    BaseVerifyResponse,
    SimpleResourcesServer,
)

# 1. Define the server configuration
class MyWeatherToolResourcesServerConfig(BaseResourcesServerConfig):
    """Configuration for the weather resource server."""

    pass

# 2. Define request and response schemas for your tools
class GetWeatherRequest(BaseModel):
    """Request schema for getting weather information."""

    city: str

class GetWeatherResponse(BaseModel):
    """Response schema for weather information."""

    city: str
    weather_description: str

# 3. Implement the resource server
class MyWeatherToolResourcesServer(SimpleResourcesServer):
    config: MyWeatherToolResourcesServerConfig

    def setup_webserver(self) -> FastAPI:
        """Register API routes."""
        app = super().setup_webserver()

        # Register your tool endpoints
        app.post("/get_weather")(self.get_weather)

        return app

    async def get_weather(self, body: GetWeatherRequest) -> GetWeatherResponse:
        """
        Tool implementation: Get weather for a city.

        In a production implementation, this would call a weather API.
        For this example, we return a simple static response.
        """
        return GetWeatherResponse(city=body.city, weather_description=f"The weather in {body.city} is cold.")

    async def verify(self, body: BaseVerifyRequest) -> BaseVerifyResponse:
        """Evaluate rollout and return a reward. See Verification Logic below."""
        ...

if __name__ == "__main__":
    MyWeatherToolResourcesServer.run_webserver()

Key Components

| Component | Purpose |
| --- | --- |
| Configuration Class | Extends BaseResourcesServerConfig; holds server-specific settings |
| Request/Response Schemas | Pydantic models defining the API contract |
| setup_webserver() | Registers FastAPI routes for your tools |
| Tool Methods | Async functions implementing tool logic |
| verify() | Required — evaluates task performance and returns a reward |

3.3 Verification Logic

The verify() function is the heart of your RL environment — it computes the reward signal that drives model training. In this example, verification is simple: return 1.0 if the model called the get_weather tool, 0.0 otherwise. Real environments will have more sophisticated logic, but the principle is the same — inspect the model’s output and score it.

async def verify(self, body: BaseVerifyRequest) -> BaseVerifyResponse:
    # Check if the model called the get_weather tool
    used_tool = False
    for output in body.response.output:
        if output.type == "function_call" and output.name == "get_weather":
            used_tool = True
            break

    # Reward 1.0 if the model called the tool, 0.0 otherwise
    reward = 1.0 if used_tool else 0.0
    return BaseVerifyResponse(**body.model_dump(), reward=reward)

This example checks tool usage, not argument correctness. Jump to Advanced: Verification Patterns at the end of this tutorial for more examples.
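If you also want to reward argument correctness, the same loop can parse `call.arguments`. Here is a framework-free sketch: plain dicts stand in for the response output items, and `expected_city` is a hypothetical ground-truth value you would carry in your task data, not a field NeMo Gym defines:

```python
import json

def tool_call_reward(output_items: list[dict], expected_city: str) -> float:
    """Score 1.0 only if get_weather was called with the expected city.

    output_items mirrors response.output as plain dicts; in a real
    verify() you would iterate over body.response.output instead.
    """
    for item in output_items:
        if item.get("type") == "function_call" and item.get("name") == "get_weather":
            try:
                args = json.loads(item.get("arguments", "{}"))
            except json.JSONDecodeError:
                return 0.0  # malformed arguments earn no reward
            return 1.0 if args.get("city") == expected_city else 0.0
    return 0.0  # tool never called
```

Guarding the `json.loads` matters: during RL, models frequently emit malformed argument strings, and verification should score them rather than crash.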

Configure: Wiring the Pieces Together

Open resources_servers/my_weather_tool/configs/my_weather_tool.yaml. This file contains both the resource server and its paired simple agent configuration.

Update the domain field from other to agent:

my_weather_tool_resources_server:
  resources_servers:
    my_weather_tool:
      entrypoint: app.py
      domain: agent  # Change from 'other' to match your use case
      verified: false
      description: Single-step weather tool calling
my_weather_tool_simple_agent:
  responses_api_agents:
    simple_agent:
      entrypoint: app.py
      resources_server:
        type: resources_servers
        name: my_weather_tool_resources_server
      model_server:
        type: responses_api_models
        name: policy_model
      datasets:
      - name: example
        type: example
        jsonl_fpath: resources_servers/my_weather_tool/data/example.jsonl
# The scaffold also generates train/validation dataset entries
# with gitlab_identifier blocks. Those are omitted here since
# we only have example data at this stage.

The domain field categorizes your resource server and is required. Common values: math, coding, agent, knowledge, instruction_following, long_context, safety, games, e2e, other.

The domain is used for metrics grouping and dataset naming. Choose the category that best describes your task.

The agent entry references the resource server and model server by name, wiring all three components together.


4. Add Dependencies (Optional)

If your server needs external packages, add them to requirements.txt:

-e nemo-gym[dev] @ ../../
# Add any other dependencies here

5. Write Tests

Update resources_servers/my_weather_tool/tests/test_app.py to test your implementation:

import pytest
from unittest.mock import MagicMock

from nemo_gym.server_utils import ServerClient
from resources_servers.my_weather_tool.app import (
    MyWeatherToolResourcesServer,
    MyWeatherToolResourcesServerConfig,
    GetWeatherRequest,
)

@pytest.fixture
def server():
    """Create a server instance for testing."""
    config = MyWeatherToolResourcesServerConfig(
        host="0.0.0.0",
        port=8080,
        entrypoint="",
        name="my_weather_tool",
    )
    return MyWeatherToolResourcesServer(config=config, server_client=MagicMock(spec=ServerClient))

@pytest.mark.asyncio
async def test_get_weather(server):
    """Test the get_weather tool."""
    request = GetWeatherRequest(city="San Francisco")
    response = await server.get_weather(request)

    assert response.city == "San Francisco"
    assert "cold" in response.weather_description.lower()

def make_verify_request(output):
    """Helper to build a BaseVerifyRequest with the given model output."""
    from nemo_gym.base_resources_server import BaseVerifyRequest
    from nemo_gym.openai_utils import NeMoGymResponse, NeMoGymResponseCreateParamsNonStreaming

    return BaseVerifyRequest(
        responses_create_params=NeMoGymResponseCreateParamsNonStreaming(
            input=[{"role": "user", "content": "What's the weather?"}]
        ),
        response=NeMoGymResponse(
            id="", object="response", created_at=0.0, model="",
            output=output, tool_choice="auto", tools=[], parallel_tool_calls=False,
        ),
    )

@pytest.mark.asyncio
async def test_verify_with_tool_call(server):
    """Reward 1.0 when the model called the tool."""
    request = make_verify_request([
        {"type": "function_call", "id": "c1", "call_id": "c1",
         "name": "get_weather", "arguments": '{"city": "San Francisco"}'},
    ])
    response = await server.verify(request)
    assert response.reward == 1.0

@pytest.mark.asyncio
async def test_verify_without_tool_call(server):
    """Reward 0.0 when the model answered without using the tool."""
    request = make_verify_request([
        {"role": "assistant", "id": "",
         "content": [{"type": "output_text", "annotations": [], "text": "It's cold."}]},
    ])
    response = await server.verify(request)
    assert response.reward == 0.0

Run the tests:

$ ng_test +entrypoint=resources_servers/my_weather_tool

For detailed test output:

$ cd resources_servers/my_weather_tool
$ source .venv/bin/activate
$ pytest -v

6. Run & Validate

Run the Servers

Start the servers:

$ config_paths="responses_api_models/openai_model/configs/openai_model.yaml,\
> resources_servers/my_weather_tool/configs/my_weather_tool.yaml"
$
$ ng_run "+config_paths=[$config_paths]"

ng_run reads the config files and starts all three components from the architecture diagram:

  1. Agent Server (my_weather_tool_simple_agent) — the simple_agent that orchestrates the seed → model → tool → verify loop
  2. Model Server (openai_model) — proxies LLM inference requests to the OpenAI API
  3. Resources Server (my_weather_tool_resources_server) — serves your get_weather tool endpoint and verify() logic

Configure API Keys

Configure your OpenAI API key in env.yaml (located in the repository root). The env.yaml is never committed to Git and is designed to hold secrets like API keys:

openai_api_key: ???
policy_api_key: ${openai_api_key}
policy_base_url: https://api.openai.com/v1
policy_model_name: gpt-4o-mini

Set your API key as an environment variable before running the next command:

$ export OPENAI_API_KEY="sk-your-key-here"

Never commit API keys directly in YAML files.

If you don’t want to use the OpenAI API, you can use a local vLLM server instead (requires GPU access). See the vLLM model server guide (model-server-vllm).

Test with Client (Optional)

You can do a quick spot-check by pointing the built-in client at your agent. Inside responses_api_agents/simple_agent/client.py, change the server name to my_weather_tool_simple_agent, then run:

$ python responses_api_agents/simple_agent/client.py

This client calls /v1/responses, which tests tool-calling but does not exercise the full episode lifecycle (seed_session → responses → verify). End-to-end validation happens during rollout collection below.

Collect Rollouts

Before training, you collect rollouts to validate that your environment works end-to-end and to establish a baseline. Each rollout runs a task through the full agent loop (prompt → model → tool calls → verification) and records the complete interaction along with the reward. This serves two purposes:

  1. Validation — confirm your tools, verification logic, and data produce sensible rewards. If a strong model scores near zero, something is likely wrong with your environment.
  2. Baselining — measure pass rates across models to understand task difficulty before training begins.

With your servers still running, collect rollouts against your example inputs:

$ ng_collect_rollouts +agent_name=my_weather_tool_simple_agent \
>     +input_jsonl_fpath=resources_servers/my_weather_tool/data/example.jsonl \
>     +output_jsonl_fpath=resources_servers/my_weather_tool/data/example_rollouts.jsonl \
>     +limit=null \
>     +num_repeats=null \
>     +num_samples_in_parallel=null

Ensure your servers are running before collecting rollouts. The command processes each input example, runs it through the servers, and saves the complete interaction including tool calls and verification rewards to example_rollouts.jsonl.
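A quick pass-rate summary of the collected rollouts helps with the baselining step. This is a minimal sketch that assumes each rollout line carries a top-level "reward" field; check the actual schema of your example_rollouts.jsonl and adjust the key accordingly:

```python
import json

def summarize_rewards(jsonl_lines) -> dict:
    """Compute count, mean reward, and pass rate from rollout JSONL lines."""
    rewards = []
    for line in jsonl_lines:
        line = line.strip()
        if not line:
            continue
        rollout = json.loads(line)
        # "reward" as a top-level field is an assumption; adjust to
        # match the schema of your rollout files.
        rewards.append(float(rollout["reward"]))
    n = len(rewards)
    return {
        "count": n,
        "mean_reward": sum(rewards) / n if n else 0.0,
        "pass_rate": sum(r >= 1.0 for r in rewards) / n if n else 0.0,
    }

# Usage:
# with open("resources_servers/my_weather_tool/data/example_rollouts.jsonl") as f:
#     print(summarize_rewards(f))
```

If a strong model's pass rate is near zero here, revisit your tool schema or verify() logic before moving on to training.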


7. Train with RL

Once you’ve collected rollouts and validated your environment, run training with your preferred RL framework.

8. Update Documentation

Update resources_servers/my_weather_tool/README.md with licensing and usage information:

# My Weather Tool Resource Server

A simple weather information resource server demonstrating tool calling.

## Description

This resource server provides a `get_weather` tool that returns weather information for cities.

## Data

- Example data: Five synthetic weather queries

## Licensing Information

**Code**: Apache 2.0

**Data**: Apache 2.0 (synthetic examples)

## Dependencies

- nemo_gym: Apache 2.0

We’d love to see your contributions! Please make sure your PR includes accurate licensing information.


Summary

You’ve learned how to:

  • Initialize a resource server with ng_init_resources_server
  • Prepare task data in JSONL format
  • Implement tool endpoints and verification logic
  • Configure the required domain field and wire components together
  • Write and run tests
  • Run servers, validate with a client, and collect rollouts
  • Update documentation with licensing information


Advanced: Verification Patterns

For tasks requiring multiple tool calls, define a custom verify request model to carry ground-truth data, then parse the final output to compute accuracy:

import json

from nemo_gym.base_resources_server import BaseVerifyRequest, BaseVerifyResponse

class MultiStepVerifyRequest(BaseVerifyRequest):
    """Custom request model that carries ground-truth data for verification."""

    expected_values: list[int]

async def verify(self, body: MultiStepVerifyRequest) -> BaseVerifyResponse:
    """Extract and validate multi-step results."""
    expected = body.expected_values  # Available because we declared it above

    # Parse the final tool call output
    actual = []
    for output in reversed(body.response.output):
        if output.type == "function_call" and output.name == "submit_answer":
            actual = json.loads(output.arguments).get("values", [])
            break

    # Compute accuracy metrics
    accuracy = expected == actual
    # set_overlap gives partial credit; use it as the reward instead of
    # exact-match accuracy if your task allows partially correct answers.
    set_overlap = len(set(actual) & set(expected)) / len(expected) if expected else 0.0

    return BaseVerifyResponse(
        **body.model_dump(),
        reward=float(accuracy),
    )

See resources_servers/example_multi_step/app.py for a complete example.

The custom request model (MultiStepVerifyRequest) is required for extra fields like expected_values to survive Pydantic parsing. Using BaseVerifyRequest directly would silently drop any fields not defined on the base class.
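This drop-vs-keep behavior is standard Pydantic: by default, fields not declared on a model are ignored during validation. A standalone illustration with stand-in classes (not the actual NeMo Gym models; assumes Pydantic v2):

```python
from pydantic import BaseModel

class BaseVerifyReq(BaseModel):  # stand-in for BaseVerifyRequest
    response_id: str

class MultiStepVerifyReq(BaseVerifyReq):  # declares the extra field
    expected_values: list[int]

raw = {"response_id": "r1", "expected_values": [1, 2, 3]}

base = BaseVerifyReq.model_validate(raw)
sub = MultiStepVerifyReq.model_validate(raw)

# By default Pydantic silently ignores undeclared fields...
assert not hasattr(base, "expected_values")
# ...but keeps them when the subclass declares them.
assert sub.expected_values == [1, 2, 3]
```

This is why forgetting the custom request model fails silently rather than with an error: the ground-truth fields simply never reach your verify() method.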

For tasks with multiple valid answers, use an LLM to judge correctness.

See resources_servers/math_with_judge/app.py for implementation details.

For code generation tasks, run unit tests against model output.

See resources_servers/code_gen/app.py for implementation details.


Troubleshooting

Domain validation error

If you encounter the error "A domain is required for resource servers", ensure the domain field is set in your config YAML file.

Import errors

Ensure you are running commands from the repository root directory and have installed dependencies:

$ uv sync

Server does not start

Check that:

  • Port is not already in use
  • Configuration file syntax is valid YAML
  • All imports in app.py are correct

Tests fail

Ensure:

  • You are in the correct Python environment
  • All dependencies are installed
  • Test file imports match your actual file structure

Debugging server behavior

Check server status and logs:

$ # View running servers
$ ng_status
$
$ # For detailed logs, run the server directly:
$ cd resources_servers/my_weather_tool
$ source .venv/bin/activate
$ python app.py

Server logs appear in the terminal where ng_run was executed.