# Set Up Task-Specific Sub-Agents
OpenClaw documents the sub-agent behavior, the `sessions_spawn` tool, `agents.list` configuration, tool policy, nesting, and the auth model in Sub-Agents.
Use that page as the source of truth for how OpenClaw sub-agents work.
This NemoClaw page covers the sandbox-specific pieces: where the OpenClaw config lives, where to put per-agent credentials, which writable workspace path agents should use, and how the Omni VLM demo maps onto those paths.
## NemoClaw Sandbox Paths
NemoClaw runs OpenClaw inside an OpenShell sandbox. When adapting an OpenClaw sub-agent setup, use these paths inside the sandbox:
| Path | Purpose |
|---|---|
| `/sandbox/.openclaw/openclaw.json` | OpenClaw config, including `models.providers` and `agents.list`. |
| `/sandbox/.openclaw/.config-hash` | Hash for `openclaw.json`. |
| `/sandbox/.openclaw/agents/<agent-id>/agent/auth-profiles.json` | Per-agent provider credentials. Use this when a sub-agent calls an auxiliary provider directly. |
| `/sandbox/.openclaw/workspace/` | Writable shared workspace path for files the primary agent passes to the sub-agent. |
| `/tmp/gateway.log` | OpenClaw gateway log. Use it to confirm config reloads and diagnose sub-agent failures. |
For file-based tasks, instruct agents to use `/sandbox/.openclaw/workspace/`. Avoid relying on legacy `.openclaw-data` paths or read-only OpenClaw paths in delegation instructions.
## Omni Vision Sub-Agent Example
The vlm-demo applies the OpenClaw sub-agent pattern to a vision task. It keeps the primary `main` agent on the normal NemoClaw inference route and adds a `vision-operator` sub-agent backed by an Omni vision model.
| OpenClaw field | Omni example value |
|---|---|
| Primary agent | `main` |
| Primary model | The default NemoClaw inference model. |
| Auxiliary provider | `nvidia-omni` |
| Sub-agent | `vision-operator` |
| Sub-agent model | An Omni vision model served through the auxiliary provider. |
| Delegation tool | `sessions_spawn` |
Omni serves as the specialist model for image tasks; the primary orchestration model remains responsible for conversation, planning, and deciding when to delegate.
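As a concrete reference, the patched config adds two pieces: a `models.providers` entry for the auxiliary provider and an `agents.list` entry for the sub-agent. The fragment below is a hedged sketch of that shape — apart from `models.providers`, `agents.list`, `nvidia-omni`, and `vision-operator`, every field name, the endpoint URL, and the model ID are illustrative assumptions, not the demo's literal config:

```json
{
  "models": {
    "providers": {
      "nvidia-omni": {
        "baseUrl": "https://example.invalid/v1",
        "authProfile": "nvidia-omni"
      }
    }
  },
  "agents": {
    "list": {
      "vision-operator": {
        "provider": "nvidia-omni",
        "model": "omni-vision-model"
      }
    }
  }
}
```

Check the OpenClaw Sub-Agents page for the authoritative schema before applying a patch.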
## Update the Sandbox Config
Fetch the current OpenClaw config from the sandbox, patch it with your auxiliary provider and `agents.list` changes, then upload it back.

```shell
$ export SANDBOX=my-assistant
$ export DOCKER_CTR=openshell-cluster-nemoclaw
$ docker exec "$DOCKER_CTR" kubectl exec -n openshell "$SANDBOX" -c agent -- cat /sandbox/.openclaw/openclaw.json > /tmp/openclaw.json
```
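Before patching, it can help to confirm the fetched file parses as JSON; `python3 -m json.tool` (a stdlib tool, assumed available in your local environment) gives a quick check:

```shell
# Fail fast if the fetched config is truncated or malformed
python3 -m json.tool /tmp/openclaw.json > /dev/null && echo "openclaw.json parses cleanly"
```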
Create `/tmp/openclaw.updated.json` with the OpenClaw sub-agent config. For the Omni example, the demo provides `vlm-demo/vlm-subagent/openclaw-patch.py`.
Upload the patched config and refresh the hash. In the default mutable state, this keeps the local hash consistent but does not make the config tamper-proof; run `nemoclaw <name> shields up` afterward if the sandbox should enforce config integrity at startup.
```shell
# Make the config and its hash writable for the update
$ docker exec "$DOCKER_CTR" kubectl exec -n openshell "$SANDBOX" -c agent -- chmod 644 /sandbox/.openclaw/openclaw.json
$ docker exec "$DOCKER_CTR" kubectl exec -n openshell "$SANDBOX" -c agent -- chmod 644 /sandbox/.openclaw/.config-hash
# Upload the patched config
$ cat /tmp/openclaw.updated.json | docker exec -i "$DOCKER_CTR" kubectl exec -i -n openshell "$SANDBOX" -c agent -- sh -c 'cat > /sandbox/.openclaw/openclaw.json'
# Refresh the hash so it matches the new config
$ docker exec "$DOCKER_CTR" kubectl exec -n openshell "$SANDBOX" -c agent -- /bin/bash -c "cd /sandbox/.openclaw && sha256sum openclaw.json > .config-hash"
# Restore read-only permissions
$ docker exec "$DOCKER_CTR" kubectl exec -n openshell "$SANDBOX" -c agent -- chmod 444 /sandbox/.openclaw/openclaw.json
$ docker exec "$DOCKER_CTR" kubectl exec -n openshell "$SANDBOX" -c agent -- chmod 444 /sandbox/.openclaw/.config-hash
```
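The hash refresh follows the standard `sha256sum` pattern, and the stored hash can be re-checked with `sha256sum -c`. A self-contained local sketch of the same flow, using a temporary directory instead of the sandbox paths:

```shell
# Recreate the hash-then-verify flow locally
workdir=$(mktemp -d)
echo '{"agents": {"list": {}}}' > "$workdir/openclaw.json"
# Same command shape as the sandbox step: hash the config into .config-hash
(cd "$workdir" && sha256sum openclaw.json > .config-hash)
# -c re-reads .config-hash and verifies openclaw.json still matches
(cd "$workdir" && sha256sum -c .config-hash)   # prints "openclaw.json: OK"
```

Inside the sandbox, running `sha256sum -c .config-hash` in `/sandbox/.openclaw` confirms the refreshed hash matches the uploaded config.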
Check `/tmp/gateway.log` after upload and confirm the gateway hot-reloaded the provider or `agents.list` change.
## Add Sub-Agent Credentials
If the auxiliary model uses a provider key outside the normal NemoClaw inference route, put that key in the sub-agent auth profile. For the Omni example:

`/sandbox/.openclaw/agents/vision-operator/agent/auth-profiles.json`

Use the same provider ID that appears in `models.providers`, such as `nvidia-omni`.
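The auth profile maps the provider ID to its credential. The exact file schema is not specified on this page, so the layout below is an assumption for illustration — only the `nvidia-omni` provider ID comes from the config; see `vlm-subagent/auth-profiles.template.json` in the demo for the real shape:

```json
{
  "nvidia-omni": {
    "apiKey": "<your-provider-key>"
  }
}
```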
After uploading the auth profile, make sure the sub-agent directory is owned by the sandbox user:
```shell
$ docker exec "$DOCKER_CTR" kubectl exec -n openshell "$SANDBOX" -c agent -- chown -R sandbox:sandbox /sandbox/.openclaw/agents/vision-operator
```
## Allow Auxiliary Provider Egress
If the sub-agent calls a provider directly, update the OpenShell network policy for the binary that makes the request. In the Omni demo, the OpenClaw gateway runs as `/usr/local/bin/node`, so the NVIDIA endpoint policy must allow that binary.
Refer to Customize the Network Policy for policy update workflows.
## Add Delegation Instructions
OpenClaw handles `sessions_spawn`, but the primary agent still needs task instructions. Place those instructions in the writable workspace, for example:

`/sandbox/.openclaw/workspace/TOOLS.md`

The Omni demo includes `vlm-demo/vlm-subagent/TOOLS.md`, which tells `main` to delegate image tasks to `vision-operator` and tells the sub-agent to read the image path it receives.
Adapt that file for other task-specific models.
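Such instructions can stay short. A hedged sketch of what a `TOOLS.md` might contain — the wording below is illustrative, not the demo's actual file:

```markdown
## Delegation

- For any task that involves an image, delegate to the `vision-operator`
  sub-agent with `sessions_spawn`.
- Copy input files into `/sandbox/.openclaw/workspace/` first, and pass
  the workspace path to the sub-agent.
- Sub-agent: read the image from the path you receive and return a
  concise text answer.
```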
## Demo Assets
Use the vlm-demo repository for runnable Omni example assets:
- `vlm-subagent-guide.md` for a command-by-command walkthrough.
- `vlm-subagent/openclaw-patch.py` for patching `openclaw.json`.
- `vlm-subagent/auth-profiles.template.json` for the sub-agent auth profile.
- `vlm-subagent/TOOLS.md` for delegation instructions.
## Next Steps
Use the following resources for more information:
- Refer to OpenClaw Sub-Agents for `sessions_spawn`, `agents.list`, nesting, tool policy, and auth behavior.
- Refer to Switch Inference Providers to change the primary orchestration model instead of adding a sub-agent model.
- Refer to Workspace Files to understand per-agent workspace directories.