Set Up Task-Specific Sub-Agents#

OpenClaw documents the sub-agent behavior, sessions_spawn tool, agents.list configuration, tool policy, nesting, and auth model in Sub-Agents. Use that page as the source of truth for how OpenClaw sub-agents work.

This NemoClaw page covers the sandbox-specific pieces: where the OpenClaw config lives, where to put per-agent credentials, which writable workspace path agents should use, and how the Omni VLM demo maps onto those paths.

NemoClaw Sandbox Paths#

NemoClaw runs OpenClaw inside an OpenShell sandbox. When adapting an OpenClaw sub-agent setup, use these paths inside the sandbox:

| Path | Purpose |
| --- | --- |
| /sandbox/.openclaw/openclaw.json | OpenClaw config, including models.providers, agents.defaults, and agents.list. |
| /sandbox/.openclaw/.config-hash | Hash for openclaw.json. Keep it in sync after manual config edits; it becomes a startup-enforced trust anchor only after shields up locks it root-owned and read-only. |
| /sandbox/.openclaw/agents/&lt;agent-id&gt;/agent/auth-profiles.json | Per-agent provider credentials. Use this when a sub-agent calls an auxiliary provider directly. |
| /sandbox/.openclaw/workspace/ | Writable shared workspace for files the primary agent passes to the sub-agent. |
| /tmp/gateway.log | OpenClaw gateway log. Use it to confirm config reloads and diagnose sub-agent failures. |

For file-based tasks, instruct agents to use /sandbox/.openclaw/workspace/. Avoid relying on legacy .openclaw-data paths or read-only OpenClaw paths in delegation instructions.
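Because .config-hash must stay in sync with openclaw.json, it is useful to be able to check the pair locally before uploading. The sketch below is not part of NemoClaw; it assumes .config-hash holds sha256sum-style output (`<hex>  openclaw.json`), which matches how the hash is regenerated later on this page.

```python
import hashlib
from pathlib import Path

def config_hash_matches(config_dir: str) -> bool:
    """Return True if .config-hash matches the current openclaw.json."""
    base = Path(config_dir)
    actual = hashlib.sha256((base / "openclaw.json").read_bytes()).hexdigest()
    # Assumed format: sha256sum output, "<hex>  openclaw.json"
    recorded = (base / ".config-hash").read_text().split()[0]
    return actual == recorded
```

Run it against a local copy of the two files pulled out of the sandbox before and after editing the config.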

Omni Vision Sub-Agent Example#

The vlm-demo applies the OpenClaw sub-agent pattern to a vision task. It keeps the primary agent, main, on the normal NemoClaw inference route and adds a vision-operator sub-agent backed by an Omni vision model.

| OpenClaw field | Omni example value |
| --- | --- |
| Primary agent | main |
| Primary model | inference/nvidia/nemotron-3-super-120b-a12b |
| Auxiliary provider | nvidia-omni |
| Sub-agent | vision-operator |
| Sub-agent model | nvidia-omni/private/nvidia/nemotron-3-nano-omni-reasoning-30b-a3b |
| Delegation tool | sessions_spawn |

Omni is used as the specialist model for image tasks. The primary orchestration model remains responsible for conversation, planning, and deciding when to delegate.
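Mapped onto openclaw.json, the table above corresponds roughly to the fragment below. The field names models.providers, agents.defaults, and agents.list come from the OpenClaw Sub-Agents docs; the nesting and the baseUrl key here are illustrative assumptions, so defer to that page and to the demo's patch script for the real schema.

```json
{
  "models": {
    "providers": {
      "nvidia-omni": { "baseUrl": "https://example.invalid/v1" }
    }
  },
  "agents": {
    "defaults": {
      "model": "inference/nvidia/nemotron-3-super-120b-a12b"
    },
    "list": {
      "vision-operator": {
        "model": "nvidia-omni/private/nvidia/nemotron-3-nano-omni-reasoning-30b-a3b"
      }
    }
  }
}
```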

Update the Sandbox Config#

Fetch the current OpenClaw config from the sandbox, patch it with your auxiliary provider and agents.list changes, then upload it back.

$ export SANDBOX=my-assistant
$ export DOCKER_CTR=openshell-cluster-nemoclaw
$ docker exec "$DOCKER_CTR" kubectl exec -n openshell "$SANDBOX" -c agent -- cat /sandbox/.openclaw/openclaw.json > /tmp/openclaw.json

Create /tmp/openclaw.updated.json with the OpenClaw sub-agent config. For the Omni example, the demo provides vlm-demo/vlm-subagent/openclaw-patch.py.
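A minimal hand-rolled equivalent of that patch step looks roughly like the following. The add_subagent helper is hypothetical, and the layout it assumes (models.providers and agents.list as dicts keyed by ID) is an assumption about the config shape, not OpenClaw's guaranteed schema.

```python
import json

def add_subagent(config_path, out_path,
                 provider_id, provider_cfg,
                 agent_id, agent_cfg):
    """Read an openclaw.json, add an auxiliary provider and a
    sub-agent entry, and write the patched copy to out_path."""
    with open(config_path) as f:
        config = json.load(f)
    # Assumed layout: dicts keyed by provider/agent ID.
    config.setdefault("models", {}).setdefault("providers", {})[provider_id] = provider_cfg
    config.setdefault("agents", {}).setdefault("list", {})[agent_id] = agent_cfg
    with open(out_path, "w") as f:
        json.dump(config, f, indent=2)
```

Point it at the fetched /tmp/openclaw.json and write /tmp/openclaw.updated.json for upload.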

Upload the patched config and refresh the hash. In the default mutable state, this keeps the local hash consistent but does not make it tamper-proof; run nemoclaw <name> shields up afterward if the sandbox should enforce config integrity at startup.

$ docker exec "$DOCKER_CTR" kubectl exec -n openshell "$SANDBOX" -c agent -- chmod 644 /sandbox/.openclaw/openclaw.json
$ docker exec "$DOCKER_CTR" kubectl exec -n openshell "$SANDBOX" -c agent -- chmod 644 /sandbox/.openclaw/.config-hash
$ cat /tmp/openclaw.updated.json | docker exec -i "$DOCKER_CTR" kubectl exec -i -n openshell "$SANDBOX" -c agent -- sh -c 'cat > /sandbox/.openclaw/openclaw.json'
$ docker exec "$DOCKER_CTR" kubectl exec -n openshell "$SANDBOX" -c agent -- /bin/bash -c "cd /sandbox/.openclaw && sha256sum openclaw.json > .config-hash"
$ docker exec "$DOCKER_CTR" kubectl exec -n openshell "$SANDBOX" -c agent -- chmod 444 /sandbox/.openclaw/openclaw.json
$ docker exec "$DOCKER_CTR" kubectl exec -n openshell "$SANDBOX" -c agent -- chmod 444 /sandbox/.openclaw/.config-hash

Check /tmp/gateway.log after upload and confirm the gateway hot-reloaded the provider or agents.list change.
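The exact reload message format in the gateway log is OpenClaw-internal and not documented here, so a simple approach is to pull the log out of the sandbox and filter for lines mentioning the IDs you just added. The helper below is an illustrative sketch, not an OpenClaw tool.

```python
def lines_mentioning(log_text: str, *ids: str) -> list:
    """Return log lines that mention any of the given provider or agent IDs."""
    return [line for line in log_text.splitlines()
            if any(i in line for i in ids)]
```

For the Omni example, fetch /tmp/gateway.log with kubectl exec and run it with "nvidia-omni" and "vision-operator"; an empty result after an upload suggests the gateway never saw the change.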

Add Sub-Agent Credentials#

If the auxiliary model uses a provider key outside the normal NemoClaw inference route, put that key in the sub-agent auth profile. For the Omni example:

/sandbox/.openclaw/agents/vision-operator/agent/auth-profiles.json
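The fragment below sketches one plausible shape for that file: a top-level object keyed by provider ID with a key field per provider. Both the keying and the apiKey field name are assumptions; the authoritative shape is the demo's vlm-subagent/auth-profiles.template.json.

```json
{
  "nvidia-omni": {
    "apiKey": "<YOUR_NVIDIA_API_KEY>"
  }
}
```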

Use the same provider ID that appears in models.providers, such as nvidia-omni. After uploading the auth profile, make sure the sub-agent directory is owned by the sandbox user:

$ docker exec "$DOCKER_CTR" kubectl exec -n openshell "$SANDBOX" -c agent -- chown -R sandbox:sandbox /sandbox/.openclaw/agents/vision-operator

Allow Auxiliary Provider Egress#

If the sub-agent calls a provider directly, update the OpenShell network policy for the binary that makes the request. In the Omni demo, the OpenClaw gateway runs as /usr/local/bin/node, so the NVIDIA endpoint policy must allow that binary.

Refer to Customize the Network Policy for policy update workflows.

Add Delegation Instructions#

OpenClaw handles sessions_spawn, but the primary agent still needs task instructions. Place those instructions in the writable workspace, for example:

/sandbox/.openclaw/workspace/TOOLS.md

The Omni demo includes vlm-demo/vlm-subagent/TOOLS.md, which tells main to delegate image tasks to vision-operator and tells the sub-agent to read the image path it receives. Adapt that file for other task-specific models.
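Adapted for another specialist model, a minimal version of such a file might read like the sketch below. The wording is illustrative only; the demo's own vlm-subagent/TOOLS.md is the reference.

```markdown
# Delegation rules

- For any task involving an image, delegate to the `vision-operator`
  sub-agent with `sessions_spawn`; do not interpret the image yourself.
- Save the image under `/sandbox/.openclaw/workspace/` and pass that
  path to the sub-agent in the task description.
- Sub-agent: read the image at the path you are given, then answer
  the delegated task.
```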

Demo Assets#

Use the vlm-demo repository for runnable Omni example assets:

  • vlm-subagent-guide.md for a command-by-command walkthrough.

  • vlm-subagent/openclaw-patch.py for patching openclaw.json.

  • vlm-subagent/auth-profiles.template.json for the sub-agent auth profile.

  • vlm-subagent/TOOLS.md for delegation instructions.

Next Steps#

Use the following resources for more information: