Quickstart#
Follow these steps to get started with NemoClaw and your first sandboxed OpenClaw agent.
Note
NemoClaw currently requires a fresh installation of OpenClaw.
Prerequisites#
Check the prerequisites before you start to ensure you have the necessary software and hardware to run NemoClaw.
Hardware#
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 4 vCPU | 4+ vCPU |
| RAM | 8 GB | 16 GB |
| Disk | 20 GB free | 40 GB free |
The sandbox image is approximately 2.4 GB compressed. During image push, the Docker daemon, k3s, and the OpenShell gateway run alongside the export pipeline, which buffers decompressed layers in memory. On machines with less than 8 GB of RAM, this combined usage can trigger the OOM killer. If you cannot add memory, configuring at least 8 GB of swap can work around the issue at the cost of slower performance.
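If you need the swap workaround, the following sketch reports current memory and lists the conventional commands for adding an 8 GB swap file. It assumes a standard Linux host with root access; the /swapfile path is a common convention, not something NemoClaw requires.

```shell
# Report current RAM and swap from /proc/meminfo (values are in kB).
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
echo "RAM:  $((mem_kb / 1024)) MiB"
echo "Swap: $((swap_kb / 1024)) MiB"

# If RAM is under 8 GB and swap is low, create an 8 GB swap file
# (/swapfile is a conventional path, not mandated by NemoClaw):
#   sudo fallocate -l 8G /swapfile
#   sudo chmod 600 /swapfile
#   sudo mkswap /swapfile
#   sudo swapon /swapfile
#   # To persist across reboots, add this line to /etc/fstab:
#   # /swapfile none swap sw 0 0
```

Swap is far slower than RAM, so treat this as a stopgap; adding physical memory remains the better fix.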
Software#
| Dependency | Version |
|---|---|
| Linux | Ubuntu 22.04 LTS or later |
| Node.js | 20 or later |
| npm | 10 or later |
| Container runtime | Supported runtime installed and running |
Container Runtime Support#
| Platform | Supported runtimes | Notes |
|---|---|---|
| Linux | Docker | Primary supported path today |
| macOS (Apple Silicon) | Colima, Docker Desktop | Recommended runtimes for supported macOS setups |
| macOS | Podman | Not supported yet. NemoClaw currently depends on OpenShell support for Podman on macOS. |
| Windows WSL | Docker Desktop (WSL backend) | Supported target path |
💡 Tip
For DGX Spark, follow the DGX Spark setup guide. It covers Spark-specific prerequisites, such as cgroup v2 and Docker configuration, before running the standard installer.
Install NemoClaw and Onboard OpenClaw Agent#
Download and run the installer script. The script installs Node.js if it is not already present, then runs the guided onboard wizard to create a sandbox, configure inference, and apply security policies.
$ curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash
If you use nvm or fnm to manage Node.js, the installer may not update your current shell's PATH.
If nemoclaw is not found after install, run source ~/.bashrc (or source ~/.zshrc for zsh), or open a new terminal.
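To confirm the PATH is fixed, a quick check in a POSIX shell (nothing here is NemoClaw-specific beyond the binary name):

```shell
# Succeeds only if the nemoclaw binary is resolvable from this shell.
if command -v nemoclaw >/dev/null 2>&1; then
  echo "nemoclaw found at: $(command -v nemoclaw)"
else
  echo "nemoclaw not on PATH; reload your shell profile or open a new terminal" >&2
fi
```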
When the install completes, a summary confirms the running environment:
──────────────────────────────────────────────────
Sandbox  my-assistant (Landlock + seccomp + netns)
Model    nvidia/nemotron-3-super-120b-a12b (NVIDIA Cloud API)
──────────────────────────────────────────────────
Run:     nemoclaw my-assistant connect
Status:  nemoclaw my-assistant status
Logs:    nemoclaw my-assistant logs --follow
──────────────────────────────────────────────────
[INFO] === Installation complete ===
Chat with the Agent#
Connect to the sandbox, then chat with the agent through the TUI or the CLI.
$ nemoclaw my-assistant connect
OpenClaw TUI#
The OpenClaw TUI opens an interactive chat interface. Type a message and press Enter to send it to the agent:
sandbox@my-assistant:~$ openclaw tui
Send a test message to the agent and verify you receive a response.
ℹ️ Note
The TUI is best for interactive back-and-forth. If you need the full text of a long response (for example, large code generation output), use the CLI instead:
sandbox@my-assistant:~$ openclaw agent --agent main --local -m "<prompt>" --session-id <id>
This prints the complete response directly in the terminal and avoids relying on the TUI view for long output.
OpenClaw CLI#
Use the OpenClaw CLI to send a single message and print the response:
sandbox@my-assistant:~$ openclaw agent --agent main --local -m "hello" --session-id test
Next Steps#
Switch inference providers to use a different model or endpoint.
Approve or deny network requests when the agent tries to reach external hosts.
Customize the network policy to pre-approve trusted domains.
Deploy to a remote GPU instance for always-on operation.
Monitor sandbox activity through the OpenShell TUI.
Troubleshooting#
If you run into issues during installation or onboarding, refer to the Troubleshooting guide for common error messages and resolution steps.