How OpenShell Works
OpenShell is built around three stable runtime components: the user-facing clients (CLI, SDK, and TUI), the gateway, and the supervisor.
The CLI, SDK, and TUI provide user-facing access. The gateway is the control plane: it owns API access, state, policy and settings delivery, provider and inference configuration, and relay coordination. The supervisor runs inside every sandbox workload and is the local security boundary. It launches the agent as a restricted child process and enforces policy where process identity, filesystem access, network egress, and runtime credentials are visible.
Infrastructure-specific work sits behind integration boundaries. Compute, credentials, control-plane identity, and sandbox identity each have a driver or adapter boundary so OpenShell can integrate with native runtimes, secret stores, identity providers, and workload identity systems without moving those concerns into the core gateway or sandbox model.
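As a minimal sketch of what such a boundary could look like, the fragment below models a compute driver as an abstract interface the gateway calls, with one stand-in local implementation. All names here (`ComputeDriver`, `SandboxSpec`, `LocalContainerDriver`) are illustrative assumptions, not OpenShell's actual API.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class SandboxSpec:
    """Desired sandbox shape the gateway hands to a driver (illustrative)."""
    name: str
    image: str
    cpu: float
    memory_mb: int


class ComputeDriver(ABC):
    """Integration boundary: the gateway speaks this interface and never a
    platform API directly, so infrastructure concerns stay out of the core."""

    @abstractmethod
    def create_sandbox(self, spec: SandboxSpec) -> str:
        """Provision a sandbox and return a platform-native handle."""

    @abstractmethod
    def destroy_sandbox(self, handle: str) -> None:
        """Tear the sandbox down by its handle."""


class LocalContainerDriver(ComputeDriver):
    """Stand-in for a host container-runtime integration."""

    def __init__(self):
        self._running: dict[str, SandboxSpec] = {}

    def create_sandbox(self, spec: SandboxSpec) -> str:
        handle = f"local/{spec.name}"
        self._running[handle] = spec
        return handle

    def destroy_sandbox(self, handle: str) -> None:
        self._running.pop(handle, None)
```

Swapping in a Kubernetes or VM driver means implementing the same two methods; the gateway's provisioning logic does not change.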
Deployment Models
OpenShell can run on a single local machine or in a remote Kubernetes cluster. The CLI workflow stays the same: users point the CLI, SDK, or TUI at a gateway, and the gateway provisions sandboxes through its configured compute driver.
This deployment split keeps the runtime model consistent. Local deployments use the host’s container or VM runtime as the integrated infrastructure. Kubernetes deployments use the cluster scheduler, networking, secrets, identity, and GPU device plugins without changing the gateway and sandbox contract.
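The split can be pictured as two configurations that differ only in which compute driver the gateway loads; the keys and values below are hypothetical, chosen just to show that the provisioning path is shared.

```python
# Hypothetical gateway configs: only the configured compute driver and its
# endpoint differ between local and Kubernetes deployments.
LOCAL = {"compute_driver": "docker", "endpoint": "unix:///var/run/docker.sock"}
REMOTE = {"compute_driver": "kubernetes", "endpoint": "https://k8s.internal:6443"}


def provision(config: dict, sandbox_name: str) -> str:
    """Same gateway entry point for both deployments; only the driver
    resolved from config differs. Handle format is illustrative."""
    return f"{config['compute_driver']}://{sandbox_name}"
```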
Core Components
Gateways and Sandboxes
The gateway and sandbox split control-plane authority from runtime enforcement. The gateway owns durable platform state: sandboxes, policy revisions, runtime settings, provider records, inference configuration, session records, and authorization decisions. A sandbox owns the local execution boundary: process identity, filesystem access, network egress, credential injection, local logs, and the agent child process.
The relationship is supervisor-initiated. Each sandbox supervisor connects outbound to a known gateway endpoint, authenticates as a sandbox workload, and keeps a live session open for control traffic and relays. This avoids requiring every compute driver to solve gateway-to-sandbox reachability through pod IPs, bridge networks, port mappings, NAT traversal, or custom tunnels.
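The outbound-only pattern can be sketched as a dial loop with backoff; `SupervisorSession` and the injected `dial` callable are assumptions made for illustration, not the real transport.

```python
import time


class SupervisorSession:
    """Illustrative outbound session: the supervisor dials the gateway, so no
    compute driver has to make sandboxes reachable from the outside."""

    def __init__(self, gateway_url, sandbox_token, dial):
        self.gateway_url = gateway_url
        self.sandbox_token = sandbox_token  # authenticates as a sandbox workload
        self._dial = dial                   # injected connect function

    def connect(self, max_attempts=5, base_delay=0.1):
        # Retry with exponential backoff until the gateway accepts the session.
        for attempt in range(max_attempts):
            try:
                return self._dial(self.gateway_url, self.sandbox_token)
            except ConnectionError:
                time.sleep(base_delay * (2 ** attempt))
        raise ConnectionError(f"gateway unreachable: {self.gateway_url}")
```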
The gateway delivers desired state. The supervisor applies it locally, keeps last-known-good config when refresh fails, and leaves static isolation controls in place until the sandbox is recreated. Live operations such as config refresh, policy updates, credential delivery, log push, connect, exec, file sync, and relay setup use the same authenticated gateway-supervisor relationship.
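The last-known-good behavior amounts to one rule: a failed refresh never clobbers the config currently being enforced. A minimal sketch, with `ConfigCache` and its `fetch` callable as assumed names:

```python
class ConfigCache:
    """Sketch of last-known-good config handling in the supervisor."""

    def __init__(self, fetch):
        self._fetch = fetch   # callable returning fresh desired state
        self._active = None   # config currently enforced

    def refresh(self):
        try:
            self._active = self._fetch()
        except Exception:
            pass  # refresh failed: keep enforcing the last-known-good config
        return self._active
```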
Supervisor Protection Layers
The supervisor is the sandbox-local enforcement component. It starts before the agent process, prepares the sandbox runtime, fetches gateway configuration, and then launches the agent under the active policy.
Static controls such as filesystem and process isolation are established at sandbox start and require sandbox recreation to change. Dynamic controls such as network policy, credential delivery, and inference routing can refresh over the live gateway-supervisor session.
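The ordering described above can be condensed into one sequence: static isolation first, then gateway config, then the agent, so the agent never runs outside the active policy. The function and parameter names below are illustrative placeholders.

```python
def boot_sandbox(apply_static, fetch_config, apply_dynamic, launch_agent):
    """Illustrative supervisor boot ordering."""
    apply_static()               # filesystem/process isolation; fixed for the
                                 # sandbox lifetime, change requires recreation
    config = fetch_config()      # dynamic desired state from the gateway
    apply_dynamic(config)        # network policy, credentials, inference routing
    return launch_agent(config)  # agent starts as a restricted child process
```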
Ecosystem Integration
OpenShell integrates with infrastructure ecosystems instead of replacing them. Runtimes, schedulers, secret stores, identity providers, workload identity systems, image pipelines, storage, and GPU or device exposure remain owned by the platforms that provide them.
The gateway owns OpenShell control-plane semantics: sandbox state, lifecycle ordering, policy and settings resolution, credential mapping, authorization, inference configuration, and relay coordination. Drivers translate those semantics into platform-native operations.
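As a toy illustration of that translation, the fragment below renders one abstract control-plane operation into platform-native steps per driver. The operation shape and the native command strings are invented for the example and do not reflect OpenShell's real driver protocol.

```python
def to_native(op: dict, platform: str) -> list[str]:
    """Hypothetical driver translation: one gateway operation, rendered in
    platform-native terms per integration."""
    if platform == "kubernetes":
        return [
            f"apply pod/{op['sandbox']}",
            f"label pod/{op['sandbox']} policy={op['policy_rev']}",
        ]
    if platform == "docker":
        return [f"run --name {op['sandbox']} --label policy={op['policy_rev']}"]
    raise ValueError(f"no driver for platform: {platform}")
```

The gateway's semantics (here, "create this sandbox at this policy revision") stay fixed; only the driver's rendering changes per platform.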
The supervisor owns OpenShell sandbox semantics. Filesystem policy, process privilege reduction, network proxying, inference interception, credential injection, security logging, and gateway relay behavior stay consistent across Docker, Podman, Kubernetes, VM-backed sandboxes, and future integrations.