Sandbox Compute Drivers
The gateway’s configured compute driver determines how OpenShell creates each sandbox. The CLI workflow stays the same across drivers: you create, connect to, inspect, and delete sandboxes through the gateway API.
Every compute driver runs the OpenShell supervisor inside the sandbox workload. The supervisor launches the agent process, applies policy, routes egress through the proxy, injects configured credentials, and maintains the gateway session.
Configure a Compute Driver
Configure the compute driver on the gateway with the --drivers flag. Current releases accept one driver per gateway.
You can also set the driver with the OPENSHELL_DRIVERS environment variable. Supported values are docker, podman, kubernetes, and vm.
When --drivers and OPENSHELL_DRIVERS are unset, the gateway auto-detects Kubernetes, then Podman, then Docker by CLI availability or a local Unix socket. The VM driver is never auto-detected; configure it explicitly with --drivers vm.
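The selection rules above can be sketched as follows. The `openshell gateway` command name is an assumption used for illustration; the --drivers flag and the OPENSHELL_DRIVERS variable are the documented configuration points.

```shell
# Hypothetical gateway invocation; your install's binary or subcommand
# name may differ.
openshell gateway --drivers docker

# Equivalent via the environment variable:
export OPENSHELL_DRIVERS=podman
openshell gateway

# With both unset, the gateway auto-detects kubernetes, then podman,
# then docker. The vm driver is never auto-detected, so request it
# explicitly:
openshell gateway --drivers vm
```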
Common gateway options include --drivers and the matching OPENSHELL_DRIVERS environment variable.
Docker Driver
Docker-backed sandboxes run as containers on the gateway host. Use Docker for local development, single-machine gateways, and hosts that already use Docker Desktop or Docker Engine.
The gateway talks to the Docker daemon to create sandbox containers. Docker is also required for local image builds from directories or Dockerfiles.
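Before starting a Docker-backed gateway, it helps to confirm the daemon is reachable, since the gateway depends on it for both sandbox containers and local image builds. The image name, build directory, and gateway command below are illustrative; `docker info` and `docker build` are standard Docker CLI commands.

```shell
# Confirm the Docker daemon is reachable.
docker info --format '{{.ServerVersion}}' || echo "Docker daemon not reachable"

# Local image build from a directory containing a Dockerfile
# (illustrative image name and path):
docker build -t my-sandbox-image ./sandbox-image

# Then start the gateway against Docker (command name is illustrative):
openshell gateway --drivers docker
```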
For maintainer-level implementation details, refer to the Docker driver README.
For GPU-backed Docker sandboxes, configure Docker CDI before starting the gateway so OpenShell can detect the daemon's GPU capability.
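As a sketch of the CDI setup, assuming NVIDIA GPUs, the NVIDIA Container Toolkit, and a Docker daemon recent enough to resolve CDI device names (Docker 25+ with CDI enabled):

```shell
# Generate a CDI spec so the Docker daemon can advertise GPU devices.
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# Verify Docker can resolve a CDI device name before starting the gateway:
docker run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi
```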
Podman Driver
Podman-backed sandboxes run as rootless containers on the gateway host. Use Podman for Linux workstation workflows that avoid a rootful Docker daemon.
The gateway talks to the Podman API socket. The Podman driver requires Podman 5.x, cgroups v2, rootless networking, and an active Podman user socket.
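The listed prerequisites can be checked with standard tooling on a systemd-based Linux host; the commands below are a verification sketch, not OpenShell-specific setup.

```shell
# Podman version: the driver requires the 5.x series.
podman --version

# cgroups v2: this should print "cgroup2fs".
stat -fc %T /sys/fs/cgroup

# Start (and persist) the rootless Podman API socket for this user.
systemctl --user enable --now podman.socket

# Show the socket path the gateway will talk to.
podman info --format '{{.Host.RemoteSocket.Path}}'
```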
For maintainer-level implementation details, refer to the Podman driver README and Podman networking notes.
MicroVM Driver
MicroVM-backed sandboxes run inside a virtual machine rather than a container. Use MicroVM when workloads need the stronger isolation of a VM boundary.
The gateway uses the VM compute driver to create VM-backed sandboxes. MicroVM requires host virtualization support: libkrun with Apple’s Hypervisor framework on macOS, KVM on Linux, and QEMU for GPU-backed sandboxes on Linux.
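Host virtualization support can be verified per platform before enabling the driver; these are standard OS and QEMU checks, not OpenShell commands.

```shell
# macOS: Hypervisor framework availability (1 means supported).
sysctl kern.hv_support

# Linux: the KVM device must exist and be accessible to your user.
ls -l /dev/kvm

# Linux GPU-backed sandboxes additionally need QEMU on the host:
qemu-system-x86_64 --version
```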
For maintainer-level implementation details, refer to the VM driver README.
The gateway starts openshell-driver-vm over a private Unix socket and passes its process ID so the driver can reject unexpected local clients. The driver’s standalone TCP listener is disabled unless --allow-unauthenticated-tcp is set for local development.
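In normal operation you never run the driver by hand; the gateway launches it over the private Unix socket. For local development against the standalone TCP listener, the invocation would look roughly like this; only the --allow-unauthenticated-tcp flag is documented above, and the rest is an assumption.

```shell
# Local development only: expose the driver's standalone TCP listener,
# which is otherwise disabled. Do not use this outside a trusted host.
openshell-driver-vm --allow-unauthenticated-tcp
```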
Kubernetes Driver
Kubernetes-backed sandboxes run as pods in the configured sandbox namespace. Use Kubernetes for shared clusters, remote compute, GPU scheduling, and operator-managed environments.
Helm deployments set Kubernetes driver values through the chart.
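As an illustrative sketch of setting driver values through the chart: the chart path, release name, and value keys below are assumptions, not documented chart values; consult your chart's values file for the real keys.

```shell
# Hypothetical Helm deployment with Kubernetes driver values set inline.
helm upgrade --install openshell-gateway ./charts/openshell-gateway \
  --set driver=kubernetes \
  --set sandboxNamespace=openshell-sandboxes
```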
For maintainer-level implementation details, refer to the Kubernetes driver README.
The Kubernetes driver creates namespaced agents.x-k8s.io/v1alpha1 Sandbox resources from the Kubernetes SIG Apps agent-sandbox project. The Agent Sandbox controller turns those resources into sandbox pods and related storage.
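The created resources can be inspected with kubectl using the resource group named above; the namespace shown is a placeholder for your configured sandbox namespace.

```shell
# List the Sandbox resources the driver has created.
kubectl get sandboxes.agents.x-k8s.io -n openshell-sandboxes

# Inspect one sandbox and the pod the Agent Sandbox controller made for it.
kubectl describe sandboxes.agents.x-k8s.io -n openshell-sandboxes my-sandbox
kubectl get pods -n openshell-sandboxes
```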