Providers
Create and manage providers that supply credentials to sandboxes.
AI agents typically need credentials to access external services: an API key for the AI model provider, a token for GitHub or GitLab, and so on. OpenShell manages these credentials as first-class entities called providers.
Create a Provider
Providers can be created from local environment variables or with explicit credential values.
From Local Credentials
The fastest way to create a provider is to let the CLI discover credentials from your shell environment:
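For example (the `provider create` subcommand and the provider name are assumptions used for illustration, not confirmed syntax):

```shell
# Hypothetical syntax: create a provider named "anthropic" from local env vars
openshell provider create anthropic
```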
This reads ANTHROPIC_API_KEY or CLAUDE_API_KEY from your current environment and stores the value in the provider.
With Explicit Credentials
Supply a credential value directly:
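A sketch of what this might look like (`--credential` appears elsewhere on this page; the `KEY=value` form and the rest of the command are assumptions):

```shell
# Hypothetical syntax: KEY=value sets the credential value explicitly
openshell provider create my-service --credential API_KEY=<your-key>
```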
Bare Key Form
Pass a key name without a value to read the value from the environment variable of that name:
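A sketch, with the subcommand syntax assumed:

```shell
# Bare key: no "=value", so the CLI reads $API_KEY from your shell
openshell provider create my-service --credential API_KEY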
This looks up the current value of $API_KEY in your shell and stores it.
Manage Providers
List, inspect, update, and delete providers from the active cluster.
List all providers:
Inspect a provider:
Update a provider’s credentials:
Delete a provider:
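The four operations above might be sketched as follows; the subcommand names (`list`, `get`, `update`, `delete`) and the provider name are assumptions, not confirmed CLI syntax:

```shell
openshell provider list                                   # list all providers
openshell provider get anthropic                          # inspect a provider
openshell provider update anthropic --credential API_KEY  # update credentials
openshell provider delete anthropic                       # delete a provider
```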
Attach Providers to Sandboxes
Pass one or more --provider flags when creating a sandbox:
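For example (`openshell sandbox create` and `--provider` come from this page; the provider names are illustrative):

```shell
# Hypothetical example: attach two providers at creation time
openshell sandbox create --provider anthropic --provider github
```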
Each --provider flag attaches one provider. The sandbox receives all credentials from every attached provider at runtime.
Providers cannot be added to a running sandbox. If you need to attach an additional provider, delete the sandbox and recreate it with all required providers specified.
Auto-Discovery Shortcut
When the trailing command in openshell sandbox create is a recognized tool name (claude, codex, or opencode), the CLI auto-creates the required provider from your local credentials if one does not already exist. You do not need to create the provider separately:
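A sketch, assuming the tool name is passed as the trailing command:

```shell
# "claude" is a recognized tool name, so the provider is auto-created
openshell sandbox create claude
```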
This detects claude as a known tool, finds your ANTHROPIC_API_KEY, creates a provider, attaches it to the sandbox, and launches Claude Code.
How Credential Injection Works
The agent process inside the sandbox never sees real credential values. At startup, the proxy replaces each credential with an opaque placeholder token in the agent’s environment. When the agent sends an HTTP request containing a placeholder, the proxy resolves it to the real credential before forwarding upstream.
This resolution requires the proxy to see plaintext HTTP. Endpoints must use protocol: rest in the policy (which auto-terminates TLS) or explicit tls: terminate. Endpoints without TLS termination pass traffic through as an opaque stream, and credential placeholders are forwarded unresolved.
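For illustration, a policy endpoint entry might look like the following sketch. Only `protocol: rest` and `tls: terminate` come from this page; the surrounding field names are assumptions, so refer to the Policy Schema Reference for the real shape:

```yaml
endpoints:
  - host: api.anthropic.com
    protocol: rest      # REST endpoint; TLS is terminated automatically
  - host: api.telegram.org
    tls: terminate      # explicit TLS termination
```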
Supported injection locations
The proxy resolves credential placeholders in the following parts of an HTTP request:
- Header values (for example, an Authorization or x-api-key header)
- The URL path
- Query parameter values
The proxy does not modify request bodies, cookies, or response content.
Fail-closed behavior
If the proxy detects a credential placeholder in a request but cannot resolve it, it rejects the request with HTTP 500 instead of forwarding the raw placeholder to the upstream server. This prevents accidental credential leakage in server logs or error responses.
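The resolution and fail-closed behavior can be sketched in Python. This is an illustrative model, not OpenShell's actual proxy code, and the placeholder format (`osh-ph-<id>`) is invented for the example:

```python
import re
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Illustrative placeholder format and mapping; both are assumptions.
PLACEHOLDER_RE = re.compile(r"osh-ph-[0-9a-f]+")
SECRETS = {"osh-ph-1a2b": "123456:ABC-DEF"}  # placeholder -> real credential

class UnresolvedPlaceholder(Exception):
    """Raised so the proxy can answer HTTP 500 instead of leaking the token."""

def _substitute(text: str) -> str:
    def repl(match: re.Match) -> str:
        try:
            return SECRETS[match.group(0)]
        except KeyError:
            # Fail closed: a placeholder we cannot resolve aborts the request.
            raise UnresolvedPlaceholder(match.group(0))
    return PLACEHOLDER_RE.sub(repl, text)

def resolve_url(url: str) -> str:
    parts = urlsplit(url)
    path = _substitute(parts.path)
    # Rebuild the query string so substituted values are percent-encoded.
    pairs = [(k, _substitute(v))
             for k, v in parse_qsl(parts.query, keep_blank_values=True)]
    query = urlencode(pairs)
    return urlunsplit((parts.scheme, parts.netloc, path, query, parts.fragment))

print(resolve_url("https://api.telegram.org/botosh-ph-1a2b/sendMessage"))
# → https://api.telegram.org/bot123456:ABC-DEF/sendMessage
```

Note that the query value is re-encoded by `urlencode`, so a credential containing `:` becomes `%3A` in a query parameter, while the same character passes through unchanged in the path.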
Example: Telegram Bot API (path-based credential)
Create a provider with the Telegram bot token:
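A sketch using the bare key form described above (the subcommand syntax and provider name are assumptions; TELEGRAM_BOT_TOKEN is the variable this example uses):

```shell
# Bare key form: reads $TELEGRAM_BOT_TOKEN from your shell
openshell provider create telegram --credential TELEGRAM_BOT_TOKEN
```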
The agent reads TELEGRAM_BOT_TOKEN from its environment and builds a request like POST /bot<placeholder>/sendMessage. The proxy resolves the placeholder in the URL path and forwards POST /bot123456:ABC-DEF/sendMessage to the upstream.
Example: Google API (query parameter credential)
The agent sends GET /youtube/v3/search?part=snippet&key=<placeholder>. The proxy resolves the placeholder in the query parameter value and percent-encodes the result before forwarding.
Supported Provider Types
The following provider types are supported.
Use the generic type for any service not listed above. You define the environment variable names and values yourself with --credential.
Supported Inference Providers
The following providers have been tested with inference.local. Any provider that exposes an OpenAI-compatible API works with the openai type. Set --config OPENAI_BASE_URL to the provider’s base URL and --credential OPENAI_API_KEY to your API key.
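A sketch of such a setup; `--config OPENAI_BASE_URL` and `--credential OPENAI_API_KEY` come from this page, while the `--type` flag, provider name, and base URL are assumptions for illustration:

```shell
# Hypothetical: an OpenAI-compatible provider pointed at a custom base URL
openshell provider create my-inference --type openai \
  --config OPENAI_BASE_URL=https://api.example.com/v1 \
  --credential OPENAI_API_KEY
```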
Refer to your provider’s documentation for the correct base URL, available models, and API key setup. To configure inference routing, refer to Configure.
Next Steps
Explore related topics:
- To control what the agent can access, refer to Policies.
- To use a pre-built environment, refer to the Community Sandboxes catalog.
- To view the complete field reference for the policy YAML, refer to the Policy Schema Reference.