Providers

AI agents typically need credentials to access external services: an API key for the AI model provider, a token for GitHub or GitLab, and so on. OpenShell manages these credentials as first-class entities called providers.

Create and manage providers that supply credentials to sandboxes.

Create a Provider

Providers can be created from local environment variables or with explicit credential values.

From Local Credentials

The fastest way to create a provider is to let the CLI discover credentials from your shell environment:

$ openshell provider create --name my-claude --type claude --from-existing

This reads ANTHROPIC_API_KEY or CLAUDE_API_KEY from your current environment and stores them in the provider.

With Explicit Credentials

Supply a credential value directly:

$ openshell provider create --name my-api --type generic --credential API_KEY=sk-abc123

Bare Key Form

Pass a key name without a value to read the value from the environment variable of that name:

$ openshell provider create --name my-api --type generic --credential API_KEY

This looks up the current value of $API_KEY in your shell and stores it.
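The two `--credential` forms can be summarized as: `KEY=VALUE` stores the value verbatim, while a bare `KEY` reads the value from the environment variable of the same name. A minimal sketch of that interpretation (illustrative only, not the actual CLI implementation; `parse_credential` is a hypothetical helper):

```python
def parse_credential(arg: str, env: dict[str, str]) -> tuple[str, str]:
    """Interpret a --credential argument.

    KEY=VALUE stores VALUE directly; a bare KEY reads the value
    from the environment variable of the same name.
    """
    if "=" in arg:
        key, value = arg.split("=", 1)
        return key, value
    if arg not in env:
        raise KeyError(f"environment variable {arg} is not set")
    return arg, env[arg]

# Explicit form: the value travels on the command line.
print(parse_credential("API_KEY=sk-abc123", {}))
# ('API_KEY', 'sk-abc123')

# Bare form: the value comes from the shell environment.
print(parse_credential("API_KEY", {"API_KEY": "sk-from-env"}))
# ('API_KEY', 'sk-from-env')
```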

Manage Providers

List, inspect, update, and delete providers from the active cluster.

List all providers:

$ openshell provider list

Inspect a provider:

$ openshell provider get my-claude

Update a provider’s credentials:

$ openshell provider update my-claude --type claude --from-existing

Delete a provider:

$ openshell provider delete my-claude

Attach Providers to Sandboxes

Pass one or more --provider flags when creating a sandbox:

$ openshell sandbox create --provider my-claude --provider my-github -- claude

Each --provider flag attaches one provider. The sandbox receives all credentials from every attached provider at runtime.

Providers cannot be added to a running sandbox. If you need to attach an additional provider, delete the sandbox and recreate it with all required providers specified.

Auto-Discovery Shortcut

When the trailing command in openshell sandbox create is a recognized tool name (claude, codex, or opencode), the CLI auto-creates the required provider from your local credentials if one does not already exist. You do not need to create the provider separately:

$ openshell sandbox create -- claude

This detects claude as a known tool, finds your ANTHROPIC_API_KEY, creates a provider, attaches it to the sandbox, and launches Claude Code.

How Credential Injection Works

The agent process inside the sandbox never sees real credential values. At startup, the proxy replaces each credential with an opaque placeholder token in the agent’s environment. When the agent sends an HTTP request containing a placeholder, the proxy resolves it to the real credential before forwarding upstream.
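The two-step flow above (placeholder in the agent's environment, resolution in the proxy) can be sketched as follows. This is an illustrative model only, not the actual proxy implementation; the `osh-ph-` placeholder format and variable names are hypothetical:

```python
import secrets

# Real credential values, held only by the proxy.
REAL_CREDS = {"ANTHROPIC_API_KEY": "sk-real-secret"}

# 1. At sandbox startup: each real value is swapped for an opaque token.
placeholders = {k: f"osh-ph-{secrets.token_hex(8)}" for k in REAL_CREDS}
agent_env = dict(placeholders)  # this is all the agent ever sees

# 2. In the proxy: outgoing plaintext HTTP is scanned for placeholders,
#    which are resolved back to real values before forwarding upstream.
reverse = {ph: REAL_CREDS[k] for k, ph in placeholders.items()}

def resolve(header_value: str) -> str:
    for ph, real in reverse.items():
        header_value = header_value.replace(ph, real)
    return header_value

outgoing = f"Bearer {agent_env['ANTHROPIC_API_KEY']}"
assert resolve(outgoing) == "Bearer sk-real-secret"
```

The key property is that the real value appears only on the proxy side of the boundary; the agent's environment and outgoing requests contain only the opaque token.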

This resolution requires the proxy to see plaintext HTTP. Endpoints must use protocol: rest in the policy (which auto-terminates TLS) or explicit tls: terminate. Endpoints without TLS termination pass traffic through as an opaque stream, and credential placeholders are forwarded unresolved.

Supported injection locations

The proxy resolves credential placeholders in the following parts of an HTTP request:

| Location | How the agent uses it | Example |
| --- | --- | --- |
| Header value | Agent reads `$API_KEY` from env and places it in a header. | `Authorization: Bearer <placeholder>` |
| Header value (Basic auth) | Agent base64-encodes `user:<placeholder>` in an `Authorization: Basic` header. The proxy decodes, resolves, and re-encodes. | `Authorization: Basic <base64>` |
| Query parameter value | Agent places the placeholder in a URL query parameter. | `GET /api?key=<placeholder>` |
| URL path segment | Agent builds a URL with the placeholder in the path. Supports concatenated patterns. | `POST /bot<placeholder>/sendMessage` |

The proxy does not modify request bodies, cookies, or response content.
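The Basic-auth case is the most involved, since the placeholder is hidden inside a base64 encoding. A sketch of the decode-resolve-re-encode step described above (illustrative; the placeholder format and function name are hypothetical):

```python
import base64

PLACEHOLDER = "osh-ph-1234"   # what the agent holds (hypothetical format)
REAL = "sk-real-secret"       # what the proxy substitutes

def resolve_basic(header: str) -> str:
    """Decode an Authorization: Basic value, resolve the placeholder,
    and re-encode, as the proxy is described as doing."""
    scheme, encoded = header.split(" ", 1)
    assert scheme == "Basic"
    decoded = base64.b64decode(encoded).decode()
    resolved = decoded.replace(PLACEHOLDER, REAL)
    return "Basic " + base64.b64encode(resolved.encode()).decode()

# The agent base64-encodes user:<placeholder> without ever seeing the real value.
agent_header = "Basic " + base64.b64encode(f"user:{PLACEHOLDER}".encode()).decode()
upstream_header = resolve_basic(agent_header)
# upstream_header decodes to "user:sk-real-secret"
```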

Fail-closed behavior

If the proxy detects a credential placeholder in a request but cannot resolve it, it rejects the request with HTTP 500 instead of forwarding the raw placeholder to the upstream server. This prevents accidental credential leakage in server logs or error responses.
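The fail-closed rule reduces to a simple decision: resolve a known placeholder, or refuse to forward anything that still contains one. A sketch under those assumptions (hypothetical placeholder prefix and helper name, not the actual proxy code):

```python
PLACEHOLDER_PREFIX = "osh-ph-"  # hypothetical placeholder format

def resolve_or_reject(value: str, table: dict[str, str]) -> tuple[int, str]:
    """Return (status, value): resolve a known placeholder, or fail
    closed with HTTP 500 rather than forward a raw placeholder."""
    if PLACEHOLDER_PREFIX not in value:
        return 200, value                       # nothing to resolve
    for ph, real in table.items():
        if ph in value:
            return 200, value.replace(ph, real)
    return 500, ""                              # unresolved: reject the request

table = {"osh-ph-1234": "sk-real-secret"}
assert resolve_or_reject("Bearer osh-ph-1234", table) == (200, "Bearer sk-real-secret")
assert resolve_or_reject("Bearer osh-ph-9999", table)[0] == 500
```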

Example: Telegram Bot API (path-based credential)

Create a provider with the Telegram bot token:

$ openshell provider create --name telegram --type generic --credential TELEGRAM_BOT_TOKEN=123456:ABC-DEF

The agent reads TELEGRAM_BOT_TOKEN from its environment and builds a request like POST /bot<placeholder>/sendMessage. The proxy resolves the placeholder in the URL path and forwards POST /bot123456:ABC-DEF/sendMessage to the upstream.
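The path-segment case supports concatenated patterns, meaning the placeholder can sit inside a larger segment such as `bot<placeholder>`. A minimal sketch of that substitution (the placeholder string is hypothetical):

```python
PLACEHOLDER = "osh-ph-tg01"     # hypothetical placeholder the agent holds
REAL = "123456:ABC-DEF"         # real bot token held by the proxy

# Concatenated pattern: the placeholder is embedded inside a path segment.
agent_path = f"/bot{PLACEHOLDER}/sendMessage"
upstream_path = agent_path.replace(PLACEHOLDER, REAL)
assert upstream_path == "/bot123456:ABC-DEF/sendMessage"
```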

Example: Google API (query parameter credential)

$ openshell provider create --name google --type generic --credential YOUTUBE_API_KEY=AIzaSy-secret

The agent sends GET /youtube/v3/search?part=snippet&key=<placeholder>. The proxy resolves the placeholder in the query parameter value and percent-encodes the result before forwarding.
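The query-parameter case adds one wrinkle: the resolved value must be percent-encoded before the URL is forwarded. A sketch of that step using the standard library (illustrative; `resolve_query` and the placeholder string are hypothetical):

```python
from urllib.parse import parse_qsl, quote, urlencode, urlsplit, urlunsplit

PLACEHOLDER = "osh-ph-yt01"   # hypothetical placeholder the agent holds
REAL = "AIzaSy-secret"        # real API key held by the proxy

def resolve_query(url: str) -> str:
    """Resolve placeholders in query values, percent-encoding the result."""
    parts = urlsplit(url)
    pairs = [(k, v.replace(PLACEHOLDER, REAL)) for k, v in parse_qsl(parts.query)]
    return urlunsplit(parts._replace(query=urlencode(pairs, quote_via=quote)))

print(resolve_query(f"/youtube/v3/search?part=snippet&key={PLACEHOLDER}"))
# /youtube/v3/search?part=snippet&key=AIzaSy-secret
```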

Supported Provider Types

The following provider types are supported.

| Type | Environment Variables Injected | Typical Use |
| --- | --- | --- |
| claude | ANTHROPIC_API_KEY, CLAUDE_API_KEY | Claude Code, Anthropic API |
| codex | OPENAI_API_KEY | OpenAI Codex |
| generic | User-defined | Any service with custom credentials |
| github | GITHUB_TOKEN, GH_TOKEN | GitHub API, gh CLI. Refer to GitHub Sandbox. |
| gitlab | GITLAB_TOKEN, GLAB_TOKEN, CI_JOB_TOKEN | GitLab API, glab CLI |
| nvidia | NVIDIA_API_KEY | NVIDIA API Catalog |
| openai | OPENAI_API_KEY | Any OpenAI-compatible endpoint. Set --config OPENAI_BASE_URL to point to the provider. Refer to Configure. |
| opencode | OPENCODE_API_KEY, OPENROUTER_API_KEY, OPENAI_API_KEY | opencode tool |

Use the generic type for any service not listed above. You define the environment variable names and values yourself with --credential.

Supported Inference Providers

The following providers have been tested with inference.local. Any provider that exposes an OpenAI-compatible API works with the openai type. Set --config OPENAI_BASE_URL to the provider’s base URL and --credential OPENAI_API_KEY to your API key.

| Provider | Name | Type | Base URL | API Key Variable |
| --- | --- | --- | --- | --- |
| NVIDIA API Catalog | nvidia-prod | nvidia | https://integrate.api.nvidia.com/v1 | NVIDIA_API_KEY |
| Anthropic | anthropic-prod | anthropic | https://api.anthropic.com | ANTHROPIC_API_KEY |
| Baseten | baseten | openai | https://inference.baseten.co/v1 | OPENAI_API_KEY |
| Bitdeer AI | bitdeer | openai | https://api-inference.bitdeer.ai/v1 | OPENAI_API_KEY |
| Deepinfra | deepinfra | openai | https://api.deepinfra.com/v1/openai | OPENAI_API_KEY |
| Groq | groq | openai | https://api.groq.com/openai/v1 | OPENAI_API_KEY |
| Ollama (local) | ollama | openai | http://host.openshell.internal:11434/v1 | OPENAI_API_KEY |
| LM Studio (local) | lmstudio | openai | http://host.openshell.internal:1234/v1 | OPENAI_API_KEY |
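As an illustration of the OpenAI-compatible pattern, a Groq provider could be created like this, assuming `--config` accepts the same `KEY=VALUE` form as `--credential` (the exact flag syntax is not shown above, so treat this as a sketch and refer to Configure):

```shell
$ openshell provider create --name groq --type openai \
    --config OPENAI_BASE_URL=https://api.groq.com/openai/v1 \
    --credential OPENAI_API_KEY
```

The bare `OPENAI_API_KEY` form reads the key from your shell environment, as described in Bare Key Form above.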

Refer to your provider’s documentation for the correct base URL, available models, and API key setup. To configure inference routing, refer to Configure.

Next Steps

Explore related topics: