Configure Inference Routing
This page covers the managed local inference endpoint (https://inference.local). External inference endpoints go through sandbox network_policies. Refer to Policies for details.
The configuration consists of three values: the provider to route to, the model to request, and the per-request timeout.
For a list of tested providers and their base URLs, refer to Supported Inference Providers.
Create a Provider
Create a provider that holds the backend credentials you want OpenShell to use.
- NVIDIA API Catalog
- OpenAI-compatible Provider
- Local Endpoint
- Anthropic
The NVIDIA API Catalog example reads NVIDIA_API_KEY from your environment.
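As a sketch, creating a provider for each backend might look like the following. The `openshell provider create` subcommand and its flags are assumptions, not confirmed by this page; refer to Manage Providers for the actual commands. The base URLs are the providers' documented OpenAI-compatible (or native) endpoints.

```shell
# Hypothetical syntax -- check `openshell provider --help` for the real flags.

# NVIDIA API Catalog: reads NVIDIA_API_KEY from the environment
openshell provider create nvidia \
  --base-url https://integrate.api.nvidia.com/v1 \
  --api-key-env NVIDIA_API_KEY

# OpenAI-compatible provider
openshell provider create my-openai \
  --base-url https://api.openai.com/v1 \
  --api-key-env OPENAI_API_KEY

# Local endpoint (e.g. Ollama on the host; no real key required)
openshell provider create local-ollama \
  --base-url http://host.openshell.internal:11434/v1

# Anthropic
openshell provider create anthropic \
  --base-url https://api.anthropic.com \
  --api-key-env ANTHROPIC_API_KEY
```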
Set Inference Routing
Point inference.local at that provider and choose the model to use:
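For example, using the NVIDIA provider created earlier. The `--provider` and `--model` flag names are assumed, not confirmed by this page; the model ID is one example from the NVIDIA API Catalog.

```shell
# Route inference.local to a provider and model (flag names assumed)
openshell inference set --provider nvidia --model meta/llama-3.1-8b-instruct
```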
To override the default 60-second per-request timeout, add --timeout:
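A sketch, assuming the same `--provider`/`--model` flag names (only `--timeout` is confirmed by this page):

```shell
# 120-second per-request timeout (value in seconds)
openshell inference set --provider nvidia --model meta/llama-3.1-8b-instruct --timeout 120
```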
The value is in seconds. When --timeout is omitted (or set to 0), the default of 60 seconds applies.
Verify the Active Config
Confirm that the provider and model are set correctly:
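For example:

```shell
# Print the saved provider, model, and timeout for the active gateway
openshell inference get
```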
Update Part of the Config
Use update when you want to change only one field:
Or switch providers without repeating the current model:
Or change only the timeout:
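Assuming the same flag names as `set` (only `--timeout` is confirmed by this page), the three variants might look like:

```shell
# Change only the model (flag names assumed)
openshell inference update --model meta/llama-3.1-70b-instruct

# Switch providers, keeping the current model
openshell inference update --provider local-ollama

# Change only the timeout (seconds; 0 restores the 60-second default)
openshell inference update --timeout 30
```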
Use the Local Endpoint from a Sandbox
After inference is configured, code inside any sandbox can call https://inference.local directly:
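For example, with plain curl. The `/v1/chat/completions` path assumes the configured backend speaks the OpenAI chat API; the model name and API key in the request are placeholders that the router replaces before forwarding.

```shell
# Run inside the sandbox. The model and key are placeholders: the privacy
# router substitutes the configured model and real credentials upstream.
curl https://inference.local/v1/chat/completions \
  -H "Authorization: Bearer unused" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "placeholder",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```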
The client-supplied model and api_key values are not sent upstream. The privacy router injects the real credentials from the configured provider and rewrites the model before forwarding.
Some SDKs require a non-empty API key even though inference.local does not use the sandbox-provided value. In those cases, pass any placeholder such as test or unused.
Use this endpoint when inference should stay local to the host for privacy and security reasons. External providers that should be reached directly belong in network_policies instead.
When the upstream runs on the same machine as the gateway, bind it to 0.0.0.0 and point the provider at host.openshell.internal or the host's LAN IP. 127.0.0.1 and localhost usually fail because the request originates from the gateway or sandbox runtime, not from your shell.
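For example, with Ollama on the gateway host. `OLLAMA_HOST` is Ollama's own bind-address variable; the `openshell provider create` subcommand and its flags are assumed names, not confirmed by this page.

```shell
# On the host: bind Ollama to all interfaces instead of 127.0.0.1
OLLAMA_HOST=0.0.0.0:11434 ollama serve

# Point the provider at the host alias rather than localhost
openshell provider create local-ollama \
  --base-url http://host.openshell.internal:11434/v1
```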
If the gateway runs on a remote host or behind a cloud deployment, host.openshell.internal points to that remote machine, not to your laptop. A locally running Ollama or vLLM process is not reachable from a remote gateway unless you add your own tunnel or shared network path. (Ollama's cloud-hosted models avoid this limitation, since they require no local hardware.)
Verify the Endpoint from a Sandbox
openshell inference set and openshell inference update verify the resolved upstream endpoint by default before saving the configuration. If the endpoint is not live yet, retry with --no-verify to persist the route without the probe.
openshell inference get confirms the current saved configuration. To confirm end-to-end connectivity from a sandbox, run:
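A minimal probe with curl, assuming an OpenAI-compatible backend, might look like:

```shell
# From inside a sandbox: send one short completion request and show the status
curl -si https://inference.local/v1/chat/completions \
  -H "Authorization: Bearer test" \
  -H "Content-Type: application/json" \
  -d '{"model": "placeholder", "messages": [{"role": "user", "content": "ping"}]}'
```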
A successful response confirms the privacy router can reach the configured backend and the model is serving requests.
- Gateway-scoped: Every sandbox using the active gateway sees the same inference.local backend.
- HTTPS only: inference.local is intercepted only for HTTPS traffic.
- Hot reload: Provider, model, and timeout changes are picked up by running sandboxes within about 5 seconds by default. No sandbox recreation is required.
Next Steps
Explore related topics:
- To understand the inference routing flow and supported API patterns, refer to Index.
- To follow a complete Ollama-based local setup, refer to Inference Ollama.
- To follow a complete LM Studio-based local setup, refer to Local Inference Lmstudio.
- To control external endpoints, refer to Policies.
- To manage provider records, refer to Manage Providers.