Configure Inference Routing


This page covers the managed local inference endpoint (https://inference.local). External inference endpoints go through sandbox network_policies. Refer to Policies for details.

The configuration consists of three values:

  • Provider record: the credential backend OpenShell uses to authenticate with the upstream model host.
  • Model ID: the model to use for generation requests.
  • Timeout: per-request timeout in seconds for upstream inference calls. Defaults to 60 seconds.

For a list of tested providers and their base URLs, refer to Supported Inference Providers.
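Taken together, the three values behave like a small record. The sketch below is purely illustrative (the class name and fields are hypothetical, not OpenShell's actual schema); it shows the 60-second default and the rule, described later on this page, that a timeout of 0 falls back to the default:

```python
from dataclasses import dataclass

@dataclass
class InferenceConfig:
    provider: str              # provider record name, e.g. "nvidia-prod"
    model: str                 # upstream model ID
    timeout_seconds: int = 60  # per-request timeout for upstream calls

    def effective_timeout(self) -> int:
        # A timeout of 0 (or omitted) falls back to the 60-second default.
        return self.timeout_seconds if self.timeout_seconds > 0 else 60
```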

Create a Provider

Create a provider that holds the backend credentials you want OpenShell to use.

$ openshell provider create --name nvidia-prod --type nvidia --from-existing

This reads NVIDIA_API_KEY from your environment.
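Conceptually, --from-existing performs an environment lookup at create time. A minimal sketch of that lookup (the helper name and error text are hypothetical; OpenShell's internals may differ):

```python
import os

def read_provider_key(var: str = "NVIDIA_API_KEY") -> str:
    # Sketch of the env lookup behind --from-existing: fail fast
    # if the variable is missing or empty, otherwise return the key.
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; export it before running provider create")
    return key
```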

Set Inference Routing

Point inference.local at that provider and choose the model to use:

$ openshell inference set \
> --provider nvidia-prod \
> --model nvidia/nemotron-3-nano-30b-a3b

To override the default 60-second per-request timeout, add --timeout:

$ openshell inference set \
> --provider nvidia-prod \
> --model nvidia/nemotron-3-nano-30b-a3b \
> --timeout 300

The value is in seconds. When --timeout is omitted (or set to 0), the default of 60 seconds applies.

Verify the Active Config

Confirm that the provider and model are set correctly:

$ openshell inference get
Gateway inference:

  Provider: nvidia-prod
  Model: nvidia/nemotron-3-nano-30b-a3b
  Timeout: 300s
  Version: 1
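If you script against this output, the "Key: value" lines can be collected with a parse like the one below (a convenience sketch; the exact output format may change between versions):

```python
def parse_inference_get(output: str) -> dict:
    # Collect the "Key: value" lines printed by `openshell inference get`,
    # skipping the "Gateway inference:" header and blank lines.
    config = {}
    for line in output.splitlines():
        key, sep, value = line.partition(":")
        if sep and value.strip():
            config[key.strip()] = value.strip()
    return config
```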

Update Part of the Config

Use openshell inference update when you want to change only one field:

$ openshell inference update --model nvidia/nemotron-3-nano-30b-a3b

Or switch providers without repeating the current model:

$ openshell inference update --provider openai-prod

Or change only the timeout:

$ openshell inference update --timeout 120
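The update semantics amount to a field-wise merge: fields supplied on the command line replace the stored values, and everything else carries over. An illustrative sketch of that merge (not OpenShell's implementation):

```python
def apply_update(current: dict, **changes) -> dict:
    # Only fields passed on the command line change; omitted fields
    # (None here) keep their stored values. The stored config is not
    # mutated in place.
    merged = dict(current)
    merged.update({k: v for k, v in changes.items() if v is not None})
    return merged
```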

Use the Local Endpoint from a Sandbox

After inference is configured, code inside any sandbox can call https://inference.local directly:

from openai import OpenAI

client = OpenAI(base_url="https://inference.local/v1", api_key="unused")

response = client.chat.completions.create(
    model="anything",
    messages=[{"role": "user", "content": "Hello"}],
)

The client-supplied model and api_key values are not sent upstream. The privacy router injects the real credentials from the configured provider and rewrites the model before forwarding.

Some SDKs require a non-empty API key even though inference.local does not use the sandbox-provided value. In those cases, pass any placeholder such as test or unused.

Use this endpoint when inference should stay local to the host for privacy and security reasons. External providers that should be reached directly belong in network_policies instead.

When the upstream runs on the same machine as the gateway, bind it to 0.0.0.0 and point the provider at host.openshell.internal or the host’s LAN IP. 127.0.0.1 and localhost usually fail because the request originates from the gateway or sandbox runtime, not from your shell.

If the gateway runs on a remote host or behind a cloud deployment, host.openshell.internal points to that remote machine, not to your laptop. A locally running Ollama or vLLM process is not reachable from a remote gateway unless you add your own tunnel or shared network path. Ollama also supports cloud-hosted models that do not require local hardware.
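To check whether a given upstream is reachable from wherever the gateway actually runs, a plain TCP probe is enough. Run it on the gateway host, not your laptop; the helper below is a generic sketch, not an OpenShell command:

```python
import socket

def upstream_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    # True if this machine can open a TCP connection to host:port.
    # From a remote gateway, 127.0.0.1 refers to the gateway itself,
    # which is exactly why a laptop-local Ollama will not be reachable.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```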

Verify the Endpoint from a Sandbox

openshell inference set and openshell inference update verify the resolved upstream endpoint by default before saving the configuration. If the endpoint is not live yet, retry with --no-verify to persist the route without the probe.

openshell inference get confirms the current saved configuration. To confirm end-to-end connectivity from a sandbox, run:

$ curl https://inference.local/v1/responses \
> -H "Content-Type: application/json" \
> -d '{
> "instructions": "You are a helpful assistant.",
> "input": "Hello!"
> }'

A successful response confirms the privacy router can reach the configured backend and the model is serving requests.

  • Gateway-scoped: Every sandbox using the active gateway sees the same inference.local backend.
  • HTTPS only: inference.local is intercepted only for HTTPS traffic.
  • Hot reload: Provider, model, and timeout changes are picked up by running sandboxes within about 5 seconds by default. No sandbox recreation is required.

Next Steps

Explore related topics:

  • To understand the inference routing flow and supported API patterns, refer to Index.
  • To follow a complete Ollama-based local setup, refer to Inference Ollama.
  • To follow a complete LM Studio-based local setup, refer to Local Inference Lmstudio.
  • To control external endpoints, refer to Policies.
  • To manage provider records, refer to Manage Providers.