---

title: Quickstart
description: Install the OpenShell CLI and create your first sandboxed AI agent in two commands.
keywords: Generative AI, Cybersecurity, AI Agents, Sandboxing, Installation, Quickstart
position: 1
---

For clean Markdown of any page, append .md to the page URL. For a complete documentation index, see https://docs.nvidia.com/openshell/get-started/llms.txt. For full documentation content, see https://docs.nvidia.com/openshell/get-started/llms-full.txt.

This page gets you from zero to a running, policy-enforced sandbox in two commands.

## Prerequisites

Before you begin, make sure you have:

* Docker Desktop running on your machine.

For a complete list of requirements, refer to [Support Matrix](/reference/support-matrix).

## Install the OpenShell CLI

Run the install script:

```shell
curl -LsSf https://raw.githubusercontent.com/NVIDIA/OpenShell/main/install.sh | sh
```
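
If you'd rather inspect the installer before it runs, download it to a file first. A minimal sketch (the filename `openshell-install.sh` is just a local name, not something the installer requires):

```shell
# Download the installer instead of piping it straight to sh
curl -LsSf -o openshell-install.sh \
  https://raw.githubusercontent.com/NVIDIA/OpenShell/main/install.sh

# Review the file, then syntax-check and run it
sh -n openshell-install.sh
sh openshell-install.sh
```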

If you prefer [uv](https://docs.astral.sh/uv/):

```shell
uv tool install -U openshell
```

After installing the CLI, run `openshell --help` in your terminal to see the full CLI reference, including all commands and flags.

<Tip>
  You can also clone the [NVIDIA OpenShell GitHub repository](https://github.com/NVIDIA/OpenShell) and use the `/openshell-cli` skill to load the CLI reference into your agent.
</Tip>

## Create Your First OpenShell Sandbox

Create a sandbox and launch an agent inside it.
Choose the tab that matches your agent:

<Tabs>
  <Tab title="Claude Code">
    Run the following command to create a sandbox with Claude Code:

    ```shell
    openshell sandbox create -- claude
    ```

    The CLI prompts you to create a provider from local credentials.
    Type `yes` to continue.
    If `ANTHROPIC_API_KEY` is set in your environment, the CLI picks it up automatically.
    If not, you can configure it from inside the sandbox after it launches.
  </Tab>

  <Tab title="OpenCode">
    Run the following command to create a sandbox with OpenCode:

    ```shell
    openshell sandbox create -- opencode
    ```

    The CLI prompts you to create a provider from local credentials.
    Type `yes` to continue.
    If `OPENAI_API_KEY` or `OPENROUTER_API_KEY` is set in your environment, the CLI picks it up automatically.
    If not, you can configure it from inside the sandbox after it launches.
  </Tab>

  <Tab title="Codex">
    Run the following command to create a sandbox with Codex:

    ```shell
    openshell sandbox create -- codex
    ```

    The CLI prompts you to create a provider from local credentials.
    Type `yes` to continue.
    If `OPENAI_API_KEY` is set in your environment, the CLI picks it up automatically.
    If not, you can configure it from inside the sandbox after it launches.
  </Tab>

  <Tab title="OpenClaw">
    Run the following command to create a sandbox with OpenClaw:

    ```shell
    openshell sandbox create --from openclaw
    ```

    The `--from` flag pulls a pre-built sandbox definition from the [OpenShell Community](https://github.com/NVIDIA/OpenShell-Community) catalog.
    Each definition bundles a container image, a tailored policy, and optional skills into a single package.
  </Tab>

  <Tab title="Community Sandbox">
    Use the `--from` flag to pull other OpenShell sandbox images from the [NVIDIA Container Registry](https://registry.nvidia.com/).
    For example, to pull the `base` image, run the following command:

    ```shell
    openshell sandbox create --from base
    ```
  </Tab>
</Tabs>
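
Each tab above notes which environment variable the CLI looks for. To check up front which credentials it could pick up, a plain POSIX shell loop is enough (the variable names come from the tabs above; nothing here is OpenShell-specific):

```shell
# Report which provider API keys are currently exported in this shell
for var in ANTHROPIC_API_KEY OPENAI_API_KEY OPENROUTER_API_KEY; do
  eval "val=\${$var:-}"
  if [ -n "$val" ]; then
    echo "$var is set"
  else
    echo "$var is not set (configure it inside the sandbox after launch)"
  fi
done
```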

## Deploy a Gateway (Optional)

Running `openshell sandbox create` without a gateway auto-bootstraps a local one.
To start the gateway explicitly or deploy to a remote host, choose the tab that matches your setup.

<Tabs>
  <Tab title="Brev">
    <Note>
      Deploy an OpenShell gateway on Brev by clicking **Deploy** on the [OpenShell Launchable](https://brev.nvidia.com/launchable/deploy/now?launchableID=env-3Ap3tL55zq4a8kew1AuW0FpSLsg).
    </Note>

    After the instance starts running, find the gateway URL in the Brev console under **Using Secure Links**.
    Copy the shareable URL for **port 8080**, which is the gateway endpoint.

    ```shell
    openshell gateway add https://<your-port-8080-url>.brevlab.com
    openshell status
    ```
  </Tab>

  <Tab title="DGX Spark">
    <Note>
Set up your Spark with NVIDIA Sync first, or make sure SSH access is configured (for example, your SSH key added to the host).
    </Note>

    Deploy to a DGX Spark machine over SSH:

    ```shell
    openshell gateway start --remote <username>@<spark-ssid>.local
    openshell status
    ```

    After `openshell status` shows the gateway as healthy, all subsequent commands route through the SSH tunnel.
  </Tab>
</Tabs>
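
The DGX Spark path depends on working key-based SSH, which you can verify separately before involving the gateway. This check is plain `ssh`, not an OpenShell command, and the host below is a placeholder for your own `<username>@<spark-ssid>.local`:

```shell
# Placeholder host: substitute your own <username>@<spark-ssid>.local
SPARK_HOST="user@spark.local"

# BatchMode fails fast instead of prompting for a password,
# so a missing key shows up immediately
if ssh -o BatchMode=yes -o ConnectTimeout=5 "$SPARK_HOST" true; then
  echo "SSH OK: ready for a remote gateway on $SPARK_HOST"
else
  echo "SSH failed: add your key to the host, then retry"
fi
```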