🔔 NVIDIA OpenShell is alpha software. APIs and behavior may change without notice. Do not use in production.

Tutorials


Hands-on walkthroughs that teach OpenShell concepts by building real configurations. Each tutorial builds on the previous one, starting with core sandbox mechanics and progressing to production workflows.

First Network Policy

Create a sandbox, observe default-deny networking, apply a read-only L7 policy, and inspect audit logs. No AI agent required.

GitHub Push Access

Launch Claude Code in a sandbox, diagnose a policy denial, and iterate on a custom GitHub policy from outside the sandbox.

Inference with Ollama

Route inference through Ollama using cloud-hosted or local models, and verify it from a sandbox.
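Verifying the route usually means calling the Ollama API from inside the sandbox. As a minimal sketch (not the tutorial's exact steps), Ollama's default local server listens on port 11434 and lists installed models at `/api/tags`:

```python
import json
import urllib.request

OLLAMA_BASE = "http://localhost:11434"  # Ollama's default listen address

def model_names(tags: dict) -> list[str]:
    """Extract model names from an /api/tags response body."""
    return [m["name"] for m in tags.get("models", [])]

def list_models(base_url: str = OLLAMA_BASE) -> list[str]:
    """Fetch the installed models from a running Ollama server."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return model_names(json.load(resp))

# With a server running, list_models() returns names such as "llama3.2:latest".
```

A successful response from inside the sandbox confirms that inference traffic is being routed rather than blocked by the default-deny network policy.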

Local Inference with LM Studio

Route inference to a local LM Studio server via its OpenAI- or Anthropic-compatible APIs.
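Both local-inference tutorials end up talking to an OpenAI-compatible endpoint. As a rough sketch of the request shape (the model name here is a placeholder, and LM Studio's default port of 1234 is assumed):

```python
import json
import urllib.request

LMSTUDIO_BASE = "http://localhost:1234/v1"  # LM Studio's default local API

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible POST /chat/completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# "my-local-model" is a placeholder for whatever model LM Studio has loaded.
req = build_chat_request(LMSTUDIO_BASE, "my-local-model", "Say hello.")
# To actually send (requires a running server):
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the wire format matches OpenAI's API, agents configured for OpenAI can be pointed at the local server by changing only the base URL.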


Copyright © 2026, NVIDIA Corporation.
