---

title: Tutorials
slug: tutorials
description: Step-by-step walkthroughs for OpenShell, from first sandbox to production-ready policies.
keywords: Generative AI, Cybersecurity, Tutorial, Sandbox, Policy
position: 1
---

Hands-on walkthroughs that teach OpenShell concepts by building real configurations. Each tutorial builds on the previous one, starting with core sandbox mechanics and progressing to production workflows.

<Cards>
  <Card title="First Network Policy" href="/tutorials/first-network-policy">
    Create a sandbox, observe default-deny networking, apply a read-only L7 policy, and inspect audit logs. No AI agent required.
  </Card>

  <Card title="GitHub Push Access" href="/tutorials/github-sandbox">
    Launch Claude Code in a sandbox, diagnose a policy denial, and iterate on a custom GitHub policy from outside the sandbox.
  </Card>

  <Card title="Inference with Ollama" href="/tutorials/inference-ollama">
    Route inference through Ollama using cloud-hosted or local models, and verify it from a sandbox.
  </Card>

  <Card title="Local Inference with LM Studio" href="/tutorials/local-inference-lmstudio">
    Route inference to a local LM Studio server via its OpenAI- or Anthropic-compatible APIs.
  </Card>
</Cards>