---
title: GPU Instances
description: >-
  Learn about NVIDIA Brev GPU instances - lifecycle, states, data persistence,
  and billing.
---
Brev instances are GPU-equipped virtual machines preconfigured for AI/ML development. Each instance comes with Python, CUDA, Docker, and Jupyter preinstalled.
## What is a GPU Instance?
A GPU instance is a cloud virtual machine with one or more NVIDIA GPUs attached. Brev abstracts away the complexity of cloud setup, giving you instant access to configured development environments across multiple cloud providers. Every instance includes:
* NVIDIA GPU(s) with CUDA drivers
* Python 3.10+ with pip
* Docker and Docker Compose
* JupyterLab
* SSH access
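As a quick sanity check after connecting, you can verify the preinstalled tooling from Python. This is a minimal sketch; the tool names are the standard binaries, not anything Brev-specific:

```python
import shutil

def tool_available(name: str) -> bool:
    """Return True if the named executable is on PATH."""
    return shutil.which(name) is not None

# On a Brev instance you would expect all of these to be present.
for tool in ("python3", "pip", "docker", "jupyter", "nvidia-smi"):
    print(f"{tool}: {'ok' if tool_available(tool) else 'missing'}")
```

If anything reports `missing`, check that you are connected to the instance and not your local machine.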
Brev supports a wide range of NVIDIA GPUs. Refer to [GPU Types](/reference/gpu-types) for the complete catalog with specifications and recommended use cases.
## Instance Lifecycle
Instances move through several states during their lifetime. Understanding these states helps you manage costs and data.
```
Create → Running ⇄ Stopped → Deleted
```
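The diagram above can be modeled as a small state machine. This is an illustrative sketch of the lifecycle rules, not part of any Brev API:

```python
# Valid lifecycle transitions; Running <-> Stopped is the only reversible pair.
TRANSITIONS = {
    "Created": {"Running"},
    "Running": {"Stopped", "Deleted"},
    "Stopped": {"Running", "Deleted"},
    "Deleted": set(),  # terminal: deletion cannot be undone
}

def transition(state: str, target: str) -> str:
    """Move to the target state, or raise if the lifecycle forbids it."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"cannot go from {state} to {target}")
    return target
```

Note that `Deleted` has no outgoing transitions: once an instance is deleted, neither the instance nor its data can be recovered.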
### Running
The instance is active and accessible. Brev bills you per hour for compute time. Connect with [SSH](/cli/connectivity), shell, or [VS Code](/guides/development-tools/vscode-setup).
### Stopped
When you stop an instance, Brev releases the GPU back to the cloud provider while preserving your data. You avoid compute charges, but your data remains bound to the original provider and region.
**What happens when you stop:**
* Brev releases the GPU to the provider's pool
* Your data in `/home/ubuntu/workspace` persists on the provider's storage
* No compute charges while stopped (minimal storage costs apply)
**Restarting a stopped instance:**
* Brev attempts to provision the same GPU type in the same provider and region
* If capacity is unavailable, the restart fails, and your data remains inaccessible
* You must wait for capacity or delete the instance (losing its data)
Use [`brev start`](/cli/instance-management) to resume a stopped instance.
**Capacity risk:** GPU availability varies by provider and region, and popular GPU types frequently hit capacity limits. If capacity becomes unavailable while your instance is stopped, you cannot access your data until capacity returns. Push important work to Git before stopping.
### When to Stop, When to Delete
Consider your situation before choosing:
| Scenario | Recommendation | Why |
| ----------------------------- | ----------------- | ------------------------------------------------- |
| Short break (hours) | Stop | Likely to get same capacity back. |
| Overnight or weekend | Stop with caution | Push work to Git first; capacity may change. |
| Extended break (days or more) | Delete | Avoid storage costs and capacity lock-in. |
| Switching GPU types | Delete | Stopped instances cannot change GPU type. |
| Need maximum flexibility | Delete | No provider or region constraints on next launch. |
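The table above can be condensed into a rough rule of thumb. This is a hypothetical helper; the hour thresholds are illustrative, not Brev policy:

```python
def recommend(break_hours: float, switching_gpu: bool = False) -> str:
    """Rough stop-vs-delete heuristic mirroring the decision table."""
    if switching_gpu:
        return "delete"  # stopped instances cannot change GPU type
    if break_hours <= 12:
        return "stop"    # short break: likely to get the same capacity back
    if break_hours <= 72:
        return "stop (push to Git first)"  # overnight/weekend: capacity may change
    return "delete"      # extended break: avoid storage costs and capacity lock-in
```

When in doubt, push to Git and delete: a fresh launch has no provider or region constraints.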
### Deleted
Brev permanently deletes the instance and all of its data. You cannot undo this action.
## Data Persistence
Understanding where data persists helps you avoid losing work.
| Location | Persists on Stop? | Persists on Delete? |
| ------------------------ | ----------------- | ------------------- |
| `/home/ubuntu/workspace` | Yes | No |
| `/tmp` | No | No |
| System packages | Yes | No |
| Docker images/containers | Yes | No |
**Best practice:** Store all your work in `/home/ubuntu/workspace` and push to a Git remote regularly. Use Docker volumes for persistent container data.
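The persistence rules can be captured as a simple lookup, which is handy when scripting cleanup logic. This sketch mirrors the table above; the location keys are illustrative labels:

```python
# (persists_on_stop, persists_on_delete) per location, per the table above.
PERSISTENCE = {
    "/home/ubuntu/workspace": (True, False),
    "/tmp": (False, False),
    "system packages": (True, False),
    "docker images/containers": (True, False),
}

def survives(location: str, action: str) -> bool:
    """Return True if data at the location survives a 'stop' or 'delete'."""
    on_stop, on_delete = PERSISTENCE[location]
    return on_stop if action == "stop" else on_delete
```

Notice that nothing survives a delete, which is why a Git remote is the only safe long-term home for your work.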
## Billing
* **Running:** Brev bills per hour based on GPU type
* **Stopped:** No compute charges (minimal storage costs apply)
* **Deleted:** No charges
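In other words, total cost is running hours times the hourly rate, plus a small storage rate while stopped. A minimal sketch, with hypothetical rates (check your actual GPU type's pricing):

```python
def estimate_cost(running_hours: float, hourly_rate: float,
                  stopped_hours: float = 0.0, storage_rate: float = 0.0) -> float:
    """Compute plus storage cost; stopped time incurs only the storage rate."""
    return running_hours * hourly_rate + stopped_hours * storage_rate

# e.g. 8 hours on a hypothetical $2.50/hr GPU, then stopped 16 hours overnight
print(estimate_cost(8, 2.50, stopped_hours=16, storage_rate=0.05))  # 20.8
```

Deleted instances incur no charges at all, which is why deleting beats stopping for extended breaks.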