Common Issues
Quick-reference solutions for common issues. For detailed troubleshooting guides, see the Troubleshooting Overview.
CLI Issues
SSH Connection Fails
Symptom: ssh: connect to host ... port 22: Connection refused
Solutions:
- Verify the instance is running with `brev list`
- If you recently restarted the instance, run `brev refresh` to update SSH config with the new IP
- Check your network allows outbound SSH (port 22)
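The checks above can be run in sequence (the hostname in the final check is a placeholder for your instance's address):

```shell
# List instances and confirm the target shows as running
brev list

# Rebuild the local SSH config with the instance's current IP
brev refresh

# Confirm outbound SSH is allowed from your network (placeholder hostname)
nc -zv my-instance.example.com 22
```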
For persistent SSH authentication issues on macOS, see CLI SSH Authentication (macOS).
Instance not appearing in CLI
Instances created in the web console need to be synced to your local CLI.
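A refresh usually brings console-created instances into view; a minimal sketch:

```shell
# Pull the latest instance list and update the local SSH config
brev refresh

# The console-created instance should now appear
brev list
```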
Login Fails
Symptom: Browser authentication doesn’t complete
Solutions:
- Try `brev login --skip-browser` and manually open the URL
- For headless environments, use `brev login --token`
- Clear browser cookies for brev.nvidia.com
Authentication expired
If commands fail with auth errors, re-authenticate with the CLI.
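Re-authenticating is a single command:

```shell
# Start a fresh browser-based login
brev login

# Or, in a headless environment, use the token flow
brev login --token
```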
Editor Won’t Open
Symptom: brev open fails with “command not found”
Solutions:
- Verify the editor is installed and in your PATH
- For VS Code: ensure the `code` command is available (install from Command Palette: “Shell Command: Install ‘code’ command in PATH”)
- For Cursor/Windsurf: check application settings for CLI installation
For VS Code connection issues on Windows/WSL, see VS Code Connection (Windows/WSL).
Reset Brev CLI
If instances are missing or the CLI is behaving unexpectedly, force a refresh:
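```shell
brev refresh
```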
This updates the SSH config and ensures the daemon is running.
For Linux CLI installation issues, see CLI Installation (Linux).
Connectivity Issues
SSH connection refused after restart
Instance IP addresses may change after a restart. Run refresh to update your SSH config.
Port forward not working
Symptom: localhost:port returns “connection refused”
Solutions:
- Verify the service is running on the instance: `brev shell my-instance`, then check the port
- Check the port mapping is correct (local:remote)
- Ensure no local service is using the same port
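The first check can be done from inside the instance; a sketch, with port 8888 and the instance name as examples:

```shell
# Open a shell on the instance
brev shell my-instance

# Inside the instance: confirm something is listening on the expected port
ss -tlnp | grep 8888

# And that it responds locally on the instance
curl -s http://localhost:8888 >/dev/null && echo "service is up"
```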
Container and Docker Compose Issues
Debugging container build failures
If your container or Docker Compose build is failing, check the logs:
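```shell
# Logs for a single container (replace <container-name> with yours)
docker logs <container-name>

# Logs for all services in a Docker Compose project
docker compose logs

# Re-run a failing image build with full, unabridged output
docker build --progress=plain .
```

`<container-name>` is a placeholder; `docker ps -a` lists the container names on your instance.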
JupyterLab tunnel issues
If you can’t access JupyterLab and verified your container is running correctly, check the Cloudflare tunnel:
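```shell
# Assumption: the tunnel runs as a cloudflared systemd service on the host;
# adjust the unit name if your instance is configured differently
sudo systemctl status cloudflared
sudo journalctl -u cloudflared --since "10 min ago"
```

The `cloudflared` unit name is an assumption about a typical setup, not a guarantee of how the tunnel is managed on every instance.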
If your container or Docker Compose is setting up its own JupyterLab server, ensure it doesn’t conflict with the host JupyterLab tunnel.
Docker socket permission denied
If you get “permission denied while trying to connect to the Docker daemon socket”:
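```shell
# Add your user to the docker group so the socket is accessible without sudo
sudo usermod -aG docker $USER

# Apply the new group membership in the current shell
newgrp docker
```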
Then re-run your Docker command.
Viewing startup logs
If something went wrong during instance setup (e.g., empty project folder, failed git clone), check the startup logs:
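```shell
# Assumption: instance setup runs via cloud-init; the exact log location
# on Brev instances may differ
sudo cat /var/log/cloud-init-output.log
```

The path above is a common location for setup output on cloud instances and is an assumption here, not a documented Brev path.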
This shows the full initialization sequence, including SSH key setup and repository cloning.
GPU Issues
GPU not available in container
Ensure you’re using the --gpus all flag when running Docker containers.
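For example, a quick check that a container can see the GPU (the CUDA image tag is illustrative, use whichever base image you need):

```shell
docker run --rm --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi
```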
Verifying GPU setup
After launching an instance, verify the GPU is properly configured:
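```shell
# List GPUs, driver version, and the CUDA version visible to the driver
nvidia-smi

# If the CUDA toolkit is installed, check the compiler version too
nvcc --version
```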
CUDA out of memory
Your model or batch size is too large for the GPU VRAM.
- Reduce batch size
- Use gradient checkpointing
- Use mixed precision (fp16/bf16)
- Upgrade to a GPU with more VRAM
For GPU detection issues with PyTorch and Unsloth on H100 instances, see GPU Detection and PyTorch Setup.
Instance Issues
For instances stuck in “Waiting” state, see Instance Stuck in Waiting State.