Remote Locations#
Overview#
This reference provides technical specifications for both manual SSH-based remote locations and Brev-integrated cloud instances.
Use this reference to understand connection requirements, authentication specifications, and platform-specific configuration details for all remote location types.
For conceptual understanding of remote locations and when to use them, see AI Workbench Locations.
For step-by-step instructions to add remote locations, see Add an Existing Remote Location.
Key Concepts#
- Connection Field:
A configuration parameter required in the remote location modal to establish SSH connectivity.
- SSH Tunnel:
A port-forwarding channel that maps remote Workbench services to localhost ports.
- Port Assignment:
The automatic allocation of localhost port numbers (10000+) for remote location tunnels.
- Context:
The CLI term for a location, stored in ~/.nvwb/contexts.json.
- SSH Agent:
A background service that caches decrypted private keys for password-protected SSH keys.
- Brev Integration:
The automated connection method using Brev CLI to manage cloud GPU instances.
- Instance Lifecycle:
The automatic start/stop behavior for Brev instances when opening/closing locations.
Requirements#
Existing Remote Machine Requirements#
System Requirements:
The remote machine must meet the following system requirements (a quick verification sketch follows the list):
- Operating System: Ubuntu 22.04 LTS or Ubuntu 24.04 LTS
- AI Workbench: Must be installed on the remote system
- User Account: A non-root user with sudo privileges
- Network: SSH access from your local machine to the remote system
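One way to check these requirements is to run a few commands on the remote machine over SSH. This is a minimal sketch, assuming the default installation location; `remote-user` and `remote-host` are placeholders for your SSH username and hostname.

```bash
# Placeholders: remote-user (SSH username), remote-host (hostname or IP address).
ssh -t remote-user@remote-host '
  lsb_release -ds                              # expect Ubuntu 22.04 LTS or Ubuntu 24.04 LTS
  sudo -v && echo "sudo privileges confirmed"  # prompts for the sudo password if required
  test -d "$HOME/.nvwb" && echo "AI Workbench directory found" || echo "AI Workbench not found"
'
```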
SSH Requirements:
The remote machine SSH requirements are as follows (an example of installing the public key follows the list):
- Private-Public Key Pair: Required; password-based SSH authentication is not supported.
- Private Key Location: Stored on your local machine, typically in ~/.ssh.
- Public Key Location: Added to ~/.ssh/authorized_keys on the remote system for the target user.
- Key Permissions: The private key must have 400 or 600 permissions.
- SSH Agent (optional): Required only if you use a password-protected SSH key.
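If your public key is not yet on the remote machine, one common way to install it is with `ssh-copy-id`. This is a sketch; the key path, username, and hostname are placeholders.

```bash
# Copy the public key into ~/.ssh/authorized_keys for the remote user.
ssh-copy-id -i ~/.ssh/id_ed25519.pub remote-user@remote-host

# Manual equivalent if ssh-copy-id is not available on your local machine.
cat ~/.ssh/id_ed25519.pub | ssh remote-user@remote-host \
  'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'
```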
Brev Integration Requirements#
For Brev integration, the requirements are:
- NVIDIA account (or an email address to create one)
- Credit card for Brev billing
- AI Workbench Desktop App installed locally
Existing Remote Machines#
This section covers manually configured remote locations using SSH.
Connection Fields Reference#
When adding a remote location, you provide the following information:
| Field | Description |
|---|---|
| Location Name | A unique name for the remote location. This name appears in the Locations Manager and helps you identify which remote system you’re working with. Must be unique across all your locations (local and remote). |
| Description | A short description of the remote location. Use this to note the purpose, specifications, or any other identifying information about the system (e.g., “AWS g4dn.xlarge with T4 GPU for training”). |
| Hostname or IP Address | The remote computer’s IP address or hostname. This can be a static IP address or a hostname that resolves from your local machine. |
| SSH Port | The SSH port on the remote system. The default is port 22. If your system administrator has configured SSH to run on a different port for security reasons, specify that port here. |
| SSH Username | The username for the non-root user with sudo privileges on the remote system. This user must exist on the remote system and must be able to execute sudo commands. AI Workbench connects as this user. |
| SSH Private Key File | The absolute path to the private key file on your local machine, typically stored in ~/.ssh. This file must have 400 or 600 permissions. The corresponding public key must be in the remote user’s ~/.ssh/authorized_keys file. |
| SSH Public Key File | (Optional) Only required when using a password-protected SSH key managed by SSH Agent. The absolute path to the public key file on your local machine, typically in ~/.ssh. When specified, AI Workbench uses the corresponding private key from SSH Agent. |
| Workbench Directory | The absolute path to the .nvwb directory on the remote system. Use the default ($HOME/.nvwb) unless you installed Workbench in a custom directory during the remote installation process. |
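Before adding the location, you can confirm that these values work by opening a plain SSH connection with them. This is a minimal sketch; the port, username, key path, and hostname below are placeholders matching the fields in the table.

```bash
# Use the same values you plan to enter in the modal: port 22, user remote-user,
# private key ~/.ssh/id_ed25519, and the remote host's address.
ssh -p 22 -i ~/.ssh/id_ed25519 remote-user@remote-host 'echo "SSH OK" && ls -d "$HOME/.nvwb"'
```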
SSH Connection Details#
SSH Tunneling Architecture#
When you open a remote location, AI Workbench establishes an SSH connection to the remote system and creates two port-forwarding tunnels:
Service Tunnel (Port 10001 by default):
- Maps the remote AI Workbench service to localhost on your local machine
- Enables the Desktop App or CLI to communicate with the remote service
- Uses the same GraphQL API as local locations
Proxy Tunnel (Port 10000 by default):
- Maps the remote reverse proxy to localhost on your local machine
- Routes web application traffic (JupyterLab, TensorBoard, etc.) to your local browser
- Ensures all project applications are accessible as if they were running locally
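Conceptually, the two tunnels behave like ordinary SSH local port forwards. The sketch below is illustrative only: AI Workbench creates and manages the real tunnels itself, and the remote-side port numbers are placeholders, not the actual ports the Workbench service and reverse proxy listen on.

```bash
# Illustrative only; AI Workbench manages the real tunnels for you.
# <remote-service-port> and <remote-proxy-port> are placeholders for the ports the
# remote Workbench service and reverse proxy listen on.
ssh -N \
  -L 10001:localhost:<remote-service-port> \
  -L 10000:localhost:<remote-proxy-port> \
  remote-user@remote-host
```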
Connection Persistence:
The SSH connection remains active as long as the remote location is open in the Desktop App or activated in the CLI. Closing all windows connected to the location or running nvwb deactivate terminates the SSH tunnels and disconnects from the remote location.
Port Assignment#
AI Workbench assigns ports dynamically when you add multiple remote locations:
- First remote location: ports 10000 and 10001
- Second remote location: ports 10002 and 10003
- Third remote location: ports 10004 and 10005
Port assignments are stored in ~/.nvwb/contexts.json and persist across sessions. If you delete and re-add a location, it may receive different port numbers.
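To see which ports have been assigned, you can inspect the contexts file and, while a location is open, check for listening tunnels. This is a sketch; the file is managed by AI Workbench and should not be edited by hand.

```bash
# Show the saved location contexts, including their assigned ports.
cat ~/.nvwb/contexts.json

# While a remote location is open, confirm its tunnels are listening locally
# (Linux; on macOS use: lsof -i :10000 -i :10001).
ss -ltn | grep -E ':(10000|10001) '
```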
SSH Keys and Authentication#
Key-Based Authentication Requirements#
AI Workbench requires SSH key-based authentication for remote locations. Password-based authentication is not supported.
Key Pair Components:
The key pair components are:
- Private Key: Kept secret on your local machine (e.g., ~/.ssh/id_rsa or ~/.ssh/id_ed25519)
- Public Key: Copied to the remote system’s ~/.ssh/authorized_keys file for the target user
Supported Key Types:
The supported key types are:
- RSA (minimum 2048 bits, 4096 recommended)
- ED25519 (recommended for modern systems)
- ECDSA (Elliptic Curve Digital Signature Algorithm)
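If you need to generate a new key pair of one of these types, a minimal sketch using ssh-keygen follows; the file names and comment are placeholders.

```bash
# ED25519 (recommended for modern systems)
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -C "workbench-remote"

# RSA with 4096 bits
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -C "workbench-remote"
```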
Key Permissions:
The private key file must have restrictive permissions:
- Linux/macOS: 400 (read-only for owner) or 600 (read-write for owner)
- Windows: Managed automatically by the Windows SSH client
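On Linux and macOS you can set and verify the permissions like this (a sketch; the key path is a placeholder):

```bash
chmod 600 ~/.ssh/id_ed25519   # or chmod 400 for read-only
ls -l ~/.ssh/id_ed25519       # should show -rw------- (600) or -r-------- (400)
```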
Password-Protected SSH Keys#
Definition:
A password-protected SSH key (also called a passphrase-protected key) is encrypted with a passphrase. Each time the key is used, you must provide the passphrase to decrypt it.
When to Use:
Password-protected keys are appropriate when:
- Company security policies require passphrase protection
- The private key is stored on a shared or less secure system
- You want defense-in-depth security for accessing production systems
- Compliance requirements mandate encrypted key storage
SSH Agent Usage:
SSH Agent is a service that stores decrypted private keys in memory, eliminating the need to enter the passphrase repeatedly. When using password-protected keys with AI Workbench:
- You must configure SSH Agent before adding the remote location
- Add your private key to SSH Agent using ssh-add
- When adding the location in AI Workbench, specify the public key file (.pub), not the private key
- AI Workbench retrieves the decrypted private key from SSH Agent as needed
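A minimal sketch of that workflow on Linux or macOS follows; the key path is a placeholder, and on Windows you first start the OpenSSH Authentication Agent service as described below.

```bash
# Start an agent for the current shell session if one is not already running.
eval "$(ssh-agent -s)"

# Add the password-protected private key; you are prompted for the passphrase once.
ssh-add ~/.ssh/id_ed25519

# Confirm the key is loaded, then point AI Workbench at ~/.ssh/id_ed25519.pub.
ssh-add -l
```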
Platform-Specific SSH Agent:
- Windows: OpenSSH Authentication Agent service (must be started manually)
- macOS: SSH Agent starts automatically when needed
- Ubuntu: ssh-agent service (may need to be started with systemctl)
Brev Integration#
This section covers Brev-integrated cloud GPU instances.
What is Brev#
NVIDIA Brev is a broker platform that helps you find and provision GPU-enabled instances from a variety of cloud providers.
For complete information about the NVIDIA Brev platform, see the Brev documentation.
Integration Overview#
The AI Workbench Brev integration automates remote location setup and activation.
AI Workbench installs and uses the Brev CLI to handle authentication to the Brev platform, manage the SSH connection, and control the on/off state of the instance.
When you add a Brev instance as a location, AI Workbench bootstraps the AI Workbench installation on the instance and fills in the SSH information for you. You do not need to manage SSH keys for the Brev integration; the Brev CLI handles them.
Workbench manages instance lifecycle automatically.
Once a Brev instance is added as a remote location, AI Workbench starts the instance when you open the location and stops it when you close the location through the Desktop App or CLI.
Lambda Labs Limitation#
Lambda Labs instances are not supported by the AI Workbench Brev integration.
Once started, Lambda Labs instances cannot be stopped. They can only be terminated.
AI Workbench does not support termination-only instances.
Impact:
This limitation has the following impact:
- You cannot add Lambda Labs instances from Brev as Workbench remote locations
- Lambda Labs instances will not appear in the remote location modal
- If you need Lambda Labs specifically, you must configure it as a standard remote location without using Brev
Billing Considerations#
Brev is a paid service.
For pricing and billing details, see the Brev documentation.
Always verify instances are stopped after use to avoid unexpected charges.
While AI Workbench automatically stops instances when you close all windows connected to a Brev location, issues with the connection or app crashes may prevent proper shutdown.
Best Practice: Manually verify instance status in the Brev console at brev.nvidia.com after closing Workbench locations.
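If you also have the Brev CLI on your PATH (AI Workbench installs it for the integration), you may be able to check and stop instances from the terminal. The commands below are an assumption about the Brev CLI interface, not a documented guarantee; confirm them with `brev --help` or the Brev documentation.

```bash
# Assumed Brev CLI commands; verify with `brev --help` before relying on them.
brev ls                # list your instances and their current status
brev stop <instance>   # stop a running instance by name
```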
NVIDIA Account Creation#
If you don’t have an NVIDIA account, the integration process will guide you through account creation:
1. You’ll receive a confirmation code via email
2. Create your account and password
3. Create your NVIDIA Cloud Account with a unique name
4. Log in to finalize the Brev integration