Frequently Asked Questions#

Installation#

Installing on Windows#

What Does AI Workbench Install on Windows and Where?

The Desktop App installs files in two locations on the Windows side:

  • C:\Users\<username>\AppData\Local\NVIDIA Corporation\AI Workbench

  • C:\Program Files\NVIDIA AI Workbench

The Full Local Install creates additional files:

  • The NVIDIA-Workbench WSL distribution with application binaries and other application files

  • C:\Users\<username>\AppData\Local\NVIDIA Corporation\AI Workbench Distro: Holds the virtual disk for the WSL distro

Does AI Workbench Install or Manage NVIDIA GPU Drivers on Windows?

No. You must install and manage the NVIDIA GPU drivers yourself. Use the NVIDIA App to install and manage them.

Installing on macOS#

What Does AI Workbench Install on macOS and Where?

AI Workbench installs the following components on macOS:

  • The application binaries: nvwb-cli, wb-svc, and credential-manager into $HOME/.nvwb/bin.

  • Git and Git-LFS: Installed or updated system-wide.

  • Docker Desktop or Podman: Installed or updated system-wide.

  • Homebrew: Installed or updated system-wide.

Can I Install the AI Workbench CLI Instead of the Desktop App on macOS?

Yes. On macOS you do not need to install the Desktop App. Installing with the CLI on macOS is the same as doing it on an Ubuntu remote. Follow Step Two in Remote Interactive Install.

Installing on Ubuntu Desktop#

What Does AI Workbench Install on Ubuntu Desktop and Where?

AI Workbench installs the following components on your system:

  • The application binaries: nvwb-cli, wb-svc, and credential-manager into $HOME/.nvwb/bin.

  • Git and Git-LFS: Installed or updated system-wide.

  • Docker or Podman: Installed or updated system-wide.

  • (If GPU): NVIDIA GPU Drivers, if not already installed.

  • (If GPU): NVIDIA Container Toolkit, installed or updated system-wide.

Can I Install the AI Workbench CLI Instead of the Desktop App on Ubuntu Desktop?

Yes. On Ubuntu Desktop you do not need to install the Desktop App. Installing with the CLI on Ubuntu Desktop is the same as doing it on an Ubuntu remote. Follow Step Two in Remote Interactive Install.

What if I Have the AppImage Installed but Want to Switch to the Debian Package?

Uninstall the AppImage first, then install the Debian package. Your projects and locations are preserved. See Uninstall AI Workbench for uninstall steps.

Installing on Ubuntu Remote#

What Does AI Workbench Install on an Ubuntu Remote and Where?

AI Workbench installs the following components on the remote system:

  • The application binaries: nvwb-cli, wb-svc, and credential-manager into $HOME/.nvwb/bin.

  • Git: Installed or updated system-wide.

  • Docker or Podman: Installed or updated system-wide.

  • (If GPU): NVIDIA GPU Drivers, if not already installed.

  • (If GPU): NVIDIA Container Toolkit, installed or updated system-wide.

Do I Install the Desktop App on a Remote?

No. The Desktop App is not supported on a remote system.

  • You cannot install on a remote with either the Debian package or the AppImage.

  • You must use the CLI for the remote install.

How Do I Do a Non-Interactive Install Using the CLI?

See Remote Interactive Install for non-interactive CLI installation steps.


Locations and SSH#

Remote Locations#

How Does the SSH Connection Work?

When you connect to a remote location, AI Workbench establishes two SSH tunnels that securely map ports from the remote system to your local machine.

One tunnel is dedicated to the AI Workbench service, while the other manages the reverse proxy to provide access from localhost.

By default, AI Workbench assigns ports starting at 10000 and 10001 for the proxy and service, respectively. These details are stored in the ~/.nvwb/contexts.json file. For more on how locations work, see AI Workbench Locations.
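Conceptually, each tunnel behaves like an SSH local port forward. The commands below are an illustration only — AI Workbench manages the tunnels for you, and user@remote-host and the remote ports are placeholders:

```
# Reverse-proxy tunnel: local port 10000 -> remote proxy port
ssh -L 10000:localhost:<remote-proxy-port> user@remote-host

# Service tunnel: local port 10001 -> remote service port
ssh -L 10001:localhost:<remote-service-port> user@remote-host
```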

How Do I Create a Private-Public Key Pair for Remote Locations?

You create a key pair locally with ssh-keygen, then copy the public key to the remote system with ssh-copy-id. You need password-based SSH access to the remote system to complete this step.

For detailed steps, including platform-specific instructions for password-protected keys, see Manually Add an Existing Remote Location.
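The two commands can be sketched as follows. The key filename is a placeholder, and the key is written to /tmp here only so the sketch is safe to run; in practice use ~/.ssh:

```shell
# Generate an ED25519 key pair (the filename is a placeholder).
rm -f /tmp/workbench-remote /tmp/workbench-remote.pub
ssh-keygen -t ed25519 -N "" -f /tmp/workbench-remote -q

# Inspect the public key that goes to the remote system.
cat /tmp/workbench-remote.pub

# Copy it to the remote (requires password-based SSH access;
# user@remote-host is a placeholder):
#   ssh-copy-id -i /tmp/workbench-remote.pub user@remote-host
```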

Can I Use a Password-Protected SSH Key for Remote Locations?

Yes. You can use a password-protected SSH key, but it requires using an SSH agent. See Using a Password-Protected Key for setup instructions.

Can I Use the AI Workbench CLI to Create a Remote Location?

Yes. Run nvwb create context and enter the relevant connection information.

NVIDIA Brev#

Is Brev a Cloud Provider?

No. Brev is not a cloud provider. Brev partners with major cloud providers to offer a single platform to deploy and manage GPU instances.

Is Brev Free?

No. Brev has a flexible, pay-as-you-go model where you only pay for the time you use an instance. You need a credit card to provision an instance.

Can I Configure the Brev Integration With the CLI?

Yes. You can configure the integration and create a Brev location using the CLI.

Use nvwb connect integration and select Brev from the drop-down. Then use nvwb create context --brev and select the instance name from the drop-down.

For the full setup process, see Add a Brev Instance.

Why Doesn’t My Brev Instance Show Up in AI Workbench?

There are a few potential reasons:

  • You have not configured the Brev integration with that particular account.

  • You already created a location with that instance.

  • You are looking in the wrong Brev org in the remote location modal.

I See a Brev Instance in the Remote Location Modal, but I Can’t Add It. Why?

There are a few potential reasons:

  • The instance is not running.

  • The instance is a Lambda Labs instance, and AI Workbench does not support Lambda Labs instances as remote locations.

SSH Configuration#

Where Are the SSH Config Files?

On Ubuntu and macOS, AI Workbench writes to ~/.ssh/config.

On Windows, AI Workbench writes SSH config entries to both the Windows side and the WSL side.

  • Manual locations: Windows config is at C:\Users\<username>\.ssh\config. WSL config is at ~/.ssh/config in the NVIDIA-Workbench distro.

  • NVIDIA Sync locations: Sync writes the Windows config at C:\Users\<username>\.ssh\config. AI Workbench writes the WSL config at ~/.ssh/config in the NVIDIA-Workbench distro with remapped paths.

  • Brev locations: Imported from ~/.brev/ on the Windows side. WSL config uses an Include line pointing to ~/.brev/ssh_config.

Sync-added locations depend on Sync. NVIDIA Sync generates its own SSH key (nvsync.key) for the connection. If you remove a device from Sync, it deauthorizes that key on the remote. The AI Workbench location will fail to activate even though the SSH config entries remain. If this happens, remove the location from AI Workbench and re-add it manually with your own key.

For more details on remote location connection methods, see Remote Locations.
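For reference, a manually added location typically yields a config entry like the following. The alias, address, user, and key path are placeholders, not values AI Workbench mandates:

```
Host workbench-remote
    HostName 192.0.2.10
    User ubuntu
    Port 22
    IdentityFile ~/.ssh/workbench-remote
```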

How Do SSH Keys Work on Windows?

You can select an SSH key from the Windows side or the WSL side. Windows-side keys are in C:\Users\<username>\.ssh\. WSL-side keys are in ~/.ssh/ inside the NVIDIA-Workbench distro.

If you select a Windows-side key, AI Workbench transfers it into the WSL distro so the SSH config entry on the WSL side can reference it.

Standard (non-password-protected) keys work from either side with no additional configuration.

Password-protected keys on Windows require:

  • The key to be on the Windows side

  • The Windows OpenSSH Authentication Agent service to be running

  • The key added to the agent with ssh-add

For full setup steps, see Using a Password-Protected Key.

What SSH Key Types Does AI Workbench Support?

AI Workbench supports the following SSH key types:

  • ED25519 — recommended for new keys

  • RSA — minimum 2048 bits, 4096 recommended

  • ECDSA — supported

Key permissions on Linux and macOS must be 400 (read-only) or 600 (read-write) for the owner. On Windows, the SSH client manages permissions automatically.
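On Linux, you can verify and fix key permissions like this. A throwaway key keeps the sketch self-contained, and the path is a placeholder; note that stat -c is the GNU form (on macOS, use stat -f '%Lp'):

```shell
# Generate a throwaway key so the sketch is self-contained.
rm -f /tmp/demo-key /tmp/demo-key.pub
ssh-keygen -t ed25519 -N "" -f /tmp/demo-key -q

# Restrict the private key to owner read/write (use 400 for read-only).
chmod 600 /tmp/demo-key

# Check the mode; prints: 600
stat -c '%a' /tmp/demo-key
```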


Projects#

Can I Manage Projects Using the CLI?

Yes. The CLI has feature parity with the Desktop App.

Why Would I Need a Multi-Container Environment?

Multi-container environments are useful when you have different services or applications that are difficult or impossible to run in a single container.

They are also useful to maintain isolation and test interactions between different services. Some examples include:

  • Building a web app with a database.

  • Using a full-stack development environment.

  • Testing a CI/CD pipeline locally.

For setup instructions, see Use Multi-Container Environments.

IDEs in Projects#

How Do I Get to the VS Code Settings?

Select File > Preferences > Settings to open the settings tab. You can use the search bar to find the settings you need.

Can I Use Other IDEs Besides VS Code and JupyterLab?

Yes. An AI Workbench project is a Git repository of files, so you can use any IDE.

  • For IDEs besides VS Code and JupyterLab, you must manually configure remote SSH connections and attach to the project container.

  • Direct integration support is expanding. Check the current status in the table of IDEs.

Can I Use the CLI to Add and Configure VS Code?

Yes. The CLI has the same functionality as the Desktop App. For more information, see Visual Studio Code Integration.


Environments and Containers#

Do I Need to Know How to Use Containers to Use AI Workbench?

No. You do not need any container knowledge. However, a basic understanding of containers helps clarify how AI Workbench works. See AI Workbench Project Containers for an overview.

Can I Get Root Access in a Running Project Container?

No. The assigned user is workbench, which does not have root access. Anything that requires sudo should be set up during the build through the post-build script. See Use the postBuild.bash Script.

Can I Get Root Access While Building the Container?

Yes. During the build, the user has passwordless sudo access, so you can install packages with apt and pip as the root user.

Why Does AI Workbench Use Containers Instead of Python Virtual Environments?

Virtual environments only isolate Python packages. AI development environments often require system-level dependencies, GPU drivers, CUDA toolkits, and non-Python services that virtual environments cannot manage. Containers provide full environment isolation and reproducibility.

Can I Use the CLI to Configure GPUs for the Project Container?

Yes. See Configure GPU Settings for Project Container for GPU configuration steps.

What Happens if I Request More GPUs Than Are Available?

AI Workbench will not run the container and will notify you of the discrepancy. You have the following options:

  • Reduce the number of requested GPUs.

  • Stop another project to free up GPUs.

  • Start the container with no GPUs.

When Should I Use a Host Mount?

Use a host mount when:

  • You have local files that you want to use in the container but do not want versioned. You can set the mount to be read-only to protect files from changes.

  • You are working in two locations, each of which has a common set of static files that you do not want to store on a Git remote.

  • You want to share data between different projects.

For detailed steps, see Manage Runtime Settings.

When Should I Use a Volume Mount?

Use a volume mount when:

  • You are creating data products that need to persist when the container is stopped or restarted but that you do not want versioned.

  • You want to share data between different containers in a project, such as in a multi-container project.

When Should I Use a Temp Mount?

Use a temp mount when something in your development environment creates a large number of files that you only need for a given session.

Can I Use the CLI to Create and Configure Mounts?

Yes. The CLI has the same functionality as the Desktop App for creating, configuring, and deleting mounts.

Use nvwb create mount to create a mount, nvwb configure mounts to set the source directory for host mounts, and nvwb delete mount to remove a mount.

For detailed steps, see Manage Runtime Settings.

Can I Manage a Compose-Based Application With the AI Workbench CLI?

Yes. The CLI has the same functionality as the Desktop App. See Use Multi-Container Environments for setup steps and Docker Compose Environments for background.


Version Control#

What Git Operations Does the Desktop App Support?

The Desktop App and CLI support a core set of Git operations including clone, commit, branch, merge, push, pull, fetch, and history. Operations like rebase, cherry-pick, stash, and interactive history editing are not available in the Desktop App or CLI.

AI Workbench does not block you from doing unsupported operations in the terminal or with Git clients. Changes made outside the Desktop App appear in the UI normally.

For the full list of supported operations, see How to Use Version Control. For Git configuration specifications, see Git Configuration Reference.

Can I Make Partial Commits in the Desktop App?

No. The Desktop App commits all selected changes in the working directory. There is no staging area exposed in the UI — all checked files are treated as staged.

To make partial commits, use git add and git commit in the terminal.
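For example, in a scratch repository, staging only one of two new files commits just that file; the paths and repository location here are illustrative:

```shell
# Scratch repository with two new files (names are illustrative).
rm -rf /tmp/partial-demo && mkdir /tmp/partial-demo && cd /tmp/partial-demo
git init -q
git config user.email demo@example.com
git config user.name "Demo"
echo "code"  > model.py
echo "notes" > scratch.txt

# Stage and commit only model.py; scratch.txt stays out of the commit.
git add model.py
git commit -qm "Add model.py only"
git status --short   # prints: ?? scratch.txt
```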

Does AI Workbench Auto-Generate Commit Messages?

Yes. AI Workbench generates commit messages listing changed files. The message is editable before confirming.

Example auto-generated message:

Updated files:
- src/preprocessing.py
- notebooks/analysis.ipynb
- data/dataset.csv

Edit the message to describe why changes were made, not just what changed.

Can I Use External Git Clients With AI Workbench?

Yes. AI Workbench uses standard Git under the hood with no proprietary extensions.

  • Commits made in terminal appear in AI Workbench History.

  • Branches created in terminal appear in the Branches section.

  • File changes are reflected in the Changes section.

  • Push/pull operations from terminal update AI Workbench state.

All external Git clients (VS Code, GitKraken, etc.) read and write the same .git directory.

Exception: Do NOT use git clone or git init outside AI Workbench. See below for why.

Are There Any Issues Mixing the Desktop App With Terminal Git?

Most operations work fine between AI Workbench and external clients.

Exceptions where mixing can cause issues:

  • Rebasing with conflicts: Complete the entire rebase operation in the terminal; otherwise, AI Workbench may trigger merge resolution flows that do not work correctly.

  • Interactive operations: AI Workbench cannot participate in interactive Git operations (interactive rebase, interactive add, etc.).

  • Staging then committing in the Desktop App: The Desktop App ignores your index. Committing selected changes will override whatever you have staged in the terminal.

Can I Switch Branches With Uncommitted Changes?

No. AI Workbench requires a clean working directory before switching branches. Commit or discard changes first. You can also stash changes in the terminal and then switch.
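The stash-and-switch workflow looks like the following; the repository, file, and branch names are illustrative:

```shell
# Scratch repo: one commit on main, a second branch, an uncommitted edit.
rm -rf /tmp/stash-demo && mkdir /tmp/stash-demo && cd /tmp/stash-demo
git init -q -b main
git config user.email demo@example.com
git config user.name "Demo"
echo "v1" > notes.txt
git add notes.txt && git commit -qm "initial"
git branch feature
echo "v2" > notes.txt            # uncommitted change

# Stash the change, switch away and back, then restore it.
git stash push -q -m "wip before switching"
git switch -q feature
git switch -q main
git stash pop -q
cat notes.txt                    # prints: v2
```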

What Are the Merge Conflict Resolution Options?

AI Workbench provides three resolution strategies:

  1. My Changes: Keep all changes from the current branch and discard the incoming changes. All-or-nothing per file.

  2. Their Changes: Accept all incoming changes and discard the current branch’s changes. All-or-nothing per file.

  3. Edit Manually: Open files in editor to resolve conflicts line-by-line.

For fine-grained resolution, use Edit Manually. The other two strategies apply to the entire file. For the full procedure, see Resolve Merge Conflicts.

How Do I Use Git Manually on a Remote Project?

Git is installed in the project container, and all of your Git configuration is passed there.

  1. Open JupyterLab in the project container and use Git in the terminal there.

  2. Use Git versioning in VS Code if you have added it to the project.

  3. Use the AI Workbench CLI to attach your local terminal to the remote project container. See Interactive CLI Use for details.

Why Can’t I Clone a Project Using Standard Git Clone?

AI Workbench requires creation and tracking of project-specific metadata during cloning, which standard Git clone does not capture.

For example, AI Workbench creates an entry for the project in $HOME/.nvwb/inventory.json, as well as information about the container build and other environment metadata for the project. That information goes into the folder $HOME/.nvwb/project-runtime-info/<project-name>-<hash>/.

This metadata is required for AI Workbench to properly manage the project and provide various features.

How Do I Configure My Git Author in AI Workbench?

If you installed AI Workbench onto a machine that already had Git installed and configured, the global Git author should have been inherited automatically.

If that did not happen, there are two ways to configure the Git author:

  • You can see and set the Git author in AI Workbench Settings under Git Author.

  • The Git author is configured automatically if you add your local AI Workbench to a remote Git repository like GitHub or GitLab.


Integrations and Credentials#

Does Configuring an Integration Give NVIDIA Access to My Credentials or Accounts?

No. NVIDIA does not get any access to your credentials or accounts.

Configuring an integration provides the Desktop App on your system with the permissions to authenticate to the given platform or service. The Desktop App has no telemetry, so none of your information is sent to NVIDIA.

You can revoke this access at any time by disconnecting the integration.

What Kind of Access Does AI Workbench Need to GitHub or GitLab?

AI Workbench needs full API access to manage repositories on your behalf, including creating, modifying, and pushing changes to both public and private repositories.

How Does AI Workbench Store a PAT or API Key Used for an Integration?

AI Workbench stores credentials encrypted at rest using the host system’s secret storage tied to your login.

On macOS, credentials are stored in the Keychain. On Windows, they are stored using the Windows Credential Manager. On Linux, credentials are stored using the D-Bus secret service, which is gnome-keyring by default on Ubuntu.

How Does AI Workbench Pass Credentials to a Location?

When you start a location using the Desktop App or CLI, the associated AI Workbench service is started and your credentials are pushed into memory for use by the service.

Depending on your container runtime, AI Workbench may write a temporary file or act as a credential helper to provide credentials to the runtime. If credentials are written to a file, the file is removed when the service stops.

When using Git with AI Workbench, your credentials are provided to Git through a custom credential helper that does not write credentials to the file system.
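As a toy illustration of the credential-helper mechanism — this is not AI Workbench's actual helper — a helper is just a program Git runs, which answers a "get" request on stdout instead of reading credentials from a file:

```shell
# Scratch repo with a toy credential helper (NOT AI Workbench's helper).
rm -rf /tmp/cred-demo && mkdir /tmp/cred-demo && cd /tmp/cred-demo
git init -q
cat > helper.sh <<'EOF'
#!/bin/sh
if [ "$1" = "get" ]; then
  echo "username=demo-user"
  echo "password=demo-secret"
fi
EOF
chmod +x helper.sh

# The leading "!" tells Git to run the command as-is.
git config credential.helper "!/tmp/cred-demo/helper.sh"

# Ask Git to resolve credentials for a host; the helper supplies them
# from the process, without writing anything to disk.
printf 'protocol=https\nhost=example.com\n\n' | git credential fill
```

The output includes the username and password lines the helper produced, alongside the protocol and host Git was asked about.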

Can I Use the CLI to Connect an Integration?

Yes. Run nvwb create integration to set up the integration, then nvwb connect integration <name> to authenticate. See How to Connect Integrations for detailed steps.

How Do I Disconnect an Integration?

  1. Open AI Workbench Settings.

  2. Select the Integrations page.

  3. Click Disconnect on the integration card you want to remove.

Do I Need an NGC API Key to Use AI Workbench?

No. The default containers used for project creation are public, so you do not need an API key to use them. You need an NGC API Key only if you want to pull containers or models from private registries on NGC.

See Connect the NVIDIA NGC Integration for setup instructions.

Do I Need an Endpoints API Key to Use Free Inference Endpoints?

Yes. To use NVIDIA’s free inference endpoints from build.nvidia.com in the Build Assistant, you need to connect the NVIDIA Endpoints integration with its own API key.

The Endpoints API key is separate from the NGC API key. You create it at the Build API-Key page. See Connect the NVIDIA Endpoints Integration for setup instructions.

What’s the Difference Between the NGC Integration and the Endpoints Integration?

The NGC integration and the Endpoints integration serve different purposes and use different API keys.

The NGC integration authenticates access to private container registries and model catalogs on ngc.nvidia.com. You create an NGC Personal Key with scopes for NGC Catalog and NVIDIA Private Registry.

The Endpoints integration authenticates access to free inference endpoints on build.nvidia.com. You create a separate API key at the Build console.


Applications and IDEs#

Does AI Workbench Require a GPU?

No. You can install and run AI Workbench on a CPU-only machine and get the same experience. When you need a GPU, connect your local AI Workbench to a remote system with GPUs.

What IDEs and File Editors Are Supported in AI Workbench?

An AI Workbench project is a Git repository, so you can use any IDE or file editor.

Attaching to the containerized environment requires additional configuration for most IDEs. VS Code supports this by default.

IDE and Application Support:

  • VS Code — Project file access and container access on both local and remote by default.

  • Cursor — Project file access on local only. Container access requires manual configuration. See Use Cursor with AI Workbench for setup.

  • PyCharm — Project file access on local only. Container access requires manual configuration.

  • Local file editors — Project file access on local only. No container access.

  • System applications (e.g. Photoshop) — Project file access on local only. No container access.

Custom Certificate Support#

What if I Need to Use a Certificate That Is Not in the Host Certificate Store?

Either add it to the host certificate store, or manually configure the container runtimes and the individual project containers you want to use it in.

Can I Use Certificates on a Project-by-Project Basis Instead of Host-by-Host?

Not automatically. Per-project certificate configuration requires manual setup.

How Do I Add a Certificate to the Host Certificate Store?

For help managing system certificates, contact your IT security team.


Collaboration#

How Do I Create a Deep Link Badge?

You can add a badge to your repository’s README or any Markdown-based page to let users quickly open your AI Workbench project.

Open your project in AI Workbench and go to Settings > Share Project > Badge to get the Markdown code. For detailed steps, see Use a Clone Deep Link.

What Happens if a User Doesn’t Have AI Workbench Installed?

The deep link redirects to a web page that prompts the user to install AI Workbench. After installing, the user must click the deep link again — it does not resume automatically.