AI Workbench System
AI Workbench is a user-friendly tool designed to facilitate AI and data science development. It follows several key principles to help automate tedious tasks and let you get to work:
Portability and Reproducibility: Each project in AI Workbench runs in its own container, ensuring consistent results across different environments.
Automation of Common Tools: AI Workbench integrates with common tools like Git and Docker/Podman, automating workflows and streamlining development processes.
File-Centric: Most elements in AI Workbench are represented as files, so that you can manipulate, move, and export them as needed.
Flexibility: If you can’t perform a specific task directly within AI Workbench, you can often temporarily “break out” and perform the task manually using common tools or by manipulating files directly.
These design principles make AI Workbench a versatile and efficient platform for AI and data science development, one that caters to both technically savvy users and those new to programming.
Service
The AI Workbench Service, also called the server, is delivered as a single binary. This binary is cross-compiled for various operating systems and architectures. It runs on your host system as the logged-in user, except on Windows, where it runs in the WSL distro NVIDIA-Workbench.
To interact with your AI Workbench projects, the server exposes a primary GraphQL API. By default, the server listens on http://localhost:10001. When the server is installed on a remote location, you can access it from your primary system via an SSH tunnel, with the server mapped to a different port.
To explore the API, you can visit the API explorer at http://localhost:10001/v1/voyager. This user-friendly interface allows you to view available queries, mutations, and subscriptions.
For a more interactive experience, you can use the API playground, accessible at http://localhost:10001/v1/. This tool lets you test API queries and mutations directly.
To check the version of the AI Workbench server and confirm that it is running, visit http://localhost:10001/v1/version.
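For example, you can confirm the server is up from the command line (assuming a local install on the default port; the shape of the response body is not specified here):

  # Query the version endpoint of a locally running server
  curl http://localhost:10001/v1/version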
CLI
The AI Workbench command line interface (CLI) is a powerful way to interact with the AI Workbench environment. Whether you’re a seasoned developer or just starting out, the CLI is designed to be user-friendly and accessible. For more information, see AI Workbench Command Line Interface (CLI) Reference.
Interactive and Non-Interactive Modes
You can run CLI commands in interactive mode or non-interactive mode. When you run a command without specifying all the required variables, the CLI prompts you to provide the missing information, and guides you through the process interactively. However, if you specify all required variables explicitly, the CLI runs the command immediately.
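For example, using the activate command described later in this section, with a hypothetical context named my-laptop:

  # Interactive: the CLI prompts you to choose a context
  nvwb activate

  # Non-interactive: all required information is supplied, so the command runs immediately
  nvwb activate my-laptop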
Rich Text Output or JSON Formatted Output
The CLI provides multiple output formats to suit your needs. If you’re using a shell, the rich text output automatically detects and adapts to your shell’s output capabilities. If you’re scripting or interacting with the CLI programmatically, the JSON formatted output is ideal. By default, the CLI uses rich text output, but you can easily switch to a different format by including the -o flag.
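For example, a sketch of switching to JSON output for scripting; the list command shown here is illustrative, so consult the CLI reference for exact commands and fields:

  # Request machine-readable output and pretty-print it with jq
  nvwb list projects -o json | jq .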
Invoking the CLI
The CLI is invoked via a wrapper Bash function that is installed by AI Workbench. The wrapper function is the nvwb command, and it enables the CLI to modify your shell prompt; for example, to add a location or a project name. If you are not using Bash or Zsh as your default shell, add source ~/.local/share/nvwb/nvwb-wrapper.sh to the configuration script of your shell.
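For example, a minimal sketch for a POSIX-style shell, assuming ~/.profile is the configuration script your shell reads at startup:

  # Load the nvwb wrapper function when the shell starts
  echo 'source ~/.local/share/nvwb/nvwb-wrapper.sh' >> ~/.profile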
Workflow
Typically you use the CLI interactively. In the CLI, locations are called contexts. Use the activate command to connect the shell session to a context; all subsequent commands automatically refer to this context. After a context is activated, use the open command to open a project; all subsequent commands then refer to both the context and the project.
When you are done with your work, close your projects, and run deactivate with the --shutdown option to stop the AI Workbench server. If you only close your terminal without closing your projects, project containers might be left running and consuming system resources.
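A typical session might look like the following sketch; the context and project names are hypothetical, and the close command is inferred from the workflow described above:

  nvwb activate my-laptop        # connect the shell session to a context
  nvwb open my-project           # open a project in the active context
  # ... build, run, and edit the project ...
  nvwb close my-project          # close the project and stop its container
  nvwb deactivate --shutdown     # disconnect and stop the AI Workbench server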
Desktop app
The AI Workbench desktop application is a graphical user interface (GUI) that allows you to clone, create, manipulate, and run AI projects in a user-friendly way.
Integrated Installer
The AI Workbench installer ensures that your system is always configured correctly. If an update or change occurs (e.g. you uninstall your container runtime), you may see the installer open when AI Workbench starts to help you fix or update your system configuration.
Window and Location Conventions
Each AI Workbench window is connected to a single location (local or remote). You can have multiple windows open, connected to different locations or to the same one, and you can have multiple projects open in a given window. However, you cannot have the same project in the same location open in two windows.
Managing AI Workbench Project Containers
When you open a project in a window, it opens in a new tab. Before a project tab can close, the project container must be stopped; if the project is running when you close the tab, you are prompted to stop it.
When you close a window, any open projects must be stopped before the window can close. If the window is the only one connected to a location, closing it also stops the associated AI Workbench server.
Credential Manager
The Credential Manager is a small binary application that creates and stores credentials. It integrates with your system keyring on macOS, Windows, and Linux (using dbus). The Credential Manager contains the logic to perform OAuth2 login flows and automatically refreshes OAuth tokens in the background if applicable.
On Windows, this application runs on the native Windows side, while the service and CLI run in the WSL distro.
When you connect to a remote location, the CLI or desktop app automatically reads your credentials from the Credential Manager and pushes them into memory on the remote server.
The Credential Manager can also act as a Docker credential helper, seamlessly providing credentials stored in AI Workbench to Docker. This lets you manage your credentials efficiently and securely, logging in only once regardless of the location in which you are working.
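As a general illustration of how Docker credential helpers are wired up, a fragment of ~/.docker/config.json maps a registry to a helper name; the registry and helper name below are assumptions, and Docker resolves each entry to an executable named docker-credential-<name> on your PATH:

  {
    "credHelpers": {
      "nvcr.io": "nvwb"
    }
  }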
NVIDIA GPU Drivers
If you have NVIDIA GPUs, you’ll need to have NVIDIA GPU drivers installed to take advantage of accelerated computing.
On Windows, you’ll need to install the drivers yourself, just as you would normally do. We recommend using the GeForce or RTX Experience apps to keep your drivers up to date automatically.
On Linux, if you don’t have the drivers installed when you run the AI Workbench installer, they can be installed for you. However, if you already have drivers installed, AI Workbench does not change them.
CUDA
You don’t need to install CUDA separately, as it is expected to be installed in the project containers. This makes it easier to work with different CUDA versions without having to manage or change anything.
You may need to update your drivers to work with some projects due to CUDA version support. Each driver version has a maximum supported version of CUDA. If the version of CUDA in a project is newer than your driver’s maximum supported version, you’ll need to update to use GPU acceleration with that project.
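You can check the maximum CUDA version your driver supports by running nvidia-smi; the CUDA Version field in the output header reports it (the version numbers below are only an example):

  nvidia-smi
  # Header shows, e.g.: Driver Version: 550.54.14   CUDA Version: 12.4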
NVIDIA Container Toolkit
The NVIDIA Container Toolkit enables users to build and run GPU-accelerated containers. The toolkit includes a container runtime library and utilities to automatically configure containers to leverage NVIDIA GPUs.
AI Workbench utilizes the NVIDIA Container Toolkit to configure Docker on Linux systems to enable GPU acceleration. AI Workbench sets up and uses the nvidia runtime with Docker. When using Podman, AI Workbench uses the NVIDIA Container Toolkit to generate CDI specs on Linux and Windows/WSL.
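As an illustration of what this configuration enables, GPU-accelerated containers can then be launched directly; the container image tag below is just an example:

  # Docker with the nvidia runtime configured
  docker run --rm --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi

  # Podman with a generated CDI spec
  podman run --rm --device nvidia.com/gpu=all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi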
Docker
Docker is used by AI Workbench to build and run containers. On macOS and Windows, AI Workbench requires Docker Desktop, which is subject to Docker licensing requirements. On Ubuntu, AI Workbench uses the freely available Docker Engine.
Podman
Podman is another tool used by AI Workbench to build and run containers. AI Workbench runs Podman in rootless mode, which provides better isolation and security, though it may come with a slight performance hit in some cases. You have the choice of using either Docker or Podman.
Git
Git is used to manage and version the files in a project. On macOS, AI Workbench uses the Git binary available via the Xcode developer tools.
Git LFS
Git LFS can be used to manage and version larger data files and models. Git LFS has its limitations, but in most cases it can help keep a project repository performant and manageable without any extra services or configuration required.
Homebrew
Homebrew is used on macOS to install other third-party dependencies. If you already have Homebrew installed, AI Workbench uses it. If not, Homebrew is installed for you.
Working Directory
The AI Workbench server uses a working directory, sometimes called “the workbench directory,” where internal files are stored. The default path for the working directory is $HOME/.nvwb.
If you want to use a non-standard working directory, you can specify this path with the --workbench-dir flag when using the CLI. You can also specify a non-standard working directory when configuring a location in the desktop app.
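For example, a sketch of pointing the CLI at an alternate working directory; whether --workbench-dir is a global or per-command flag is not specified here, so treat the placement as an assumption:

  nvwb --workbench-dir $HOME/alt-nvwb activate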
The working directory is structured as outlined below:
bin/: Directory containing the AI Workbench binaries
install/: Directory containing metadata files that track installation and update operations; used by the uninstall process to remove only the dependencies that AI Workbench installed
integrations/: Directory containing metadata files for integrations
logs/: Directory containing server, CLI, and proxy logs
proc/: Directory containing pidfiles and log files for background processes started by AI Workbench (e.g. server, SSH tunnels)
project-runtime-info/: Directory containing files that track progress and output during the build and configuration stages
traefik-configs/: Directory where dynamic configuration files are written for the reverse proxy
cli.yaml: The CLI configuration file
config.yaml: The server configuration file
contexts.json: A file containing metadata about locations (contexts) configured on this host
inventory.json: A file containing metadata about the projects on this host
traefik.yaml: The reverse proxy static configuration file
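Given this layout, you can inspect the working directory directly from a shell; for example (exact file names under logs/ may vary):

  ls $HOME/.nvwb
  tail -f $HOME/.nvwb/logs/*.log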