AI Workbench System

AI Workbench is a user-friendly tool designed to facilitate AI and data science development. It follows several key principles to help automate tedious tasks and let you get to work:

  • Portability and Reproducibility: Each Project in AI Workbench runs in its own container, ensuring consistent results across different environments.

  • Automation of Common Tools: AI Workbench integrates with common tools like git and Docker/Podman, automating workflows and streamlining development processes.

  • File-Centric: Most elements in AI Workbench are represented as files, making them easy to manipulate, move, and export as needed.

  • Flexibility: If you can’t perform a specific task directly within AI Workbench, you can often temporarily “break out” and perform the task manually using common tools or by manipulating files directly.

By adhering to these design principles, AI Workbench provides a versatile and efficient platform for AI and data science development, catering to both technically savvy users and those new to programming.

Service

The AI Workbench Service, also called the server, is delivered as a single binary that is cross-compiled for various operating systems and architectures. It runs on your host system as the logged-in user, except on Windows, where it runs in the WSL distro NVIDIA-Workbench.

To interact with your AI Projects, the server exposes a primary GraphQL API. By default, the server listens on http://localhost:10001. When the server is installed in a remote location, you can access it from your primary system via an SSH tunnel, with the server mapped to a different local port.

To explore the API, you can visit the API explorer at http://localhost:10001/v1/voyager. This user-friendly interface allows you to view available queries, mutations, and subscriptions.

For a more interactive experience, you can use the API playground, accessible at http://localhost:10001/v1/. This tool lets you test API queries and mutations directly.

To check the version of the AI Workbench server and confirm that it is running, visit http://localhost:10001/v1/version.
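
For example, you can confirm a local server with a quick request to the version endpoint, or reach a remote server through an SSH tunnel. This is a minimal sketch; the remote hostname and the local port 10002 are illustrative:

  # Confirm the local server is running and check its version
  curl http://localhost:10001/v1/version

  # Forward a remote server's API to local port 10002, then query it
  ssh -N -L 10002:localhost:10001 user@remote-host &
  curl http://localhost:10002/v1/version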

CLI

The AI Workbench Command Line Interface (CLI) provides a powerful way to interact with the AI Workbench environment. Whether you’re a seasoned developer or just starting out, the CLI is designed to be user-friendly and accessible.

Interactive and Non-Interactive Modes

The CLI provides two modes of operation: interactive and non-interactive. This flexibility makes it suitable for various use cases. When you run a command without specifying all the required variables, the CLI will automatically switch to interactive mode. It will prompt you to provide the missing information, guiding you through the process. Conversely, if you specify all input parameters explicitly, the CLI will skip interactive prompting and execute the command directly. This allows you to script commands and automate tasks efficiently.
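
As a minimal sketch of the two modes (the context and project names, and the exact flag combination, are illustrative):

  # Interactive: the CLI prompts for the missing context and project
  nvwb open

  # Non-interactive: all inputs specified explicitly, suitable for scripts
  nvwb open --context my-laptop --project my-project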

Rich Text Output or JSON Formatted Output

The CLI provides multiple output formats to suit your needs. When you work in a shell, the rich text output automatically detects and adapts to your terminal’s capabilities. If you’re scripting or interacting with the CLI programmatically, the JSON formatted output is ideal. By default, the CLI uses rich text output, but you can switch formats with the “-o” flag.
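
For instance, assuming json is the identifier the -o flag accepts for JSON output:

  # Default rich text output
  nvwb list context

  # JSON output for scripting, piped to a tool such as jq for post-processing
  nvwb list context -o json | jq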

Invoking the CLI

The CLI is invoked via a wrapper bash function that is installed by AI Workbench. This wrapper provides the nvwb command and allows the CLI to manipulate your shell’s PS1 environment variable to modify the prompt when needed. If you are not using one of the currently supported shells (Bash or Zsh), you may need to complete some manual configuration.
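
Because nvwb is a shell function rather than a bare binary, you can verify how it is being invoked:

  # Should report that nvwb is a shell function, not a plain executable
  type nvwb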

Conventions

Attention

In the CLI, “Locations” are called “Contexts”. In the future this terminology will be unified, and the nvwb list|create|delete context commands will be deprecated.

Most commands in the AI Workbench CLI require a specified context (location) and project to function properly. However, some commands do work globally.

When scripting, you will use the --context or -c flag to specify the context and the --project or -p flag to specify the project explicitly.

Typically, though, you will be using the CLI interactively. Here you can use the nvwb activate <context_name> command to “connect” the shell session to the context, and all subsequent commands will automatically refer to it. Once a context is activated, you can use the nvwb open command to open a project; all subsequent commands will then automatically refer to that project as well.

When you are done with your task, you can simply close your terminal, or run nvwb deactivate --shutdown to stop the AI Workbench server to which you are connected.
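
A typical interactive session might look like the following (the context name is illustrative):

  # Connect the shell session to a context
  nvwb activate my-laptop

  # Open a project; with no arguments, the CLI prompts you to pick one
  nvwb open

  # ... work on the project ...

  # Stop your Projects and the server when you are finished
  nvwb deactivate --shutdown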

Important

If you do not stop your Projects and simply close your terminal, Project containers may be left running and consuming system resources. nvwb deactivate --shutdown requires that all Projects be stopped before the server shuts down, which makes it a useful way to ensure everything is off when you are done working.

Getting Started

To discover the available commands and learn how to use the CLI, simply type nvwb -h. This will display a list of commands and their usage, making it easy to get started with the CLI.

Desktop app

The AI Workbench Desktop app provides a graphical user interface (GUI) that allows you to easily clone, create, manipulate, and run AI projects in a user-friendly way.

Integrated Installer

The AI Workbench installer ensures that your system is always configured correctly. If an update or change occurs (e.g. you uninstall your container runtime), you may see the installer open when AI Workbench starts to help you fix or update your system configuration.

Window and Location Conventions

Each AI Workbench window is connected to a single location (local or remote). You can have multiple windows open connected to multiple locations, and you can have multiple windows open connected to the same location. Additionally, you can have multiple projects open in a given window. However, you cannot have the same project open in the same location in two windows.

Managing AI Workbench and Project Containers

When you open a project in a window, it opens in a new tab. Before a project tab can close, the project’s container must be stopped; if the project is running when you close the tab, you will be prompted to stop it.

When you close a window, any open projects must be stopped before their tabs can close, and then the window itself can close. If it is the only window connected to a location, the associated AI Workbench server is stopped when the window exits.

Credential Manager

The Credential Manager is a small, simple binary that provides the ability to create and store credentials. It integrates with your system keyring on macOS, Windows, and Linux (via D-Bus). The Credential Manager contains the logic to perform OAuth2 login flows and automatically refreshes OAuth tokens in the background when applicable.

On Windows, this application runs on the native Windows side, while the service and CLI will run in the WSL distro.

When connecting to remote locations, the CLI or Desktop app automatically reads your credentials from the Credential Manager and pushes them into the remote server’s memory.

Additionally, the Credential Manager can also act as a Docker credential helper to seamlessly provide credentials stored in AI Workbench to Docker.

This feature lets you manage your credentials efficiently and securely, logging in only once regardless of the location you are working in.

NVIDIA GPU Drivers

If you have NVIDIA GPUs, you’ll need to have NVIDIA GPU drivers installed to take advantage of accelerated computing.

On Windows, you’ll need to install the drivers yourself, just as you would normally do. We recommend using the GeForce or RTX Experience apps to keep your drivers up to date automatically.

On Linux, if you don’t have the drivers installed when you run the AI Workbench installer, they can be installed for you. However, if you already have drivers installed, AI Workbench will not change them.

CUDA

You don’t need to install CUDA separately, as it is expected to be installed in the Project containers. This makes it easier to work with different CUDA versions without having to manage or change anything.

You may need to update your drivers to work with some Projects due to CUDA version support. Each driver version has a maximum supported version of CUDA. If the version of CUDA in a Project is newer than your driver’s maximum supported version, you’ll need to update to use GPU acceleration with that Project.
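
To see which driver you have installed and the newest CUDA version it supports, you can run nvidia-smi, which reports both in its header:

  # The header shows the driver version and its maximum supported CUDA version
  nvidia-smi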

NVIDIA Container Toolkit

The NVIDIA Container Toolkit enables users to build and run GPU-accelerated containers. The toolkit includes a container runtime library and utilities to automatically configure containers to leverage NVIDIA GPUs.

AI Workbench uses the NVIDIA Container Toolkit to configure Docker on Linux systems for GPU acceleration. Currently, AI Workbench sets up and uses the nvidia runtime with Docker; in the near future it will enable GPU use through the Container Device Interface (CDI).

When using Podman, AI Workbench uses the NVIDIA Container Toolkit to generate CDI specs on Linux and Windows/WSL.
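
AI Workbench handles this setup for you, but it corresponds roughly to the following manual NVIDIA Container Toolkit commands. This is a sketch, not the exact sequence AI Workbench runs:

  # Configure Docker to use the nvidia runtime
  sudo nvidia-ctk runtime configure --runtime=docker
  sudo systemctl restart docker

  # Generate a CDI specification for use with Podman
  sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml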

Docker

Docker is used by AI Workbench to build and run containers. On macOS and Windows, AI Workbench requires Docker Desktop, which may be subject to Docker licensing requirements. On Ubuntu, AI Workbench uses the freely available Docker Engine and can install it for you automatically. However, if you’re using Docker Desktop, you’ll need to install it yourself. In the future, we plan to streamline this process further.

Podman

Podman is another tool AI Workbench can use to build and run containers; you have the choice of using either Docker or Podman. AI Workbench runs Podman in rootless mode, which provides better isolation and security, though it may come with a slight performance hit in some cases.

Git

Git is used to manage and version the files in a Project. On macOS, AI Workbench uses the Git binary provided by the Xcode developer tools.

Git LFS

Git LFS can be used to manage and version larger data files and models. Git LFS has its limitations, but in most cases it can keep a Project repository performant and manageable without any extra services or configuration.
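
For example, inside a Project repository you might track large model weights with Git LFS (the file pattern is illustrative):

  # Track all .bin files with LFS; the pattern is recorded in .gitattributes
  git lfs track "*.bin"
  git add .gitattributes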

Homebrew

Homebrew is used on macOS to install other 3rd Party Dependencies. If you already have Homebrew installed, AI Workbench will simply use it; if not, Homebrew will be installed for you.

Working Directory

The AI Workbench server uses a working directory, sometimes called “the workbench directory,” where internal files are stored. The default path for the working directory is $HOME/.nvwb.

If you want to use a non-standard working directory, you can specify this path with the --workbench-dir flag when using the CLI. You can also specify a non-standard working directory when configuring a Location in the desktop app.
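
For example (the flag placement and path are illustrative):

  # Point the CLI at a non-default working directory
  nvwb --workbench-dir $HOME/nvwb-alt list context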

The working directory is structured as outlined below:

  • bin/: Directory containing the AI Workbench binaries

  • install/: Directory containing metadata files that track installation/update operations; used during uninstall to remove only the dependencies AI Workbench itself installed

  • integrations/: Directory containing metadata files for integrations

  • logs/: Directory containing server, CLI, and proxy logs

  • proc/: Directory containing pidfiles and log files for background processes started by AI Workbench (e.g. server, SSH tunnels)

  • project-runtime-info/: Directory containing files that track progress and output during the build and configuration stages

  • traefik-configs/: Directory where dynamic configuration files are written for the reverse proxy

  • cli.yaml: The CLI configuration file

  • config.yaml: The server configuration file

  • contexts.json: A file containing metadata about Contexts/Locations configured on this host

  • inventory.json: A file containing metadata about the Projects on this host

  • traefik.yaml: The reverse proxy static configuration file
