Glossary#
This glossary provides definitions for key terms and concepts used throughout NVIDIA AI Workbench.
- API Key#
A unique identifier used to authenticate and authorize access to an API. In AI Workbench, API keys are used for integrations with services like NGC and other platforms. Also called Personal Access Token (PAT) in some contexts.
- Application#
Software that runs within a project container. Applications can be web apps (like JupyterLab or TensorBoard), processes, native apps, or multi-container compose applications. Each application is configured with start/stop commands and other management options.
- apt.txt#
A file in the project root directory that specifies system packages to be installed via the `apt` package manager during container build. Each package is listed on a separate line, as in the sketch below.
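For illustration only, a hypothetical `apt.txt` installing two common Debian packages (the package names are examples, not requirements):

```
# apt.txt: one Debian package per line, installed with apt during build
curl
build-essential
```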
- Base Environment#
The foundation container image used as the starting point for a project container. Base environments can be NVIDIA-provided default containers or custom containers that meet AI Workbench requirements.
- Base Image#
The container image that serves as the foundation for building a project container. It contains the operating system, runtime environment, and pre-installed software packages.
- Brev#
An NVIDIA cloud service integration that provides on-demand GPU instances. Brev can be configured as a remote location in AI Workbench for cloud-based development.
- BYOC (Bring Your Own Container)#
The ability to use a custom container image as the base environment for an AI Workbench project, provided it meets the technical requirements including proper image labels and Debian-based OS.
- CDI (Container Device Interface)#
A specification used by NVIDIA Container Toolkit to provide GPU access to containers when using Podman on Linux and Windows/WSL.
- CLI (Command Line Interface)#
A text-based interface for interacting with AI Workbench. The CLI provides all the functionality of the Desktop App and can be used for scripting and automation. Invoked using the `nvwb` command.
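A hedged sketch of typical CLI usage; the subcommands below reflect common documented patterns, but run `nvwb help` for the authoritative list on your version:

```bash
# List available locations (contexts) and activate one
nvwb list contexts
nvwb activate my-remote-location   # "my-remote-location" is a placeholder name

# Show all supported subcommands
nvwb help
```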
- Compose Application#
A multi-container application defined by a Docker Compose file. These applications run multiple services in separate containers that can communicate with each other.
- Compose File#
A YAML file (typically `compose.yaml` or `docker-compose.yaml`) that defines the services, networks, and volumes for a multi-container application.
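A minimal, generic compose file for illustration (the service names and images are placeholders, not Workbench defaults):

```yaml
# compose.yaml: two services that can reach each other by service name
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
  cache:
    image: redis:7
```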
- Container#
An isolated, lightweight, portable software package that includes everything needed to run an application: code, runtime, system tools, libraries, and settings.
- Container Registry#
A repository service for storing and distributing container images. Examples include NVIDIA NGC, Docker Hub, GitHub Container Registry, and GitLab Container Registry.
- Container Runtime#
The software responsible for running containers. AI Workbench supports Docker and Podman as container runtimes.
- Context#
In the CLI, a context refers to a location (local or remote) where AI Workbench is installed. Users activate a context to work with projects in that location.
- Credential Manager#
A component of AI Workbench that securely stores and manages authentication credentials for integrations. It integrates with system keyrings and handles OAuth flows.
- CUDA#
NVIDIA’s parallel computing platform and programming model. Different versions of CUDA are supported in various base environments for GPU-accelerated computing.
- Deep Link#
A URL that points to a specific resource within AI Workbench, such as a project or application, allowing direct navigation and sharing.
- Desktop App#
The graphical user interface (GUI) for AI Workbench, installed locally on Windows (with WSL), macOS, or Ubuntu. It provides the primary interface for managing locations, projects, and applications.
- Development Environment#
The containerized environment where code development and execution take place. Each project has its own isolated development environment.
- Docker#
A containerization platform used by AI Workbench to build and run containers. Required on macOS and Windows (Docker Desktop), or available as Docker Engine on Ubuntu.
- Environment Variables#
Key-value pairs that configure software behavior within containers. AI Workbench supports both non-sensitive variables (stored in `variables.env`) and sensitive variables (for secrets like API keys).
- Git LFS (Large File Storage)#
A Git extension for versioning large files efficiently. AI Workbench automatically configures certain directories (like `data/` and `models/`) to use Git LFS.
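Git LFS tracking is conventionally recorded in a `.gitattributes` file; a generic sketch of that standard mechanism follows (the exact patterns AI Workbench writes may differ):

```
# .gitattributes: route matching paths through Git LFS
data/** filter=lfs diff=lfs merge=lfs -text
models/** filter=lfs diff=lfs merge=lfs -text
```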
- GitHub#
A web-based Git repository hosting service. AI Workbench provides integration with GitHub for project collaboration and version control.
- GitLab#
A web-based Git repository hosting service. AI Workbench supports both GitLab.com and self-hosted GitLab instances.
- GPU (Graphics Processing Unit)#
A specialized processor designed for parallel computation, commonly used for AI and machine learning workloads. AI Workbench can configure projects to use one or more GPUs.
- GraphQL API#
The primary API exposed by the AI Workbench Service for client communication. Available at `http://localhost:10001` by default.
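Any GraphQL endpoint answers the standard introspection query, so a quick connectivity check might look like the sketch below; the route is an assumption (only the default port is documented here):

```bash
# Hypothetical probe of the local Service's GraphQL endpoint
curl -s -X POST http://localhost:10001 \
  -H "Content-Type: application/json" \
  -d '{"query": "{ __schema { queryType { name } } }"}'
```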
- Host Mount#
A mount type that shares an existing directory from the host machine with the project container, allowing file access between host and container.
- Integration#
A configured connection between AI Workbench and external services like GitHub, GitLab, NGC, or Brev. Integrations handle authentication and API access.
- JupyterLab#
A web-based interactive development environment commonly used for data science and machine learning. Often pre-installed in AI Workbench base environments.
- Location#
Any system where AI Workbench is installed and can run projects. Includes local systems (with Desktop App) and remote systems (with CLI). Previously called “contexts” in some documentation.
- Local Workbench#
The AI Workbench Desktop App installed on your local machine, which serves as the primary user interface for managing all locations.
- Mount#
A mechanism for sharing file systems between the host and container. AI Workbench supports project mounts, host mounts, volume mounts, and temp mounts.
- Multi-Container Environment#
A development environment that uses multiple containers working together, typically defined by a Docker Compose file.
- Native App#
An application that runs on the host system (outside the container) but has access to project files and containers. VS Code is an example of a native app.
- NGC (NVIDIA GPU Cloud)#
NVIDIA’s cloud platform for GPU-optimized software. AI Workbench can pull base images from NGC and requires an API key for accessing private resources.
- NVIDIA Container Toolkit#
Software that enables containers to use NVIDIA GPUs. Required for GPU acceleration in AI Workbench projects.
- OAuth#
An authorization protocol used for secure authentication with external services. Many AI Workbench integrations support OAuth flows.
- PAT (Personal Access Token)#
A token-based authentication method used with Git hosting services and other APIs as an alternative to OAuth.
- Podman#
An alternative container runtime to Docker, supported by AI Workbench. Runs in rootless mode for better security and isolation.
- postBuild.bash#
A script that runs after packages are installed during container build. Executes as the container user with passwordless sudo access.
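A hedged sketch of what a `postBuild.bash` might contain, given the behavior described above (runs as the container user with passwordless sudo); the packages and commands are illustrative only:

```bash
#!/bin/bash
# postBuild.bash: runs after package installation during container build
set -e

# System-level changes use passwordless sudo (htop is an example package)
sudo apt-get update && sudo apt-get install -y --no-install-recommends htop

# User-level setup runs as the container user
# (example package; assumes pip is available in the base image)
pip install --no-cache-dir rich
```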
- preBuild.bash#
A script that runs before packages are installed during container build. Executes as the container user with passwordless sudo access.
- Project#
A Git repository enhanced with AI Workbench metadata that defines a complete development environment. Contains code, data, models, environment configuration, and version history.
- Project Container#
The main container associated with each project, built from a base image and customized with project-specific packages and configuration.
- Project Mount#
The default mount that maps the project repository from the host into the container at `/project/`. Created automatically for every project.
- Project Specification#
The metadata file (`.project/spec.yaml`) that defines a project’s environment, applications, runtime configuration, and other settings.
- Remote Location#
A system other than your local machine where AI Workbench CLI is installed and can run projects. Accessed via SSH from the local Desktop App.
- Remote Workbench#
AI Workbench CLI installed on a remote system, providing additional compute resources while being managed from the local Desktop App.
- requirements.txt#
A file in the project root directory that specifies Python packages to be installed via `pip` during container build. Each package is listed on a separate line.
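For illustration, a hypothetical `requirements.txt` using standard pip syntax (the packages and version pins are examples only):

```
# requirements.txt: one pip requirement per line
numpy==1.26.4
pandas>=2.0
scikit-learn
```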
- Reverse Proxy#
A service that routes network requests to appropriate destinations. AI Workbench uses Traefik as a reverse proxy to handle application access.
- Service#
The AI Workbench server component that runs as a single binary and exposes the GraphQL API. Handles project management, container operations, and client communication.
- Shared Memory#
Memory that can be accessed by multiple processes. Configurable in AI Workbench projects for applications that require inter-process communication.
- spec.yaml#
The project specification file located at `.project/spec.yaml` that contains metadata defining the project’s environment, applications, and runtime configuration.
- SSH (Secure Shell)#
A protocol used for secure remote access to systems. AI Workbench uses SSH tunnels to connect local clients to remote locations.
- SSH Tunnel#
An encrypted connection that allows secure communication between local and remote AI Workbench instances.
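As a generic illustration of the mechanism (not the exact command AI Workbench runs), a local port can be forwarded to a remote service over SSH like this:

```bash
# Forward local port 10001 to the same port on a remote host;
# the hostname, user, and ports are placeholders.
ssh -N -L 10001:localhost:10001 user@remote-host
```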
- TensorBoard#
A web application for visualizing machine learning experiments and metrics. Often included in AI Workbench base environments.
- Temp Mount#
A mount type that creates temporary storage in the container, reset every time the container starts.
- variables.env#
A file containing non-sensitive environment variables as key-value pairs. Variables are set in the container at runtime but not built into the container image.
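For illustration, a hypothetical `variables.env` with non-sensitive settings (names and values are examples; secrets should not go here):

```
# variables.env: KEY=VALUE pairs applied to the container at runtime
MODEL_NAME=example-model
LOG_LEVEL=info
```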
- Volume Mount#
A mount type that creates persistent storage managed by the container runtime, surviving container restarts and rebuilds.
- VS Code#
Visual Studio Code, a popular code editor that can be configured as a native application in AI Workbench projects.
- Web App#
An application that provides a web interface accessible through a browser. Examples include JupyterLab, TensorBoard, and custom web applications.
- Workbench Directory#
The working directory used by AI Workbench (default: `$HOME/.nvwb`) containing binaries, logs, project metadata, and configuration files.
- WSL (Windows Subsystem for Linux)#
A Windows feature that allows running Linux environments. Required for AI Workbench installation on Windows, specifically the Ubuntu distribution.