Applications

This is a high-level conceptual overview of NVIDIA AI Workbench Applications. For a guide to features, see the corresponding How-To topics for the Desktop and the CLI, and for detailed reference information, see the Deep Dive topics.

  • In AI Workbench, an application is any code, whether a web app, a process, or a native application, that a user can ask AI Workbench to run in the project environment while working in a Project.

  • In AI Workbench, the running of the Project container is separated from the running of application code within the container.

  • Applications are defined and configured in the project spec file; see the Project Spec Deep Dive for details.

  • AI Workbench uses a reverse proxy to allow the user to access multiple web applications running at the same time in the Project container.

  • webapp: an application that exposes an HTTP interface, either an HTML/JS UI or an HTTP-based API. For a web application, AI Workbench gets the URL of the application, configures the reverse proxy to allow access to it, and optionally opens the URL in the user’s browser.

  • process: an application that runs in the Project container but doesn’t expose or require any network access, for example a background service running in the container or a one-off command.

  • native: an application defined in the Project but launched outside of the Project container, on the user’s machine, for example opening VS Code or another editor. A sketch of how these types are declared in the spec file follows this list.
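As a rough sketch, and assuming the application type is declared with a class field on each entry under execution.apps in spec.yaml (check the Project Spec Deep Dive for the exact field names and required fields), the three types might be declared like this; the names and apps below are hypothetical placeholders:

    execution:
      apps:
        - name: my-dashboard        # hypothetical web app
          class: webapp             # exposes an HTTP interface via the reverse proxy
        - name: metrics-exporter    # hypothetical background process
          class: process            # runs in the container, no network interface exposed
        - name: vs-code             # launched on the host, outside the Project container
          class: native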

When creating a project from scratch, AI Workbench automatically provisions a couple of applications for the user by default, typically JupyterLab and TensorBoard, described below.

JupyterLab

JupyterLab is included on many of the default base containers offered by AI Workbench. It provides a development environment in which users can build and test their project code.

TensorBoard

TensorBoard is included on many of the default base containers offered by AI Workbench. It provides a visualization tool for inspecting training runs, for example tracking metrics such as loss and accuracy over time.

Development environments typically rely on or include applications, with JupyterLab and TensorBoard being common examples. Workbench Projects are designed to provide containerized web applications like these.

Installed applications must be properly configured for AI Workbench to manage and provide them. The relevant metadata is in the execution section of spec.yaml.

Some example fields are:

  • Application name, e.g. JupyterLab

  • Start and stop commands

  • A health check command

  • The port it is available on

For a deep dive into what each field means, visit the Project Spec Deep Dive section.
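To put those fields together, here is a hedged sketch of what a single web application entry under the execution section of spec.yaml might look like. The commands and health check are placeholders, not the defaults shipped with any base container, and the field names should be verified against the Project Spec Deep Dive:

    execution:
      apps:
        - name: jupyterlab
          class: webapp
          # Placeholder commands -- real projects use the commands appropriate
          # to how the application was installed in the container.
          start_command: jupyter lab --no-browser --ip 0.0.0.0 --port 8888
          stop_command: jupyter lab stop 8888
          health_check_command: curl -sf http://localhost:8888 > /dev/null
          webapp_options:
            port: "8888"    # port the application listens on inside the container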

After the Project container has been started, the user can start and stop applications that are defined in either the environment base section or the applications section of the Project Spec file.

AI Workbench then uses a reverse proxy to allow the user to access multiple web applications running at the same time in the Project container. The reverse proxy gives each application a different URL prefix. When the application commands are executed, the environment variable PROXY_PREFIX is set to the URL prefix assigned by the reverse proxy. The application commands can use this environment variable to set the prefix path that the application serves on.
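For example, an application that can serve under a base path can read PROXY_PREFIX in its start command. The sketch below uses JupyterLab’s base-URL flag as an illustration; the exact option depends on the application and version, and is an assumption here rather than a default shipped with AI Workbench:

    execution:
      apps:
        - name: jupyterlab
          class: webapp
          # PROXY_PREFIX is set by AI Workbench before this command runs; passing it
          # as the base URL makes the app generate links that work behind the proxy.
          # (--ServerApp.base_url is current Jupyter; older releases use --NotebookApp.base_url.)
          start_command: jupyter lab --no-browser --ip 0.0.0.0 --port 8888 --ServerApp.base_url=$PROXY_PREFIX
          webapp_options:
            port: "8888"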

For applications that cannot (or don’t want to) set the prefix path they run on, the configuration field webapp_options.proxy.trim_prefix in the Project Spec instructs the reverse proxy to remove the prefix from the path before forwarding the request to the application.
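For such an application, a minimal sketch of the relevant configuration might look like the following, with a hypothetical application name and port; check the Project Spec Deep Dive for exact placement and defaults:

    execution:
      apps:
        - name: my-root-only-app    # hypothetical app that always serves from /
          class: webapp
          webapp_options:
            port: "7860"
            proxy:
              trim_prefix: true     # reverse proxy strips the URL prefix before forwarding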
