User Guide (Latest)

Hardware

The Hardware section specifies hardware-specific settings for the project. It currently supports:

Number of Desired GPUs

How many GPUs to request when starting this project.

Shared Memory

How much shared memory (in MB) to allocate when starting the container (i.e., the size of /dev/shm).
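To verify the setting took effect, you can inspect the size of /dev/shm from inside the running container. A minimal sketch (this helper is not part of AI Workbench; it just reads the mount size via standard filesystem calls):

```python
import os

def shm_size_mb(path="/dev/shm"):
    """Return the size of the shared-memory mount in MB.

    Uses statvfs: total bytes = fragment size * total blocks.
    """
    st = os.statvfs(path)
    return st.f_frsize * st.f_blocks // (1024 * 1024)

print(shm_size_mb())  # should roughly match the value configured in the Hardware section
```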

To use a GPU in a project, you must set the “Number of Desired GPUs” to a value between 1 and 8. Then, when your project container starts, AI Workbench checks its internal reference of available GPUs on the host. If enough GPUs are available, they are “reserved” and explicitly passed into the container. If there aren’t enough GPUs available, you have a few options:

  • Reduce the number of requested GPUs

  • Stop another project to free up GPUs

  • Start the container without GPUs

When your project stops, any reserved GPUs are released for use by another project.
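The reserve-on-start, release-on-stop behavior described above can be sketched as a toy bookkeeping model. This is an illustration of the flow, not AI Workbench's actual implementation; the class and method names are hypothetical:

```python
class GpuPool:
    """Toy model of the GPU reserve/release flow (illustrative only)."""

    def __init__(self, total):
        self.free = set(range(total))   # GPU indices not reserved by any project
        self.reserved = {}              # project name -> set of reserved GPU indices

    def start(self, project, desired):
        """Reserve `desired` GPUs for a project, or return None if not enough are free."""
        if desired > len(self.free):
            # Caller must reduce the request, stop another project, or start without GPUs.
            return None
        gpus = {self.free.pop() for _ in range(desired)}
        self.reserved[project] = gpus
        return gpus

    def stop(self, project):
        """Release the project's GPUs back to the pool for use by other projects."""
        self.free |= self.reserved.pop(project, set())
```

For example, on a 4-GPU host, a project requesting 2 GPUs succeeds, a second project requesting 3 then fails until the first is stopped.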

Important

On Windows, due to a driver limitation, all GPUs are currently passed into all GPU-enabled projects. This means that if a project requests 2 GPUs and you run it on a machine with 4 GPUs, you will see all 4 GPUs inside the container.

When creating a new project, if the base environment includes CUDA, AI Workbench sets the number of desired GPUs to 1 and the Shared Memory to 1GB. You are free to modify these values to fit your needs. If the base environment does not include CUDA, both values are set to 0.
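The defaulting rule above is simple enough to state as code. A minimal sketch (the function name and dictionary keys are hypothetical, and 1GB is expressed as 1024 MB):

```python
def default_hardware(base_env_has_cuda):
    """Hardware defaults applied at project creation, per the rule described above.

    CUDA in the base environment -> 1 GPU, 1GB (1024 MB) shared memory;
    otherwise both values default to 0.
    """
    if base_env_has_cuda:
        return {"desired_gpus": 1, "shared_memory_mb": 1024}
    return {"desired_gpus": 0, "shared_memory_mb": 0}
```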

© Copyright 2024, NVIDIA Corporation. Last updated on Sep 17, 2024.