User Guide (Latest)

The hardware section is for specifying hardware-specific settings for the Project. It currently supports:

Number of Desired GPUs

How many GPUs to request when starting this project.

Shared Memory

How much shared memory (in MB) should be set when starting the container (i.e., the size of /dev/shm).
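Container runtimes typically expose this setting as a shared-memory size option (for example, Docker's `--shm-size` flag). As a minimal, illustrative sketch (the helper name and flag mapping are assumptions, not AI Workbench's actual implementation):

```python
def shm_flag(shared_memory_mb: int) -> str:
    """Build a Docker-style --shm-size flag from the MB value
    configured in the hardware settings (illustrative only)."""
    if shared_memory_mb <= 0:
        raise ValueError("shared memory size must be positive")
    return f"--shm-size={shared_memory_mb}m"

print(shm_flag(1024))  # --shm-size=1024m
```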

To use a GPU in a Project, you must set the “Number of Desired GPUs” to a value between 1 and 8. Then, when your project container starts, AI Workbench checks its internal reference of available GPUs on the host. If enough GPUs are available, they are “reserved” and explicitly passed into the container. If there aren’t enough GPUs available, you have a few options:

  • Reduce the number of requested GPUs

  • Stop another project to free up GPUs

  • Start the container without GPUs

When your Project stops, any reserved GPUs will be released for use by another project.
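The reserve-on-start, release-on-stop bookkeeping described above can be sketched as follows. This is an illustrative model only, not AI Workbench's actual implementation; the class and method names are assumptions:

```python
class GpuTracker:
    """Illustrative sketch of reserve/release GPU bookkeeping."""

    def __init__(self, total_gpus: int):
        self.available = set(range(total_gpus))
        self.reserved = {}  # project name -> set of GPU indices

    def start(self, project: str, desired: int) -> set:
        if desired > len(self.available):
            # Caller must reduce the request, stop another
            # project, or start the container without GPUs.
            raise RuntimeError(f"only {len(self.available)} GPUs free")
        gpus = {self.available.pop() for _ in range(desired)}
        self.reserved[project] = gpus
        return gpus  # these indices are passed into the container

    def stop(self, project: str) -> None:
        # Stopping a project releases its GPUs for other projects.
        self.available |= self.reserved.pop(project, set())
```

For example, on a 4-GPU host a project requesting 2 GPUs starts fine, a second project requesting 3 is refused until the first stops and releases its reservation.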


On Windows, due to a current driver limitation, all GPUs are passed into all GPU-enabled Projects. This means that if you have a Project requesting 2 GPUs and you run it on a machine with 4 GPUs, you will see all 4 inside the container.

When creating a new Project, if the base environment has CUDA in it, AI Workbench will set the number of desired GPUs to 1 and the Shared Memory to 1GB. You are free to modify these values to fit your needs. If the base environment has no CUDA, both values will be 0.
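The default-setting behavior above can be summarized as a small lookup; this is a sketch assuming 1 GB = 1024 MB, and the function and field names are hypothetical:

```python
def default_hardware(base_env_has_cuda: bool) -> dict:
    """Defaults applied when creating a Project, per the
    description above (illustrative names; MB units for shm)."""
    if base_env_has_cuda:
        return {"desired_gpus": 1, "shared_memory_mb": 1024}
    return {"desired_gpus": 0, "shared_memory_mb": 0}

print(default_hardware(True))   # CUDA base: 1 GPU, 1 GB shm
print(default_hardware(False))  # no CUDA: both values are 0
```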

© Copyright 2024, NVIDIA Corporation. Last updated on Jun 10, 2024.