Advanced GPU Configuration (Optional)
Added in version 2.0.
Compute workloads can benefit from dedicated GPU partitions. GPU partitioning allows a single GPU to be shared across small, medium, and large workloads, which makes it a practical option for Deep Learning. For example, training and inference workflows that use smaller datasets remain highly dependent on the size of the data and model, and users may need to reduce batch sizes so a job fits within a smaller partition.
The following graphic illustrates a multi-tenant GPU partitioning use case in which multiple users share a single A100 (40GB). In this use case, one A100 serves several workloads at the same time, such as Deep Learning training, fine-tuning, inference, Jupyter Notebooks, and debugging.
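As a minimal sketch of how this sharing looks from inside a guest, the snippet below pins an individual workload to a single MIG partition by setting `CUDA_VISIBLE_DEVICES` to that partition's MIG device identifier before the framework initializes CUDA. The UUID string and the use of PyTorch are illustrative assumptions; list the actual MIG device identifiers in your guest with `nvidia-smi -L`.

```python
import os

# Hypothetical MIG device identifier; list the real ones with `nvidia-smi -L`.
MIG_DEVICE = "MIG-00000000-1111-2222-3333-444444444444"

# CUDA_VISIBLE_DEVICES must be set before the CUDA context is created,
# i.e. before importing a framework such as PyTorch.
os.environ["CUDA_VISIBLE_DEVICES"] = MIG_DEVICE

import torch  # noqa: E402

# The single MIG instance now appears to this workload as an ordinary GPU
# (device 0), isolated from the other tenants sharing the same physical A100.
print(torch.cuda.device_count())        # expected: 1
print(torch.cuda.get_device_name(0))
```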
One of the features introduced to NVIDIA vGPU when VMs use MIG-backed virtual GPUs is support for differently sized (heterogeneous) GPU instances. NVIDIA vGPU software supports MIG-backed GPU instances only with NVIDIA C-series vGPU types on Linux guest operating systems. To support GPU instances with NVIDIA vGPU, a GPU must be configured with MIG mode enabled, and GPU instances must be created and configured on the physical GPU.
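As a minimal sketch of the verification step, the snippet below uses the nvidia-ml-py (pynvml) bindings to confirm that MIG mode is enabled on a physical GPU and to list the GPU instances that exist on it. It assumes the instances were already created by an administrator (for example with nvidia-smi) and that the NVML library is available where the script runs.

```python
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first physical GPU

    # Current vs. pending MIG mode; a GPU reset applies a pending change.
    current, pending = pynvml.nvmlDeviceGetMigMode(handle)
    print("MIG mode - current:", current, "pending:", pending)
    print("MIG enabled:", current == pynvml.NVML_DEVICE_MIG_ENABLE)

    # Walk the possible MIG device slots and report the instances that exist.
    for i in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(handle)):
        try:
            mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(handle, i)
        except pynvml.NVMLError:
            continue  # no MIG device in this slot
        print("MIG instance:", pynvml.nvmlDeviceGetUUID(mig))
finally:
    pynvml.nvmlShutdown()
```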
More details on MIG can be found in the NVIDIA Multi-Instance GPU User Guide.