NVIDIA vGPU for Compute Installation#
Install NVIDIA vGPU for Compute to enable GPU virtualization. The procedure has three phases: host setup, guest VM driver installation, and AI workload deployment. Each phase is documented on its own page below.
Installation Overview#
1. Verify Prerequisites: Confirm hardware, BIOS settings, and licensing requirements.
2. Install the NGC CLI: Download software from the NVIDIA NGC Catalog.
3. Install the Virtual GPU Manager: Deploy it on the hypervisor host (VMware, KVM, or Nutanix).
4. Verify Fabric Manager: Confirm Fabric Manager, included in the NVIDIA AI Enterprise drivers, is present on HGX multi-GPU configurations.
5. Install the vGPU Guest Driver: Deploy it in each virtual machine.
6. Configure Licensing: Connect VMs to the NVIDIA License System.
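Before step 3 on a Linux/KVM host, it is worth confirming that the IOMMU prerequisite from step 1 is actually enabled. The check below is a sketch, not NVIDIA tooling: the sysfs path is the standard kernel location, and the function name is illustrative.

```shell
#!/usr/bin/env bash
# Sketch: prerequisite check for a Linux/KVM hypervisor host before
# installing the Virtual GPU Manager. Not NVIDIA tooling; the function
# name is illustrative and the path is the standard sysfs location.

# IOMMU (Intel VT-d / AMD-Vi) must be enabled in the BIOS and kernel.
# When it is, the kernel populates /sys/kernel/iommu_groups with one
# directory per IOMMU group.
iommu_enabled() {
  local groups_dir="${1:-/sys/kernel/iommu_groups}"
  [ -d "$groups_dir" ] && [ -n "$(ls -A "$groups_dir" 2>/dev/null)" ]
}

if iommu_enabled; then
  echo "IOMMU groups present: OK"
else
  echo "No IOMMU groups: enable VT-d/AMD-Vi in BIOS and add intel_iommu=on (or amd_iommu=on) to the kernel command line"
fi
```

On VMware ESXi or Nutanix AHV the equivalent check happens in the hypervisor's own management tooling rather than sysfs.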
Refer to the NVIDIA AI Enterprise Product Support Matrix for supported platforms and versions.
Installation Tasks#
| Subpage | Audience | Use when you need |
|---|---|---|
| Host setup | Hypervisor administrator | To verify prerequisites and BIOS, install the NGC CLI, deploy the Virtual GPU Manager on the hypervisor, and confirm Fabric Manager on HGX servers. |
| Guest VM driver installation | Guest VM owner | To install the vGPU Guest Driver in each VM (Ubuntu, Red Hat, Windows, or other Linux), then license the VM and configure profiles. |
| AI workload deployment | AI / DS / workload deployer | To install the NVIDIA GPU Operator (Bash script) and pull NVIDIA AI Enterprise application containers via Docker, Podman, or Cloud Native Stack on licensed guest VMs. |
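For the guest VM owner's licensing step, the NVIDIA License System client is configured by placing a client configuration token where the guest driver reads it. The sketch below assumes the documented default directory on Linux guests; the function name is illustrative, and Windows guests use a different path.

```shell
#!/usr/bin/env bash
# Sketch: verify that an NVIDIA License System client configuration token
# is in place inside a Linux guest VM. The default directory below is the
# one the Linux guest driver reads; Windows guests use a different path.

check_license_token() {
  local token_dir="${1:-/etc/nvidia/ClientConfigToken}"
  # Tokens downloaded from the License System portal are named
  # client_configuration_token_<timestamp>.tok
  if ls "$token_dir"/client_configuration_token_*.tok >/dev/null 2>&1; then
    echo "license token present in $token_dir"
    return 0
  fi
  echo "no license token found in $token_dir"
  return 1
}
```

After placing the token, restart the `nvidia-gridd` service in the guest and confirm the license state with `nvidia-smi -q | grep -i license`.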
Next Steps#
After installing the vGPU Manager and guest drivers:
- Configure vGPU for Compute: create vGPU devices (MIG-backed or time-sliced) and assign them to VMs.
- License your vGPU VMs: configure the NVIDIA License System so VMs run at full performance.
- Install the NVIDIA GPU Operator: deploy the GPU Operator for container workloads on licensed VMs.
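The GPU Operator step is typically a Helm deployment. The commands below use the public NVIDIA Helm repository and default release names as a sketch; NVIDIA AI Enterprise deployments pull the enterprise chart from NGC with an API key instead, so verify the repository URL and flags against the GPU Operator documentation for your version.

```shell
# Sketch: deploying the NVIDIA GPU Operator with Helm on a cluster of
# licensed guest VMs. Repo URL and names are the public defaults; the
# NVIDIA AI Enterprise chart on NGC requires authentication instead.
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update

# Install into its own namespace; the operator then manages the driver
# containers, device plugin, and monitoring components on GPU nodes.
helm install gpu-operator nvidia/gpu-operator \
  --namespace gpu-operator --create-namespace
```

This is a deployment fragment intended to be run against a live Kubernetes cluster, not a standalone script.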