Comprehensive Knowledge Base on vGPU Features Across Hypervisors

Overview

This document provides a comprehensive overview of virtual GPU (vGPU) features across major hypervisors so that customers can compare and understand vGPU capabilities when planning deployments. As demand for virtualized graphics and compute grows, understanding these features is essential for effective workload management and optimal performance. The document includes detailed feature explanations and a comparison chart highlighting general vGPU compatibility across hypervisors. For hypervisor-specific questions, customers should consult the respective vendors directly.

The chart below provides a quick reference to vGPU feature compatibility across major hypervisors, helping customers assess which platforms support key vGPU functionalities for their deployment needs.

Table 2. Hypervisor Comparison Chart

vGPU Features (table columns):

  Multi-vGPU [3]
  vGPU Schedulers
  Live Migration
  Heterogeneous vGPU [4]
  Suspend-Resume
  Unified Virtual Memory
  Deep Learning Super Sampling (DLSS) [3]

Supported Hypervisors (table rows):

  Canonical Ubuntu with KVM 24.04
  Citrix XenServer 8 [1]
  Microsoft Azure Local 23H2
  Microsoft Windows Server 2025
  Red Hat Enterprise Linux with KVM 9.4
  VMware vSphere 8.0 Update 3 [2]
  Nutanix AHV 10.0 (see note below the table)
  SUSE Linux Enterprise Server 15 (see note below the table)
  Proxmox VE 8.3 (see note below the table)
Important

Feature support for Nutanix AHV, SUSE Linux Enterprise Server (SLES), and Proxmox VE should be confirmed through the official documentation of those vendors for accurate and up-to-date information. These platforms do not have a dedicated NVIDIA vGPU product; instead, they use NVIDIA's generic KVM vGPU release, with support and integration handled by the respective vendors.

Note

Starting with Windows Server 2025, Hyper-V introduces vGPU support through GPU Partitioning (GPU-P), which allows multiple VMs to share a single physical GPU. Earlier versions of Windows Server Hyper-V do not support vGPU; they allow GPU access only through Discrete Device Assignment (DDA), which dedicates an entire GPU to a single VM.
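
As a minimal illustration of the GPU-P workflow (a sketch only, not a complete procedure; cmdlet behavior and defaults vary by release, the VM name below is hypothetical, and Microsoft's Hyper-V documentation should be consulted for exact usage):

    # List GPUs on the Hyper-V host that can be partitioned
    Get-VMHostPartitionableGpu
    # Attach a GPU partition to an existing VM ("TestVM" is a placeholder)
    Add-VMGpuPartitionAdapter -VMName "TestVM"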

Footnotes

[1] Linux VMs are not supported with any vGPU live migration features.

[2] A VMware subscription that includes ESXi is required.

[3] Requires "Q" profiles.

[4] Volta or later GPUs are required for heterogeneous vGPU.

Only the best effort and equal share schedulers are supported. The fixed share scheduler is not supported.
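
For context (a minimal sketch based on NVIDIA's vGPU software documentation; the exact mechanism differs by hypervisor), the scheduling policy is selected on the hypervisor host through the NVIDIA driver's RmPVMRL registry key, for example on a Linux KVM host:

    # /etc/modprobe.d/nvidia.conf
    # RmPVMRL=0x00 selects best effort (the default),
    # 0x01 equal share, 0x11 fixed share
    options nvidia NVreg_RegistryDwords="RmPVMRL=0x01"

A change to this setting takes effect only after the host is rebooted or the nvidia kernel module is reloaded.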

© Copyright 2013-2025, NVIDIA Corporation. Last updated on Mar 5, 2025.