NVIDIA Virtual PC (vPC): Sizing and GPU Selection Guide for Virtualized Workloads

Selecting the Right NVIDIA GPU Virtualization Software

NVIDIA GPU virtualization software products are optimized for different classes of workload, so select the product based on the workloads that your users are running.

This table summarizes the differences in features between the various NVIDIA GPU virtualization software products.

Table 4. NVIDIA GPU Virtualization Software Feature Comparison

Feature                                  NVIDIA vPC                     NVIDIA RTX vWS

Configuration and Deployment
  Microsoft Windows support              ✓                              ✓
  Linux distributions support            ✓                              ✓
  NVIDIA graphics driver                 ✓                              –
  NVIDIA RTX Enterprise driver           –                              ✓
  Multiple vGPUs per VM                  –                              ✓
  NVLink                                 –                              ✓
  ECC reporting and handling             –                              ✓
  Page retirement                        –                              ✓

Display
  Maximum hardware-rendered displays     Four QHD, two 4K, or one 5K    Four 5K or two 8K
  Maximum resolution                     5120×2880                      7680×4320
  Maximum pixel count                    17,694,720                     66,355,200

Advanced Professional Features
  ISV certifications                     –                              ✓
  NVIDIA CUDA Toolkit / OpenCL support   –                              ✓

Graphics Features and APIs
  NVENC                                  ✓                              ✓
  OpenGL extensions (WebGL)              ✓                              ✓
  RTX platform optimizations             –                              ✓
  DirectX                                ✓                              ✓
  Vulkan                                 ✓                              ✓
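The maximum pixel counts above follow from the largest supported multi-display configurations. As a quick sanity check (assuming "4K" here means 4096×2160, which is the figure that reproduces the vPC ceiling, and "8K" means 7680×4320):

```shell
# vPC ceiling: two 4K (4096x2160) displays
echo $((2 * 4096 * 2160))   # 17694720

# RTX vWS ceiling: two 8K (7680x4320) displays
echo $((2 * 7680 * 4320))   # 66355200
```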
Note

For detailed information about vGPU licensing, refer to the NVIDIA Virtual GPU Packaging, Pricing, and Licensing Guide.

Each NVIDIA GPU virtualization software product is designed for a specific class of workload.

NVIDIA RTX Virtual Workstation

NVIDIA RTX Virtual Workstation (RTX vWS) software is designed for professional graphics workloads that benefit from the following NVIDIA RTX vWS features:

  • RTX Enterprise platform drivers and ISV certifications

  • Support for NVIDIA® CUDA® Toolkit and OpenCL

  • Higher resolution displays

  • vGPU profiles with larger amounts of frame buffer

NVIDIA RTX vWS accelerates professional design and visualization applications such as:

  • Autodesk Revit

  • Dassault Systèmes CATIA

  • Esri ArcGIS Pro

  • Autodesk Maya

  • SLB Petrel

  • Dassault Systèmes Solidworks

NVIDIA Virtual PC

NVIDIA Virtual PC (vPC) software is designed for knowledge worker VDI workloads to accelerate the following software and peripheral devices:

  • Office productivity applications

  • Streaming video

  • Microsoft Windows and Linux operating systems

  • Multiple monitors

  • High-resolution monitors

  • 2D electronic design automation (EDA)

NVIDIA vGPU software enables multiple virtual machines (VMs) to share a single physical GPU. This improves overall GPU utilization, but the way resources are shared depends on the underlying virtualization technology: either time-sliced vGPU or MIG-backed vGPU.

Time-Sliced vGPU Sharing

With time-sliced vGPU, multiple VMs share GPU access over time. NVIDIA vGPU software uses the best effort scheduler by default, which aims to balance performance across vGPUs.

Scheduling Options for GPU Sharing

To accommodate a variety of Quality of Service (QoS) levels for sharing a GPU, NVIDIA vGPU software provides multiple GPU scheduling options. For more information about these GPU scheduling options, refer to vGPU Schedulers.
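As a minimal sketch of how a scheduling policy is applied on a Linux KVM-based hypervisor: the policy is selected through the RmPVMRL registry key of the NVIDIA kernel module. The key values shown (0x00 best effort, 0x01 equal share, 0x11 fixed share) are taken from NVIDIA's vGPU documentation; verify them against the release you are running before use.

```shell
# Set the vGPU scheduler to equal share on a Linux KVM hypervisor.
# 0x00 = best effort (default), 0x01 = equal share, 0x11 = fixed share.
echo 'options nvidia NVreg_RegistryDwords="RmPVMRL=0x01"' \
  > /etc/modprobe.d/nvidia-vgpu-scheduler.conf

# Reload the nvidia module for the change to take effect
# (requires that no VMs are currently using the GPU):
# modprobe -r nvidia && modprobe nvidia
```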

MIG-Backed vGPU Sharing

With MIG (Multi-Instance GPU), a single physical GPU is partitioned at the hardware level into multiple fully isolated GPU instances. This provides guaranteed performance isolation between VMs.

Performance Allocation

Unlike time-sliced vGPU, MIG does not rely on time-sharing. Instead, each MIG-backed vGPU is assigned a dedicated slice of GPU resources with its own Streaming Multiprocessors (SMs) and memory subsystem. For example:

  • When four MIG instances are created, each instance delivers consistent and isolated performance to its assigned VM.

  • Within each MIG slice, up to 12 vGPUs can be created and time-sliced within that isolated slice. These vGPUs can be assigned to separate VMs, which continue to benefit from MIG’s hardware-level isolation boundaries.
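A four-instance partition like the one described above can be sketched with nvidia-smi. The profile IDs are GPU-specific; profile 19 (1g.5gb) is used here as an example for an A100 40GB, so list the profiles your GPU actually supports before creating instances.

```shell
nvidia-smi -i 0 -mig 1                    # enable MIG mode on GPU 0 (may require a GPU reset)
nvidia-smi mig -lgip                      # list the GPU instance profiles this GPU supports
nvidia-smi mig -i 0 -cgi 19,19,19,19 -C   # create four instances plus default compute instances
nvidia-smi mig -lgi                       # confirm the four GPU instances exist
```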

© Copyright 2013-2026, NVIDIA Corporation. Last updated on Jan 28, 2026