NVIDIA AI Enterprise


NVLink Multicast

NVLink multicast requires that unified memory be enabled for the vGPU. For more information, refer to Enabling Unified Memory for a vGPU.
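As an illustration only, on a VMware vSphere deployment unified memory is typically enabled through a vGPU plugin parameter in the VM's advanced configuration. The device index (`pciPassthru0`) is an assumption for a VM with a single vGPU; follow Enabling Unified Memory for a vGPU for the authoritative steps on your hypervisor.

```
# Example .vmx advanced-configuration entry (VMware vSphere).
# pciPassthru0 is assumed to be the first vGPU device assigned to the VM.
pciPassthru0.cfg.enable_uvm = "1"
```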

vGPU Support for NVLink Multicast

Only full-sized, time-sliced NVIDIA vGPU for Compute profiles support NVLink multicast.

Table 39 NVIDIA NVLink Multicast Support

Board                      vGPU
NVIDIA HGX B300            NVIDIA B300X-279C
NVIDIA HGX B200            NVIDIA B200X-180C
NVIDIA HGX H800            NVIDIA H800XM-80C
NVIDIA HGX H200            NVIDIA H200X-141C
NVIDIA HGX H100            NVIDIA H100XM-80C
NVIDIA H20 HGX 141 GB      H20X-141C
HGX H20 96 GB              NVIDIA H20-96C

Limitations

  • NVLink multicast is supported only with full-sized, time-sliced NVIDIA vGPU for Compute profiles. MIG-backed vGPU profiles and fractional time-sliced profiles are not supported.

  • Unified memory must be enabled for the vGPU. For instructions, refer to Enabling Unified Memory for a vGPU.

  • NVLink multicast requires NVSwitch-connected GPU boards (HGX platforms). PCIe-only GPU configurations do not support NVLink multicast.

  • NVLink multicast is available only for the board and vGPU profile combinations listed in Table 39.
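Given the limitations above, it can be useful to confirm from inside the guest VM whether the driver actually exposes multicast support for the assigned vGPU. A minimal sketch using the CUDA driver API's `CU_DEVICE_ATTRIBUTE_MULTICAST_SUPPORTED` attribute (available in CUDA 12.1 and later) is shown below; it only reports the capability and does not create a multicast object.

```cuda
// Sketch: query whether device 0 reports NVLink multicast object support.
// Assumes CUDA 12.1+ headers and a working NVIDIA driver in the guest VM.
#include <cuda.h>
#include <stdio.h>

int main(void) {
    CUdevice dev;
    int supported = 0;

    // Initialize the driver API and select the first device.
    if (cuInit(0) != CUDA_SUCCESS || cuDeviceGet(&dev, 0) != CUDA_SUCCESS) {
        fprintf(stderr, "No CUDA device available\n");
        return 1;
    }

    // Returns 1 if the driver exposes multicast objects for this device.
    cuDeviceGetAttribute(&supported,
                         CU_DEVICE_ATTRIBUTE_MULTICAST_SUPPORTED, dev);
    printf("Multicast supported: %s\n", supported ? "yes" : "no");
    return 0;
}
```

On a vGPU profile outside the supported combinations (for example, a MIG-backed or fractional time-sliced profile), this query is expected to report no support.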

Related Topics

  • NVSwitch — NVLink fabric interconnect required for multicast

  • Unified Virtual Memory — prerequisite for NVLink multicast

  • Multi-vGPU and Peer-to-Peer — multi-GPU VM configurations that benefit from multicast

  • Virtual GPU Types for Supported GPUs — full-sized vGPU profiles required for multicast support


Copyright © 2021-2026, NVIDIA Corporation.

Last updated on Apr 10, 2026.