NVIDIA Optimized Frameworks

PyG Release 23.11

This PyG container release is intended for use on NVIDIA® Ampere architecture GPUs, such as the NVIDIA A100, and the associated NVIDIA CUDA® 12 and NVIDIA cuDNN 8 libraries.

Driver Requirements

Release 23.11 is based on CUDA 12.2.2, which requires NVIDIA Driver release 535 or later. However, if you are running on a data center GPU (for example, a T4), you can use NVIDIA driver release 450.51 (or later R450), 470.57 (or later R470), 510.47 (or later R510), 525.85 (or later R525), or 535.86 (or later R535). The CUDA driver's compatibility package supports only particular drivers; users should therefore upgrade from all R418, R440, R460, and R520 drivers, which are not forward-compatible with CUDA 12.2. For a complete list of supported drivers, see the CUDA Application Compatibility topic. For more information, see CUDA Compatibility and Upgrades.

Contents of the PyG container

This container image includes the complete source of the NVIDIA version of PyG in /opt/pyg/pytorch_geometric. It is prebuilt and installed as a system Python module. The /workspace/examples folder is copied from /opt/pyg/pytorch_geometric/examples as a convenient starting point. For example, to run the gcn.py example:

/workspace/examples# python gcn.py


See /workspace/README.md for details.

GPU Requirements

Release 23.11 supports CUDA compute capability 6.0 and later. This corresponds to GPUs in the NVIDIA Pascal, NVIDIA Volta™, NVIDIA Turing™, NVIDIA Ampere architecture, and NVIDIA Hopper™ architecture families. For a list of GPUs to which this compute capability corresponds, see CUDA GPUs. For additional support details, see Deep Learning Frameworks Support Matrix.
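The compute-capability check described above can be sketched in plain Python. The helper name below is hypothetical; inside the container, the (major, minor) pair would typically come from `torch.cuda.get_device_capability()`.

```python
def meets_min_capability(major: int, minor: int, floor=(6, 0)) -> bool:
    """Return True if a device's compute capability is at or above the floor.

    Release 23.11 requires compute capability 6.0 or later (Pascal onward).
    """
    return (major, minor) >= floor

# Examples: A100 is sm_80, P100 (Pascal) is sm_60, M60 (Maxwell) is sm_52.
print(meets_min_capability(8, 0))  # A100 → True
print(meets_min_capability(6, 0))  # P100 → True (exactly at the floor)
print(meets_min_capability(5, 2))  # M60  → False
```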

Key Features and Enhancements

This PyG release includes the following key features and enhancements.

  • PyTorch Frame (torch-frame) integration.
  • torch.compile acceleration: we recommend wrapping your GNN models with torch.compile to accelerate any example, for example model = torch.compile(model).
  • NVIDIA's syngen tool for synthetic graph data generation. See README.md for details.
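The torch.compile recommendation above can be sketched with a minimal stand-in model (a plain two-layer torch module substitutes here for a real PyG GNN such as GCN; backend="eager" is used only to keep the sketch portable, whereas the container's default backend is what delivers the speedups):

```python
import torch
import torch.nn.functional as F

# Minimal stand-in for a GNN model (a real PyG model such as GCNConv-based
# layers would be wrapped the same way).
class TinyModel(torch.nn.Module):
    def __init__(self, in_dim: int, hidden: int, out_dim: int):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hidden)
        self.lin2 = torch.nn.Linear(hidden, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.lin2(F.relu(self.lin1(x)))

model = TinyModel(16, 32, 4)
# Wrap the model; compilation is triggered lazily on the first forward call.
model = torch.compile(model, backend="eager")
out = model(torch.randn(8, 16))
```

Calling `torch.compile(model)` with no arguments, as the release note suggests, uses the default backend and is the recommended form inside the container.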

Announcements

General availability of the PyG container begins with release 23.11.

NVIDIA PyG Container Versions

The following table shows what versions of Ubuntu, CUDA, PyG (PyTorch Geometric), and PyTorch are supported in each of the NVIDIA containers for PyG.

Container Version | Ubuntu | CUDA Toolkit       | PyG   | PyTorch
23.11             | 22.04  | NVIDIA CUDA 12.3.0 | 2.4.0 | 23.11
23.01             | 20.04  | NVIDIA CUDA 12.0.1 | 2.2.0 | 23.01

Examples

There is an extensive suite of examples provided by PyG stored at /workspace/examples.

To start, the most basic example is gcn.py. For basic multi-GPU and multi-node usage, see multi_gpu/distributed_sampling.py and multi_gpu/distributed_sampling_multinode.py. For guidance on scaling up to larger data, try ogbn_papers_100m.py from the examples folder. To scale this up to all of the GPUs on a single node, try multi_gpu/singlenode_multigpu_papers100m_gcn.py. To scale further to multiple nodes, run multi_gpu/multinode_multigpu_papers100m_gcn.py using the Slurm commands at the top of the file.

Additionally, NVIDIA has created a GNN Platform, which consists of a high-level API for training and deploying end-to-end GNN workflows. Detailed Jupyter notebook examples can be found at /workspace/gnn-platform-examples, and additional example GNN Platform workflows at /opt/pyg/gnn-platform/tests.

To work with Jupyter notebooks, make sure to launch your Docker container with the --network=host and --ipc=host flags in your docker run command. For more details on working with the GNN Platform, see the README at /opt/pyg/gnn-platform/README.md.
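A launch command with those flags might look like the following sketch (the image tag follows the NGC naming convention and is an assumption here; check the NGC catalog for the exact name):

```shell
# Sketch: launch the PyG container with the flags needed for notebooks.
# Assumption: image tag nvcr.io/nvidia/pyg:23.11-py3 (verify on NGC).
docker run --gpus all -it --rm \
    --network=host --ipc=host \
    -v "$PWD":/host_workspace \
    nvcr.io/nvidia/pyg:23.11-py3
```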

Known Issues

  • On Arm-based systems, PyG's gdc function encounters numerical errors when tested with test_gdc (open issue #7431).


© Copyright 2023, NVIDIA. Last updated on Dec 20, 2023.