DGL Release 24.04
This DGL container release is intended for use on the NVIDIA® Ampere Architecture GPU, NVIDIA A100, and the associated NVIDIA CUDA® 12 and NVIDIA cuDNN 9 libraries.
Contents of the DGL container
This container image contains the complete source of the version of DGL in /opt/dgl/dgl-source. It is pre-built and installed as a system Python module.
The container includes the following:
- DGL 2.1+7c51cd16 (including DGL-Graphbolt, a recently released GNN dataloading library that has achieved state-of-the-art performance on NVIDIA GPUs; a minimal usage sketch follows this list).
- RAPIDS 24.02
- WholeGraph 24.02 with NVSHMEM support. WholeGraph is part of the NVIDIA RAPIDS library; it provides an underlying graph storage structure that accelerates GNN training and is optimized for NVIDIA hardware.
- NVIDIA CUDA® 12.4.1
- NVIDIA cuBLAS 12.4.5.8
- NVIDIA cuDNN 9.1.0.70
- NVIDIA NCCL 2.21.5
- Apex
- rdma-core 39.0
- NVIDIA HPC-X 2.18
- OpenMPI 4.1.4+
- GDRCopy 2.3
- TensorBoard 2.12.0
- Nsight Compute 2024.1.0.13
- Nsight Systems 2024.2.1.38
- NVIDIA TensorRT™ 8.6.3
- Torch-TensorRT 2.3.0a0
- NVIDIA DALI® 1.36
- MAGMA 2.6.2
- JupyterLab 2.3.2 including Jupyter-TensorBoard
- PyTorch quantization wheel v2.1.2
- TransformerEngine v1.5
- NVSHMEM 2.10.1
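As a rough illustration of the GraphBolt dataloading path mentioned above, the following sketch (not taken from the container's examples; the dataset name, fan-outs, and batch size are illustrative assumptions) builds a CPU-sampling datapipe over a built-in OGB dataset:

```python
# Minimal GraphBolt sketch (assumptions: network access to download "ogbn-arxiv"
# and a visible CUDA device for the final copy_to step).
import torch
import dgl.graphbolt as gb

dataset = gb.BuiltinDataset("ogbn-arxiv").load()
train_set = dataset.tasks[0].train_set

datapipe = gb.ItemSampler(train_set, batch_size=1024, shuffle=True)
datapipe = datapipe.sample_neighbor(dataset.graph, [10, 10])                    # 2-hop neighbor sampling
datapipe = datapipe.fetch_feature(dataset.feature, node_feature_keys=["feat"])  # gather input features
datapipe = datapipe.copy_to(torch.device("cuda"))

dataloader = gb.DataLoader(datapipe)
for minibatch in dataloader:
    x = minibatch.node_features["feat"]   # gathered features for the sampled nodes
    blocks = minibatch.blocks             # message-flow graphs, one per GNN layer
    break
```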
GPU Requirements
Release 24.04 supports CUDA compute capability 6.0 and later. This corresponds to GPUs in the NVIDIA Pascal, NVIDIA Volta™, NVIDIA Turing™, NVIDIA Ampere architecture, and NVIDIA Hopper™ architecture families. For a list of GPUs to which this compute capability corresponds, see CUDA GPUs. For additional support details, see Deep Learning Frameworks Support Matrix.
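A quick way to verify this inside the container (a minimal check, assuming PyTorch and at least one visible GPU; not part of the release notes) is to query the capability of device 0:

```python
# Reports the compute capability of GPU 0 and checks the 6.0 minimum for release 24.04.
import torch

major, minor = torch.cuda.get_device_capability(0)
print(f"Compute capability: {major}.{minor}")
assert (major, minor) >= (6, 0), "Release 24.04 requires compute capability 6.0 or later"
```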
Key Features and Enhancements
This DGL release includes the following key features and enhancements.
- In this release of the NVIDIA DGL container, NVIDIA enhances support for distributed feature gathering by integrating NVSHMEM, further improving feature-fetching performance for distributed GNN tasks. Check out the examples located at:
/workspace/examples/wholegraph-examples
- Added the NVIDIA Synthetic Graph Generation tool for generating graphs of arbitrary size, including node and edge tabular features.
The major features of the release can be found in the DGL release notes.
Announcements
NVIDIA DGL Container Versions
The following table shows the versions of Ubuntu, CUDA, DGL, and PyTorch supported in each of the NVIDIA containers for DGL. For older container versions, refer to the Frameworks Support Matrix.
Container Version | Ubuntu | CUDA Toolkit | DGL | PyTorch |
---|---|---|---|---|
24.04 | 22.04 | NVIDIA CUDA 12.4.1 | 2.1+e1f7738 | 24.04 |
24.03 | 22.04 | NVIDIA CUDA 12.4.0.41 | 2.1+7c51cd16 | 24.03 |
24.01 | 22.04 | NVIDIA CUDA 12.3.2 | 1.2+c660f5c | 24.01 |
23.11 | 22.04 | NVIDIA CUDA 12.3.0 | 1.1.2 | 23.11 |
23.09 | 22.04 | NVIDIA CUDA 12.2.1 | 1.1.2 | 23.09 |
23.07 | 22.04 | NVIDIA CUDA 12.1.1 | 1.1.1 | 23.07 |
Known Issues
- When CPU sampling is enabled (`use_uva=False` and `num_workers>0`), the DGL sampling processes initialize a CUDA context (issue-6561), which can result in a segmentation fault with the CUDA driver in this container.
- The tensors used as node features must be contiguous and cannot be views of other tensors when the `use_uva` flag is set to `True` in the `dgl.dataloading.DataLoader` class. Attempting to use a graph with non-contiguous or view tensors for `edata` or `ndata` raises a `DGLError`. A configuration sketch that works around both issues follows this list.
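The following sketch (not from the release notes; the toy graph, fan-outs, and batch size are illustrative assumptions) shows a DataLoader configuration that works around both issues: node features are made contiguous, UVA-based GPU sampling is used instead of CPU sampling, and num_workers stays at 0:

```python
# Minimal sketch of a DataLoader setup that avoids both known issues.
import dgl
import torch

g = dgl.rand_graph(1000, 5000)        # small random toy graph (assumption)
feat = torch.randn(1000, 64)
g.ndata["feat"] = feat.contiguous()   # non-contiguous or view tensors raise DGLError with use_uva=True

sampler = dgl.dataloading.NeighborSampler([10, 10])
train_nids = torch.arange(g.num_nodes(), device="cuda")   # UVA sampling expects indices on the target device

dataloader = dgl.dataloading.DataLoader(
    g,
    train_nids,
    sampler,
    device="cuda",     # assumption: a CUDA device is visible in the container
    use_uva=True,      # GPU-driven (UVA) sampling avoids CPU-worker sampling entirely
    num_workers=0,     # keep 0 workers; CPU sampling with num_workers>0 can hit issue-6561
    batch_size=256,
    shuffle=True,
)

for input_nodes, output_nodes, blocks in dataloader:
    pass  # training step would go here
```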