NCCL Release 2.20.3
These are the release notes for NCCL 2.20.3. For previous NCCL release notes, refer to the NCCL Archives.
Compatibility
- Deep learning framework containers. Refer to the Support Matrix for the supported container versions.
- This NCCL release supports CUDA 11.0, CUDA 12.2, and CUDA 12.3.
Key Features and Enhancements
This NCCL release includes the following key features and enhancements.
- Improved the ring algorithm (alternating rings) to remove a bottleneck on systems where each GPU has only one local NIC, such as the DGX H100.
- Added support for user buffer registration for network send/recv operations (see the sketch after this list).
- Optimized aggregated operations to better utilize all channels.
- Added support for large Broadcom PCIe Gen5 switches, flattening the reported two-level topology.
- Added support for inter-node NVLink communication.
- Added support for port fusion in NET/IB.
- Added support for ReduceScatter and AllGather using IB SHARP.
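User buffer registration is exposed through the ncclCommRegister and ncclCommDeregister APIs. The following is a minimal sketch, not taken from the release notes, showing how registered buffers might be used with point-to-point send/recv; it assumes an already-initialized communicator and stream, and omits error checking for brevity.

/* Sketch: register user buffers with a communicator so NCCL can use them
 * directly for network send/recv. Assumes `comm` and `stream` were created
 * elsewhere (e.g. via ncclCommInitRank); error handling omitted. */
#include <cuda_runtime.h>
#include <nccl.h>

void exchange_with_peer(ncclComm_t comm, cudaStream_t stream,
                        int peer, size_t count) {
  float *sendbuf, *recvbuf;
  void *sendHandle, *recvHandle;

  cudaMalloc((void**)&sendbuf, count * sizeof(float));
  cudaMalloc((void**)&recvbuf, count * sizeof(float));

  /* Register the buffers with the communicator. */
  ncclCommRegister(comm, sendbuf, count * sizeof(float), &sendHandle);
  ncclCommRegister(comm, recvbuf, count * sizeof(float), &recvHandle);

  /* Point-to-point exchange using the registered buffers. */
  ncclGroupStart();
  ncclSend(sendbuf, count, ncclFloat, peer, comm, stream);
  ncclRecv(recvbuf, count, ncclFloat, peer, comm, stream);
  ncclGroupEnd();
  cudaStreamSynchronize(stream);

  /* Deregister before freeing the memory. */
  ncclCommDeregister(comm, sendHandle);
  ncclCommDeregister(comm, recvHandle);
  cudaFree(sendbuf);
  cudaFree(recvbuf);
}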
Fixed Issues
The following issues have been resolved in NCCL 2.20.3:
- Fixed hang during A2A connection.
Updating the GPG Repository Key
To ensure the security and reliability of our RPM and Debian package repositories, NVIDIA is updating and rotating the signing keys used by the apt, dnf/yum, and zypper package managers beginning April 27, 2022. If you do not update your repository signing keys, you will encounter package management errors when attempting to access or install NCCL packages. To ensure continued access to the latest NCCL release, follow the updated NCCL installation guide.