NCCL Release 2.21.5
These are the release notes for NCCL 2.21.5. For previous NCCL release notes, refer to the NCCL Archives.
Compatibility
- Deep learning framework containers. Refer to the Support Matrix for the supported container versions.
- This NCCL release supports CUDA 11.0, CUDA 12.2, CUDA 12.4, and CUDA 12.5.
Key Features and Enhancements
This NCCL release includes the following key features and enhancements.
- Added support for user buffer registration for IB SHARP operations with one GPU per node.
- Improved support for multi-node NVLink systems; added NVLink SHARP support and multicast support.
- Added support for dynamic GID detection on RoCE.
- Reduced memory usage when NVLink SHARP is enabled.
- Improved tuner plugin loading.
- Added a signature to communicator objects to help detect corruption of communicator objects or invalid communicator pointers.
Fixed Issues
The following issues have been resolved in NCCL 2.21.5:
- Fixed IB SHARP rail mapping when using split communicators.
- Fixed a crash during bootstrap caused by TCP packet reordering.
- Fixed a hang on heterogeneous systems where the crossNic value differed between nodes.
- Fixed the minCompCap/maxCompCap computation.
Updating the GPG Repository Key
To ensure the security and reliability of our RPM and Debian package repositories, NVIDIA is updating and rotating the signing keys used by the apt, dnf/yum, and zypper package managers, beginning April 27, 2022. If you do not update your repository signing keys, package management errors will occur when attempting to access or install NCCL packages. To ensure continued access to the latest NCCL release, follow the updated NCCL installation guide.
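On Debian-based systems, the key rotation typically amounts to removing the old key and installing NVIDIA's cuda-keyring package, which carries the new signing key. The commands below are a sketch assuming Ubuntu 22.04 on x86_64; adjust the distro/arch path in the URL for your system and consult the NCCL installation guide for the authoritative steps.

```shell
# Remove the outdated repository signing key, if present.
sudo apt-key del 7fa2af80

# Install the cuda-keyring package, which provides the rotated key
# (path shown assumes Ubuntu 22.04 on x86_64).
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb

# Refresh the package index and install (or reinstall) NCCL.
sudo apt-get update
sudo apt-get install libnccl2 libnccl-dev
```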