NVIDIA Deep Learning NCCL Documentation - Last updated May 22, 2020

NVIDIA Collective Communications Library (NCCL)


Release Notes
This document describes the key features, software enhancements and improvements, and known issues for NCCL 2.7.3. The NVIDIA Collective Communications Library (NCCL) (pronounced “Nickel”) is a library of multi-GPU collective communication primitives that are topology-aware and can be easily integrated into applications. Collective communication algorithms employ many processors working in concert to aggregate data. NCCL is not a full-blown parallel programming framework; rather, it is a library focused on accelerating collective communication primitives.
Installation Guide
This NVIDIA Collective Communications Library (NCCL) Installation Guide provides step-by-step instructions for downloading and installing NCCL 2.7.3.

Training


User Guide
This NCCL 2.7.3 Developer Guide is the reference document for developers who want to use NCCL in their C/C++ applications or libraries. It explains how to use NCCL for inter-GPU communication and details the communication semantics as well as the API. Examples cover using NCCL in different contexts, such as a single process, multiple threads, and multiple processes, potentially spanning multiple machines. It also explains how NCCL can be used together with MPI.
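As a taste of the single-process usage described in the User Guide, the sketch below performs a float-sum allreduce across all visible GPUs from one thread, using the public NCCL API (ncclCommInitAll, ncclGroupStart/ncclGroupEnd, ncclAllReduce). This is a minimal illustration, not a complete application: buffer sizes and error handling are assumptions, buffers are left uninitialized, and it must be built against the CUDA toolkit and NCCL (e.g. nvcc allreduce.c -lnccl) on a machine with one or more GPUs.

```c
/* Minimal single-process, multi-GPU allreduce sketch.
   Assumes CUDA toolkit and NCCL are installed; build with:
   nvcc allreduce.c -o allreduce -lnccl */
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>
#include <nccl.h>

#define CHECK_NCCL(cmd) do { ncclResult_t r = (cmd); \
  if (r != ncclSuccess) { \
    fprintf(stderr, "NCCL error: %s\n", ncclGetErrorString(r)); \
    exit(1); } } while (0)

int main(void) {
  int ndev = 0;
  cudaGetDeviceCount(&ndev);

  ncclComm_t  *comms   = malloc(ndev * sizeof(ncclComm_t));
  cudaStream_t *streams = malloc(ndev * sizeof(cudaStream_t));
  float **sendbuf = malloc(ndev * sizeof(float *));
  float **recvbuf = malloc(ndev * sizeof(float *));
  const size_t count = 1024;  /* illustrative element count */

  /* One communicator per visible device, all in this process
     (NULL device list means devices 0..ndev-1). */
  CHECK_NCCL(ncclCommInitAll(comms, ndev, NULL));

  for (int i = 0; i < ndev; ++i) {
    cudaSetDevice(i);
    cudaStreamCreate(&streams[i]);
    cudaMalloc((void **)&sendbuf[i], count * sizeof(float));
    cudaMalloc((void **)&recvbuf[i], count * sizeof(float));
  }

  /* A single thread driving multiple devices must group the
     per-communicator calls so they launch as one operation. */
  CHECK_NCCL(ncclGroupStart());
  for (int i = 0; i < ndev; ++i)
    CHECK_NCCL(ncclAllReduce(sendbuf[i], recvbuf[i], count,
                             ncclFloat, ncclSum,
                             comms[i], streams[i]));
  CHECK_NCCL(ncclGroupEnd());

  /* Wait for completion, then release all resources. */
  for (int i = 0; i < ndev; ++i) {
    cudaSetDevice(i);
    cudaStreamSynchronize(streams[i]);
    cudaFree(sendbuf[i]);
    cudaFree(recvbuf[i]);
    ncclCommDestroy(comms[i]);
  }
  free(comms); free(streams); free(sendbuf); free(recvbuf);
  return 0;
}
```

The multi-process and MPI-based patterns mentioned above instead create one communicator per rank with ncclCommInitRank, sharing a unique ID (ncclGetUniqueId) via an out-of-band channel such as an MPI broadcast.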

Licenses


SLA
This document is the Software License Agreement (SLA) for NVIDIA NCCL. The following contains specific license terms and conditions for NVIDIA NCCL. By accepting this agreement, you agree to comply with all the terms and conditions applicable to the specific product(s) included herein.
BSD License
This document is the Berkeley Software Distribution (BSD) license for NVIDIA NCCL. The following contains specific license terms and conditions for the open-source release of NVIDIA NCCL. By accepting this agreement, you agree to comply with all the terms and conditions applicable to the specific product(s) included herein.