These are the cuDNN 8.0.4 release notes. This release includes fixes from the previous cuDNN v8.0.x releases as well as the following additional changes. These release notes apply to both cuDNN and JetPack users of cuDNN unless an item is specifically marked with (not applicable for Jetson platforms).
For previous cuDNN documentation, see the cuDNN Archived Documentation.
Key Features and Enhancements
The following features and enhancements have been added to this release:
- GA102 support with improved convolution performance
- Now includes convolution heuristics targeting the NVIDIA GA102 GPU. (not applicable for Jetson platforms)
- RNN API v8 sample
- A new RNN sample illustrating the usage of the RNN version 8 API has been added. The sample's workflow consists of several routines that create RNN descriptors, create RNN data descriptors, set up the weight space, and run the compute routines. The sample takes several input parameters that can set up different RNN configurations and input data specifications (data type, cell mode, bias mode, etc.).
- RNN functional and performance improvements
- ARM Server Base System Architecture (SBSA)
- Added support for ARM SBSA for Linux.
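The RNN v8 setup flow that the new sample walks through can be sketched as below. This is a minimal illustration, not the sample itself: the parameter values (LSTM cell, two layers, size 512) are arbitrary choices for the sketch, and error checking, data descriptors, and the forward/backward calls are omitted.

```c
/* Sketch of RNN v8 descriptor setup (illustrative values, no error checks). */
#include <cudnn.h>

void setup_rnn_v8(cudnnHandle_t handle, cudnnDropoutDescriptor_t dropoutDesc) {
    cudnnRNNDescriptor_t rnnDesc;
    cudnnCreateRNNDescriptor(&rnnDesc);

    /* One call configures algorithm, cell mode, bias mode, direction,
       input mode, data types, and layer geometry. */
    cudnnSetRNNDescriptor_v8(rnnDesc,
                             CUDNN_RNN_ALGO_STANDARD,
                             CUDNN_LSTM,              /* cell mode            */
                             CUDNN_RNN_DOUBLE_BIAS,   /* bias mode            */
                             CUDNN_UNIDIRECTIONAL,
                             CUDNN_LINEAR_INPUT,
                             CUDNN_DATA_FLOAT,        /* data type            */
                             CUDNN_DATA_FLOAT,        /* math precision       */
                             CUDNN_DEFAULT_MATH,
                             512,                     /* inputSize            */
                             512,                     /* hiddenSize           */
                             512,                     /* projSize == hiddenSize: no projection */
                             2,                       /* numLayers            */
                             dropoutDesc,
                             0);                      /* auxFlags             */

    /* Query the weight space size, then allocate and populate it. */
    size_t weightSpaceSize = 0;
    cudnnGetRNNWeightSpaceSize(handle, rnnDesc, &weightSpaceSize);

    /* ... create cudnnRNNDataDescriptor_t objects for input/output and
       call cudnnRNNForward(); clean up when done. */
    cudnnDestroyRNNDescriptor(rnnDesc);
}
```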
Compatibility
For the latest compatibility software versions of the OS, CUDA, the CUDA driver, and the NVIDIA hardware, see the cuDNN Support Matrix for 8.x.x.
Limitations
Deprecated Features
The following features are deprecated in cuDNN 8.0.4:
- Support for Ubuntu 18.04 ppc64le builds will be dropped post cuDNN 8.0.4.
Fixed Issues
Known Issues
These are the cuDNN 8.0.3 release notes. This release includes fixes from the previous cuDNN v8.0.x releases as well as the following additional changes. These release notes apply to both cuDNN and JetPack users of cuDNN unless an item is specifically marked with (not applicable for Jetson platforms).
For previous cuDNN documentation, see the cuDNN Archived Documentation.
Key Features and Enhancements
- Documentation for the cuDNN Backend API has been included in this release. Users specify the computational case, set up an execution plan for it, and execute the computation via numerous descriptors. The typical use pattern for a descriptor with attributes consists of the following sequence of API calls:
- cudnnBackendCreateDescriptor() creates a descriptor of a specified type.
- cudnnBackendSetAttribute() sets the values of a settable attribute for the descriptor. All required attributes must be set before the next step.
- cudnnBackendFinalize() finalizes the descriptor.
- cudnnBackendGetAttribute() gets the values of an attribute from a finalized descriptor.
For more information, refer to the cuDNN Backend API section in the cuDNN API Reference.
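The four-call pattern above can be sketched for a backend tensor descriptor as follows. This is an illustration of the calling sequence only: the attribute set shown is incomplete (a real tensor descriptor also needs dimensions, strides, and byte alignment before finalization succeeds), and error checking is omitted.

```c
/* Sketch of the create / set / finalize / get descriptor pattern. */
#include <cudnn.h>

void descriptor_pattern(void) {
    /* 1. Create a descriptor of a specified type. */
    cudnnBackendDescriptor_t tensor;
    cudnnBackendCreateDescriptor(CUDNN_BACKEND_TENSOR_DESCRIPTOR, &tensor);

    /* 2. Set attributes; all required attributes must be set before
       finalization (dimensions, strides, and alignment are elided here). */
    int64_t uid = 1;
    cudnnDataType_t dtype = CUDNN_DATA_FLOAT;
    cudnnBackendSetAttribute(tensor, CUDNN_ATTR_TENSOR_UNIQUE_ID,
                             CUDNN_TYPE_INT64, 1, &uid);
    cudnnBackendSetAttribute(tensor, CUDNN_ATTR_TENSOR_DATA_TYPE,
                             CUDNN_TYPE_DATA_TYPE, 1, &dtype);

    /* 3. Finalize: the descriptor is checked for consistency. */
    cudnnBackendFinalize(tensor);

    /* 4. Read an attribute back from the finalized descriptor. */
    int64_t count = 0;
    cudnnDataType_t readBack;
    cudnnBackendGetAttribute(tensor, CUDNN_ATTR_TENSOR_DATA_TYPE,
                             CUDNN_TYPE_DATA_TYPE, 1, &count, &readBack);

    cudnnBackendDestroyDescriptor(tensor);
}
```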
Compatibility
For the latest compatibility software versions of the OS, CUDA, the CUDA driver, and the NVIDIA hardware, see the cuDNN Support Matrix for 8.x.x.
Limitations
Fixed Issues
Known Issues
These are the cuDNN 8.0.2 release notes, the first GA release of cuDNN 8.x. This release includes fixes from the previous cuDNN v8.0.x releases as well as the following additional changes. These release notes apply to both cuDNN and JetPack users of cuDNN unless an item is specifically marked with (not applicable for Jetson platforms).
For previous cuDNN documentation, see the cuDNN Archived Documentation.
Key Features and Enhancements
- The key features mentioned in the cuDNN 8.0.1 Preview and 8.0.0 Preview releases are now GA quality in this release.
- cudnnRNNBackwardData_v8() and cudnnRNNBackwardWeights_v8() are now documented in the cudnn_adv_train.so library. For a list of functions and data types that were added in this release, see API Changes For cuDNN 8.0.2.
- TF32 performance for 3D convolutions and deconvolutions is significantly better, up to 3.9x faster, compared to cuDNN 8.0.1.
- TF32 grouped convolutions on A100 are up to 1.5x faster compared to cuDNN 8.0.1 on ResNeXt convolution layers, and up to 3x faster compared to V100 with cuDNN v7.6. (not applicable for Jetson platforms)
The above performance improvements were measured using only cuDNN operations. The observed performance improvements will depend on a number of factors, such as non-cuDNN operations, kernel run time, and model architecture type.
- This release includes performance improvements on all architectures for 2D and 3D grouped convolutions compared with version 7.6. Additionally, kernel selection heuristics have been improved on several known Deep Learning GitHub Examples (also known as model scripts).
Compatibility
For the latest compatibility software versions of the OS, CUDA, the CUDA driver, and the NVIDIA hardware, see the cuDNN Support Matrix for 8.x.x.
Limitations
Fixed Issues
The following issues have been fixed in this release:
Known Issues
This is the cuDNN 8.0.1 Preview release. This Preview release is for early testing and feedback; for production use of cuDNN, continue to use cuDNN 7.6.5. This release is subject to change based on ongoing performance tuning and functional testing. For feedback on the new backend API and deprecations, email cudnn@nvidia.com.
These release notes are applicable to JetPack users of cuDNN unless appended specifically with (not applicable for Jetson platforms).
For previous cuDNN documentation, see the cuDNN Archived Documentation.
Key Features and Enhancements
Compatibility
For the latest compatibility software versions of the OS, CUDA, the CUDA driver, and the NVIDIA hardware, see the cuDNN Support Matrix for 8.0.1.
Limitations
Fixed Issues
The following issues have been fixed in this release:
Known Issues
This is the cuDNN 8.0.0 Preview release. This Preview release is for early testing and feedback; for production use of cuDNN, continue to use cuDNN 7.6.5. This release is subject to change based on ongoing performance tuning and functional testing. For feedback on the new backend API and deprecations, email cudnn@nvidia.com.
These release notes are applicable to JetPack users of cuDNN unless appended specifically with (not applicable for Jetson platforms).
cuDNN 8.0.0 passed GA quality testing and validation for TensorRT and JetPack users.
For previous cuDNN documentation, see the cuDNN Archived Documentation.
Key Features and Enhancements
The following features and enhancements have been added to this release:
- cuDNN library
- Multiple dynamic libraries
In order to link against a subset of cuDNN, you need to know which subset of the API you are using and then link against the appropriate cuDNN sub-components. The cuDNN sub-components are as follows:
cudnn_ops_infer.so
cudnn_ops_train.so
cudnn_cnn_infer.so
cudnn_cnn_train.so
cudnn_adv_infer.so
cudnn_adv_train.so
- cuDNN linking options
There are two different linking options:
- cuDNN loading options
For users who want a smaller memory footprint, there are two ways of loading the library.
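The release notes do not enumerate the two loading options here. As one illustration of footprint-conscious loading, the inference sub-components can be opened at runtime with dlopen() so that the training libraries are never mapped into the process; the library names below follow the sub-component list above with the standard ".so.8" soname suffix, which is an assumption about the installed layout.

```c
/* Sketch: load only the inference sub-components at runtime. */
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* Only the ops/cnn inference libraries are mapped; the *_train.so
       sub-components stay out of the process entirely. */
    void *ops = dlopen("libcudnn_ops_infer.so.8", RTLD_NOW);
    void *cnn = dlopen("libcudnn_cnn_infer.so.8", RTLD_NOW);
    if (!ops || !cnn) {
        fprintf(stderr, "failed to load cuDNN: %s\n", dlerror());
        return 1;
    }
    /* ... look up and call entry points with dlsym() ... */
    dlclose(cnn);
    dlclose(ops);
    return 0;
}
```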
- New API functions
For a list of functions and data types that were added in this release, see API Changes For cuDNN 8.0.0.
- General Support of CUDA Graph Capture
CUDA Graphs are now supported for all functions in this release, with the following restrictions.
cuDNN 8.0.0 does not at this time offer API support to add operations to an existing CUDA graph directly; however, the captured graph may be added to an existing graph through the existing CUDA Graphs API.
Regarding texture usage, cuDNN 8.0.0 does not enable texture usage by default; expert users may enable it where allowed, but doing so will prevent a successful CUDA graph capture until it is disabled. In order for cuDNN 8.0.0 to be graph-capture compatible library-wide, the cuDNN 8.0.0 CTC API was updated as described elsewhere.
The usual restrictions for CUDA Graphs apply in addition to these restrictions here.
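Capturing cuDNN work into a CUDA graph uses the standard stream-capture mechanism. The sketch below assumes a pre-built handle, stream, and tensor descriptor, and uses cudnnScaleTensor() as a stand-in for any graph-capture-compatible cuDNN call; error checking is omitted.

```c
/* Sketch: capture a cuDNN call into a CUDA graph via stream capture. */
#include <cuda_runtime.h>
#include <cudnn.h>

void capture_cudnn(cudnnHandle_t handle, cudaStream_t stream,
                   cudnnTensorDescriptor_t yDesc, void *y) {
    /* cuDNN work must be issued on the stream being captured. */
    cudnnSetStream(handle, stream);

    cudaGraph_t graph;
    cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);

    /* Any capture-compatible cuDNN call; here a simple in-place scale. */
    float alpha = 2.0f;
    cudnnScaleTensor(handle, yDesc, y, &alpha);

    cudaStreamEndCapture(stream, &graph);

    /* Instantiate once, then launch repeatedly with low CPU overhead.
       (5-argument form used by the CUDA 10/11 runtime.) */
    cudaGraphExec_t exec;
    cudaGraphInstantiate(&exec, graph, NULL, NULL, 0);
    cudaGraphLaunch(exec, stream);
    cudaStreamSynchronize(stream);

    cudaGraphExecDestroy(exec);
    cudaGraphDestroy(graph);
}
```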
- New APIs for convolution
A new set of API functions provides a brand-new approach to cuDNN, offering finer-grained control of performance, numerical properties, and other aspects of convolution. Using this API, users directly access the various engines that compute convolution forward propagation, backward data, and backward filter, plus generic support for fusion, starting with limited support in this cuDNN 8.0.0 release and expanding in follow-up releases. Each engine has performance tuning knobs such as GEMM tiling and split-K. Users can use this API to fine-tune their network by querying cuDNN's heuristics, or running their own, to find the most optimal engine configuration with which cuDNN computes each network layer.
- NVIDIA Ampere GPU architecture support (not applicable for Jetson platforms)
- Turing and Volta architecture improvements
- Operation fusion
Operation fusion can be achieved via the backend API. The general workflow is similar to running unfused operations, except that instead of creating an Operation Graph containing a single operation, the user may specify a multi-operation Operation Graph. For more information, see Operation Fusion Via The Backend API in the cuDNN Developer Guide.
- Depthwise convolution extension
We've extended the fprop and dgrad NHWC depthwise kernels to support more filter size/stride combinations, such as 5x5/1x1, 5x5/2x2, 7x7/1x1, and 7x7/2x2 (in addition to the existing 1x1/1x1, 3x3/1x1, and 3x3/2x2), which provide good performance.
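In cuDNN, a depthwise convolution is expressed as a grouped convolution whose group count equals the number of input channels. A minimal sketch of configuring one of the newly supported combinations (5x5 filter, 2x2 stride); the padding and data-type choices are illustrative:

```c
/* Sketch: configure a depthwise convolution descriptor. */
#include <cudnn.h>

void make_depthwise(cudnnConvolutionDescriptor_t convDesc, int channels) {
    /* 5x5 filter with 2x2 stride: one of the newly supported combinations. */
    int pad = 2, stride = 2, dilation = 1;
    cudnnSetConvolution2dDescriptor(convDesc,
                                    pad, pad,
                                    stride, stride,
                                    dilation, dilation,
                                    CUDNN_CROSS_CORRELATION,
                                    CUDNN_DATA_FLOAT);

    /* groupCount == input channels makes the convolution depthwise;
       NHWC tensors then engage the specialized depthwise kernels. */
    cudnnSetConvolutionGroupCount(convDesc, channels);
}
```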
Compatibility
For the latest compatibility software versions of the OS, CUDA, the CUDA driver, and the NVIDIA hardware, see the cuDNN Support Matrix for 8.0.0.
Limitations
Deprecated Features
The following features are deprecated in cuDNN 8.0.0:
Fixed Issues
The following issues have been fixed in this release:
Known Issues