Abstract
This cuDNN 8.2.1 Developer Guide provides an overview of cuDNN features such as customizable data layouts, supporting flexible dimension ordering, striding, and subregions for the 4D tensors used as inputs and outputs to all of its routines. This flexibility allows easy integration into any neural network implementation.
To access the cuDNN API Reference, refer to the cuDNN API Reference Guide.
For previously released cuDNN developer documentation, see cuDNN Archives.
1. Overview
cuDNN convolution routines aim for a performance that is competitive with the fastest GEMM (matrix multiply)-based implementations of such routines while using significantly less memory.
cuDNN features include customizable data layouts, supporting flexible dimension ordering, striding, and subregions for the 4D tensors used as inputs and outputs to all of its routines. This flexibility allows easy integration into any neural network implementation and avoids the input/output transposition steps sometimes necessary with GEMM-based convolutions.
cuDNN offers a context-based API that allows for easy multithreading and (optional) interoperability with NVIDIA® CUDA® streams.
2. Programming Model
An application using cuDNN must initialize a handle to the library context by calling cudnnCreate(). This handle is explicitly passed to every subsequent library function that operates on GPU data. Once the application finishes using cuDNN, it can release the resources associated with the library handle using cudnnDestroy(). This approach allows the user to explicitly control the library's functioning when using multiple host threads, GPUs and CUDA Streams.
For example, an application can use cudaSetDevice to associate different devices with different host threads, and in each of those host threads, use a unique cuDNN handle that directs the library calls to the device associated with it. Thus the cuDNN library calls made with different handles will automatically run on different devices.
The device associated with a particular cuDNN context is assumed to remain unchanged between the corresponding cudnnCreate() and cudnnDestroy() calls. In order for the cuDNN library to use a different device within the same host thread, the application must set the new device to be used by calling cudaSetDevice() and then create another cuDNN context, which will be associated with the new device, by calling cudnnCreate().
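As a minimal sketch (not from the original guide; run_on_device and the omitted error checks are illustrative), the per-thread pattern described above might look like this:

#include <cuda_runtime.h>
#include <cudnn.h>

// Bind this host thread to a device, then create a cuDNN handle that
// directs subsequent library calls to that device.
void run_on_device(int device)
{
    cudaSetDevice(device);       // associate this host thread with `device`

    cudnnHandle_t handle;
    cudnnCreate(&handle);        // the handle is now tied to `device`

    /* ... cuDNN calls made with `handle` execute on `device` ... */

    cudnnDestroy(handle);        // release resources before switching devices
}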
cuDNN API Compatibility
- Any patch release x.y.z is forward or backward-compatible with applications built against another cuDNN patch release x.y.w (meaning, of the same major and minor version number, but having w!=z).
- cuDNN minor releases beginning with cuDNN 7 are binary backward-compatible with applications built against the same or earlier patch release (meaning, an application built against cuDNN 7.x is binary compatible with cuDNN library 7.y, where y>=x).
- Applications compiled with a cuDNN version 7.y are not guaranteed to work with 7.x release when y > x.
3. Convolution Formulas
The terms in the following table apply to all of the convolution formulas that follow.

Term | Description |
---|---|
x | Input (image) Tensor |
w | Weight Tensor |
y | Output Tensor |
n | Current Batch Size |
c | Current Input Channel |
C | Total Input Channels |
H | Input Image Height |
W | Input Image Width |
k | Current Output Channel |
K | Total Output Channels |
p | Current Output Height Position |
q | Current Output Width Position |
G | Group Count |
pad | Padding Value |
u | Vertical Subsample Stride (along Height) |
v | Horizontal Subsample Stride (along Width) |
dil_h | Vertical Dilation (along Height) |
dil_w | Horizontal Dilation (along Width) |
r | Current Filter Height |
R | Total Filter Height |
s | Current Filter Width |
S | Total Filter Width |
Normal Convolution (using cross-correlation mode)

$$y_{n,k,p,q} = \sum_{c}^{C} \sum_{r}^{R} \sum_{s}^{S} x_{n,c,p+r,\,q+s} \times w_{k,c,r,s}$$

Convolution with Padding

$$y_{n,k,p,q} = \sum_{c}^{C} \sum_{r}^{R} \sum_{s}^{S} x_{n,c,p+r-\mathrm{pad},\,q+s-\mathrm{pad}} \times w_{k,c,r,s}$$

Convolution with Subsample-Striding

$$y_{n,k,p,q} = \sum_{c}^{C} \sum_{r}^{R} \sum_{s}^{S} x_{n,c,(p \cdot u)+r,\,(q \cdot v)+s} \times w_{k,c,r,s}$$

Convolution with Dilation

$$y_{n,k,p,q} = \sum_{c}^{C} \sum_{r}^{R} \sum_{s}^{S} x_{n,c,p+(r \cdot \mathrm{dil}_h),\,q+(s \cdot \mathrm{dil}_w)} \times w_{k,c,r,s}$$

Convolution using Convolution Mode

$$y_{n,k,p,q} = \sum_{c}^{C} \sum_{r}^{R} \sum_{s}^{S} x_{n,c,p+r,\,q+s} \times w_{k,c,R-r-1,\,S-s-1}$$

Convolution using Grouped Convolution

$$C_g = \frac{C}{G}, \qquad K_g = \frac{K}{G}$$

$$y_{n,k,p,q} = \sum_{c}^{C_g} \sum_{r}^{R} \sum_{s}^{S} x_{n,\,C_g \lfloor k/K_g \rfloor + c,\,p+r,\,q+s} \times w_{k,c,r,s}$$
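To make the formulas concrete, the following is a minimal reference implementation (a plain C sketch, not a cuDNN API) of the normal cross-correlation case for fully-packed NCHW tensors, with no padding, unit stride, and unit dilation:

// Naive 2-D cross-correlation: y[n,k,p,q] = sum over c,r,s of
// x[n,c,p+r,q+s] * w[k,c,r,s], for fully-packed NCHW/KCRS tensors.
void conv2d_cross_correlation(const float *x, const float *w, float *y,
                              int N, int C, int H, int W,
                              int K, int R, int S)
{
    int P = H - R + 1, Q = W - S + 1;   /* output height and width */
    for (int n = 0; n < N; ++n)
      for (int k = 0; k < K; ++k)
        for (int p = 0; p < P; ++p)
          for (int q = 0; q < Q; ++q) {
              float acc = 0.f;
              for (int c = 0; c < C; ++c)
                for (int r = 0; r < R; ++r)
                  for (int s = 0; s < S; ++s)
                      acc += x[((n*C + c)*H + (p + r))*W + (q + s)]
                           * w[((k*C + c)*R + r)*S + s];
              y[((n*K + k)*P + p)*Q + q] = acc;
          }
}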
4. Notation
In backpropagation routines, the parameters keep the meanings they have in the corresponding forward routines; gradient parameters are prefixed with d (for example, dx and dy denote the gradients with respect to x and y).
5. Tensor Descriptor
The first dimension of the tensor defines the batch size n, and the second dimension defines the number of feature maps c. This tensor definition allows, for example, some dimensions to overlap each other within the same tensor by having the stride of one dimension smaller than the product of the dimension and the stride of the next dimension. In cuDNN, unless specified otherwise, all routines will support tensors with overlapping dimensions for forward-pass input tensors; however, dimensions of the output tensors cannot overlap. Even though this tensor format supports negative strides (which can be useful for data mirroring), cuDNN routines do not support tensors with negative strides unless specified otherwise.
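As a brief sketch of the API (error handling omitted), a 4-D tensor descriptor for a batch of one image with 64 feature maps of height 5 and width 4 in NCHW format can be created as follows:

cudnnTensorDescriptor_t xDesc;
cudnnCreateTensorDescriptor(&xDesc);
cudnnSetTensor4dDescriptor(xDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                           1 /*n*/, 64 /*c*/, 5 /*h*/, 4 /*w*/);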
5.1. WXYZ Tensor Descriptor
A tensor descriptor identified with a string of letters such as WXYZ implies that:
- all the strides are strictly positive
- the dimensions referenced by the letters are sorted in decreasing order of their respective strides
5.2. 4-D Tensor Descriptor
- NCHW
- NHWC
- CHWN
5.3. 5-D Tensor Descriptor
- NCDHW
- NDHWC
- CDHWN
5.4. Fully-packed Tensors
A tensor is defined as XYZ-fully-packed if, and only if:
- the number of tensor dimensions is equal to the number of letters preceding the fully-packed suffix,
- the stride of the i-th dimension is equal to the product of the (i+1)-th dimension by the (i+1)-th stride, and
- the stride of the last dimension is 1.
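The stride rules above can be expressed directly in code; the following sketch (not a cuDNN API) computes the strides of a fully-packed 4-D tensor from its dimensions:

// For a fully-packed tensor: the stride of the last dimension is 1, and
// stride(i) = dim(i+1) * stride(i+1). For NCHW, dim[] = {N, C, H, W}.
void fully_packed_strides(const int dim[4], int stride[4])
{
    stride[3] = 1;
    for (int i = 2; i >= 0; --i)
        stride[i] = stride[i + 1] * dim[i + 1];
}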
5.5. Partially-packed Tensors
A WXYZ tensor is defined as XYZ-packed if, and only if:
- the strides of all dimensions NOT referenced in the -packed suffix are greater than or equal to the product of the next dimension by the next stride,
- the stride of each dimension referenced in the -packed suffix in position i is equal to the product of the (i+1)-st dimension by the (i+1)-st stride, and
- if the last tensor dimension is present in the -packed suffix, its stride is 1.
For example, an NHWC tensor WC-packed means that the c_stride is equal to 1 and w_stride is equal to c_dim x c_stride. In practice, the -packed suffix is usually applied to the minor dimensions of a tensor but can be applied to only the major dimensions; for example, an NCHW tensor that is only N-packed.
5.6. Spatially Packed Tensors
Spatially-packed tensors are defined as partially-packed in their spatial dimensions. For example, a spatially-packed 4D tensor means that the tensor is either NCHW HW-packed or CNHW HW-packed.
5.7. Overlapping Tensors
A tensor is defined to be overlapping if iterating over a full range of dimensions produces the same address more than once. In practice, an overlapped tensor will have stride[i-1] < stride[i]*dim[i] for some i in the [1, nbDims] interval.
6. Data Layout Formats
6.1. Data Layout Example
- N is the batch size; 1.
- C is the number of feature maps (i.e., number of channels); 64.
- H is the image height; 5.
- W is the image width; 4.
To keep the example simple, the image pixel elements are expressed as a sequence of integers, 0, 1, 2, 3, and so on. See Figure 1.
6.2. NCHW Memory Layout
- Beginning with the first channel (c=0), the elements are arranged contiguously in row-major order.
- Continue with second and subsequent channels until the elements of all the channels are laid out. See Figure 2.
- Proceed to the next batch (if N is > 1).
6.3. NHWC Memory Layout
- Begin with the first element of channel 0, then proceed to the first element of channel 1, and so on, until the first elements of all the C channels are laid out.
- Next, select the second element of channel 0, then proceed to the second element of channel 1, and so on, until the second element of all the channels are laid out.
- Follow the row-major order of channel 0 and complete all the elements. See Figure 3.
- Proceed to the next batch (if N is > 1).
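The two layouts can be summarized by their element-offset computations; the following helper functions are an illustrative sketch (not cuDNN APIs) for fully-packed tensors:

#include <stddef.h>

// Linear offset of element (n, c, h, w) in a fully-packed NCHW tensor:
size_t offset_nchw(int n, int c, int h, int w, int C, int H, int W)
{
    return (((size_t)n * C + c) * H + h) * W + w;
}

// Linear offset of the same element in a fully-packed NHWC tensor:
size_t offset_nhwc(int n, int c, int h, int w, int C, int H, int W)
{
    return (((size_t)n * H + h) * W + w) * C + c;
}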
6.4. NC/32HW32 Memory Layout
7. Thread Safety
The cuDNN library is thread-safe, and its functions can be called from multiple host threads, so long as the threads do not share the same cuDNN handle simultaneously. When creating a per-thread cuDNN handle, it is recommended that a single synchronous call of cudnnCreate() be made first, before each thread creates its own handle asynchronously, to avoid serial behavior.
8. Reproducibility (determinism)
By design, most of cuDNN's routines from a given version generate the same bit-wise results across runs when executed on GPUs with the same architecture. However, bit-wise reproducibility is not guaranteed across versions, as the implementation of a given routine may change. A few routines do not guarantee reproducibility even within a version because they use atomic operations; these include cudnnConvolutionBackwardFilter when CUDNN_CONVOLUTION_BWD_FILTER_ALGO_0 or CUDNN_CONVOLUTION_BWD_FILTER_ALGO_3 is used, cudnnConvolutionBackwardData when CUDNN_CONVOLUTION_BWD_DATA_ALGO_0 is used, and cudnnPoolingBackward when CUDNN_POOLING_MAX is used.
9. Scaling Parameters
Many cuDNN routines like cudnnConvolutionForward() accept pointers in host memory to scaling factors alpha and beta. These scaling factors are used to blend the computed values with the prior values in the destination tensor as follows:
dstValue = alpha*computedValue + beta*priorDstValue
When beta is zero, the output is not read and may contain uninitialized data (including NaN).
The alpha and beta scaling parameters should be stored as:
- float for HALF and FLOAT tensors, and
- double for DOUBLE tensors.
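For example, the scaling parameters are passed by host pointer to routines such as cudnnConvolutionForward(). The following sketch assumes the handle, descriptors, device buffers, algorithm, and workspace have already been set up:

// FLOAT (and HALF) tensors take float scaling parameters.
float alpha = 1.0f;   /* scales the computed convolution result      */
float beta  = 0.0f;   /* 0: overwrite y; nonzero: blend with prior y */

checkCudnnErr(cudnnConvolutionForward(handle,
                                      &alpha, xDesc, x, wDesc, w,
                                      convDesc, algo,
                                      workspace, workspaceSize,
                                      &beta, yDesc, y));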
Type Conversion
When the data input x, the filter input w and the output y are all in INT8 data type, the function cudnnConvolutionBiasActivationForward() will perform the type conversion as shown in Figure 6:
10. Tensor Core Operations
10.1. Basics
The default math mode is CUDNN_DEFAULT_MATH, which indicates that the Tensor Core operations will be avoided by the library. Because the CUDNN_TENSOR_OP_MATH mode uses the Tensor Cores, it is possible that these two modes generate slightly different numerical results due to different sequencing of the floating-point operations.
For example, the result of multiplying two matrices using Tensor Core operations is very close, but not always identical, to the result achieved using a sequence of scalar floating-point operations. For this reason, the cuDNN library requires an explicit user opt-in before enabling the use of Tensor Core operations.
However, experiments with training common deep learning models show negligible differences between using Tensor Core operations and scalar floating point paths, as measured by both the final network accuracy and the iteration count to convergence. Consequently, the cuDNN library treats both modes of operation as functionally indistinguishable and allows for the scalar paths to serve as legitimate fallbacks for cases in which the use of Tensor Core operations is unsuitable.
Kernels using Tensor Core operations are available for both convolutions and RNNs.
See also Training with Mixed Precision.
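The explicit opt-in mentioned above is made on the relevant descriptor; for convolutions, a minimal sketch (cudnnConvDesc assumed to be an existing convolution descriptor) is:

// Permit Tensor Core kernels; CUDNN_DEFAULT_MATH keeps the scalar paths.
checkCudnnErr(cudnnSetConvolutionMathType(cudnnConvDesc, CUDNN_TENSOR_OP_MATH));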
10.2. Convolution Functions
10.2.1. Prerequisites
10.2.2. Supported Algorithms
Supported Convolution Function | Supported Algos |
---|---|
cudnnConvolutionForward | CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_PRECOMP_GEMM, CUDNN_CONVOLUTION_FWD_ALGO_WINOGRAD_NONFUSED |
cudnnConvolutionBackwardData | CUDNN_CONVOLUTION_BWD_DATA_ALGO_1, CUDNN_CONVOLUTION_BWD_DATA_ALGO_WINOGRAD_NONFUSED |
cudnnConvolutionBackwardFilter | CUDNN_CONVOLUTION_BWD_FILTER_ALGO_1, CUDNN_CONVOLUTION_BWD_FILTER_ALGO_WINOGRAD_NONFUSED |
10.2.3. Data And Filter Formats
10.3. RNN Functions
10.3.1. Prerequisites
10.3.2. Supported Algorithms
10.3.3. Data And Filter Formats
See also Features Of RNN Functions.
10.4. Tensor Transformations
10.4.1. FP16 Data
10.4.2. FP32-to-FP16 Conversion
For Convolutions
// Set the math type to allow cuDNN to use Tensor Cores:
checkCudnnErr(cudnnSetConvolutionMathType(cudnnConvDesc, CUDNN_TENSOR_OP_MATH_ALLOW_CONVERSION));
For RNNs
// Set the math type to allow cuDNN to use Tensor Cores:
checkCudnnErr(cudnnSetRNNMatrixMathType(cudnnRnnDesc, CUDNN_TENSOR_OP_MATH_ALLOW_CONVERSION));
10.4.3. Padding
// Set NCHW Tensor dimensions, not necessarily as multiples of eight (only the input tensor is shown here):
int dimA[] = {1, 7, 32, 32};
int strideA[] = {7168, 1024, 32, 1};
10.4.4. Folding
Folding enables the input tensors to be transformed into a format that the Tensor Cores support (i.e., no strides).
10.4.5. Conversion Between NCHW And NHWC
If your input (and output) are NCHW, then expect a layout change.
Non-Tensor Op convolutions will not perform conversions between NCHW and NHWC.
In very rare and difficult-to-qualify cases that are a complex function of padding and filter sizes, it is possible that Tensor Ops are not enabled. In such cases, users should pre-pad their inputs.
10.5. Guidelines For A Deep Learning Compiler
- Make sure that the convolution operation is eligible for Tensor Cores by avoiding any combinations of large padding and large filters.
- Transform the inputs and filters to NHWC, and pre-pad the channel and batch sizes to be a multiple of 8 (a sketch follows this list).
- Make sure that all user-provided tensors, workspace, and reserve space are aligned to 128-bit boundaries.
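The padding and alignment guidelines can be checked with simple helpers; the following is an illustrative sketch (these are not cuDNN APIs):

#include <stdint.h>

// Round a dimension (e.g., the channel or batch size) up to a multiple of 8:
static int round_up_to_multiple(int x, int m)
{
    return ((x + m - 1) / m) * m;     /* round_up_to_multiple(7, 8) == 8 */
}

// Check that a buffer is aligned to a 128-bit (16-byte) boundary:
static int is_128bit_aligned(const void *p)
{
    return ((uintptr_t)p % 16) == 0;
}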
11. GPU And Driver Requirements
For the latest compatible versions of the OS, CUDA, the CUDA driver, and the NVIDIA hardware, refer to the cuDNN Support Matrix.
12. Backward Compatibility And Deprecation Policy
The old deprecation policy required three major library releases to complete an API update. During this process, the original function name was first assigned to the legacy API, and then to the revised API, depending on the library version. A user wishing to migrate to the new API version had to update their code twice. In the first update, the original call foo() had to be changed to foo_vN(), where N is the new major cuDNN version. After the next major cuDNN release, the foo_vN() function had to be renamed back to foo(). Clearly, this process could be difficult for code maintenance, especially when many functions are upgraded.
cuDNN version | Explanation |
---|---|
Major release 8 | The updated API is introduced as foo_v8(). The deprecated API foo() is kept unchanged to maintain backward compatibility until the next major release. |
Major release 9 | The deprecated API foo() is permanently removed and its name is not reused. The foo_v8() function supersedes the retired call foo(). |
If the existing API needs to be updated, a new function flavor is introduced with the _v tag followed by the current, major cuDNN version. In the next major release, the deprecated function is removed, and its name is never reused. A brand-new API is first introduced without the _v tag.
The revised deprecation scheme allows us to retire the legacy API in just one major release. Similarly to the previous API deprecation policy, the user is able to compile the legacy code without any changes using the next major release of the cuDNN library. The backward compatibility ends when another major cuDNN release is introduced.
The updated function name embeds the cuDNN version in which the API call was modified. As a result, API changes will be easier to track and document.
The new deprecation policy is applied also to pending API changes from previous cuDNN releases. For example, according to the old deprecation policy, cudnnSetRNNDescriptor_v6() should be removed in cuDNN version 8 and the upgraded call cudnnSetRNNDescriptor() with the same arguments and behavior should be kept. Instead, the new deprecation policy is applied to this case and the tagged function is kept.
Deprecated functions and data types are marked with a deprecation attribute, so referencing them produces a compiler warning, for example:
warning: ‘cudnnStatus_t cudnnSetRNNMatrixMathType(cudnnRNNDescriptor_t, cudnnMathType_t)’ is deprecated [-Wdeprecated-declarations]
or
warning C4996: 'cudnnSetRNNMatrixMathType': was declared deprecated
The above warnings are disabled by default to avoid potential build breaks in software setups where compiler warnings are treated as errors; they can be enabled by defining the CUDNN_WARN_DEPRECATED macro when compiling (for example, passing -DCUDNN_WARN_DEPRECATED to the compiler).
Note that simply swapping in the cuDNN version 8 shared library files in place of the older version 7 files will not work. The user source code needs to be recompiled from scratch with the cuDNN version 8 headers and linked with the version 8 libraries.
13. Grouped Convolutions
Basic Idea
Conceptually, in grouped convolutions, the input channels and the filter channels are split into a groupCount number of independent groups, with each group having a reduced number of channels. The convolution operation is then performed separately on these input and filter groups.
For example, consider the following: if the number of input channels is 4 and the number of filter channels is 12, then for a normal, ungrouped convolution, the number of computation operations performed is 12*4.
If the groupCount is set to 2, then there are now two input channel groups of two input channels each, and two filter channel groups of six filter channels each.
As a result, each grouped convolution will now perform 2*6 computation operations, and two such grouped convolutions are performed. Hence the computation savings are 2x: (12*4)/(2*(2*6)) .
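In the API, grouped convolution is enabled by setting the group count on an existing convolution descriptor; a minimal sketch of the example above (convDesc assumed already created):

// Split the input and filter channels into 2 independent groups:
checkCudnnErr(cudnnSetConvolutionGroupCount(convDesc, 2));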
14. API Logging
The log output contains variable names, data types, parameter values, device pointers, process ID, thread ID, cuDNN handle, CUDA stream ID, and metadata such as time of the function call in microseconds.
When logging is enabled, the log output will be handled by the built-in default callback function. The user may also write their own callback function, and use the cudnnSetCallback() to pass in the function pointer of their own callback function. The following is a sample output of the API log.
Function cudnnSetActivationDescriptor() called:
mode: type=cudnnActivationMode_t; val=CUDNN_ACTIVATION_RELU (1);
reluNanOpt: type=cudnnNanPropagation_t; val=CUDNN_NOT_PROPAGATE_NAN (0);
coef: type=double; val=1000.000000;
Time: 2017-11-21T14:14:21.366171 (0d+0h+1m+5s since start)
Process: 21264, Thread: 21264, cudnn_handle: NULL, cudnn_stream: NULL.
There are two methods to enable API logging.
Method 1: Using Environment Variables
See also Table 3 for the impact on the performance of API logging using environment variables.
Environment variables | CUDNN_LOGINFO_DBG=0 | CUDNN_LOGINFO_DBG=1 |
---|---|---|
CUDNN_LOGDEST_DBG not set | No logging output; no performance loss | No logging output; no performance loss |
CUDNN_LOGDEST_DBG=NULL | No logging output; no performance loss | No logging output; no performance loss |
CUDNN_LOGDEST_DBG=stdout or stderr | No logging output; no performance loss | Logging to stdout or stderr; some performance loss |
CUDNN_LOGDEST_DBG=filename.txt | No logging output; no performance loss | Logging to filename.txt; some performance loss |
Method 2: Using API Function Calls
To use API function calls to enable API logging, refer to the API description of cudnnSetCallback() and cudnnGetCallback().
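As a sketch of Method 2 (refer to the API Reference for the authoritative signatures, which are assumed here), a custom callback can be registered as follows:

#include <stdio.h>
#include <cudnn.h>

// A user-supplied callback that prints each log message to stderr.
void myLoggingCallback(cudnnSeverity_t sev, void *udata,
                       const cudnnDebug_t *dbg, const char *msg)
{
    (void)udata; (void)dbg;   /* unused in this sketch */
    fprintf(stderr, "[cuDNN sev=%d] %s\n", (int)sev, msg);
}

// Enable logging of errors, warnings, and informational messages:
void enableCudnnLogging(void)
{
    cudnnSetCallback(CUDNN_SEV_ERROR_EN | CUDNN_SEV_WARNING_EN | CUDNN_SEV_INFO_EN,
                     NULL, myLoggingCallback);
}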
15. Features Of RNN Functions
For each of these terms, the short-form versions shown in the parenthesis are used in the tables below for brevity: CUDNN_RNN_ALGO_STANDARD (_ALGO_STANDARD), CUDNN_RNN_ALGO_PERSIST_STATIC (_ALGO_PERSIST_STATIC), CUDNN_RNN_ALGO_PERSIST_DYNAMIC (_ALGO_PERSIST_DYNAMIC), and CUDNN_TENSOR_OP_MATH_ALLOW_CONVERSION (_ALLOW_CONVERSION).
Functions | Input/output layout supported | Supports variable sequence length in batch | Commonly supported |
---|---|---|---|
cudnnRNNForwardInference, cudnnRNNForwardTraining, cudnnRNNBackwardData, cudnnRNNBackwardWeights | Only sequence-major, packed (non-padded) | Only with _ALGO_STANDARD; requires input sequences to be sorted in descending order by length | Mode (cell type) supported: CUDNN_RNN_RELU, CUDNN_RNN_TANH, CUDNN_LSTM, CUDNN_GRU. Algo supported (see the table below for an elaboration on these algorithms): _ALGO_STANDARD, _ALGO_PERSIST_STATIC, _ALGO_PERSIST_DYNAMIC. Math mode supported: CUDNN_DEFAULT_MATH, CUDNN_TENSOR_OP_MATH (will automatically fall back if run on pre-Volta or if the algo does not support Tensor Cores), _ALLOW_CONVERSION (may do down conversion to utilize Tensor Cores). Direction mode supported: CUDNN_UNIDIRECTIONAL, CUDNN_BIDIRECTIONAL. RNN input mode: CUDNN_LINEAR_INPUT, CUDNN_SKIP_INPUT |
cudnnRNNForwardInferenceEx, cudnnRNNForwardTrainingEx, cudnnRNNBackwardDataEx, cudnnRNNBackwardWeightsEx | Sequence-major unpacked, batch-major unpacked, and sequence-major packed | Only with _ALGO_STANDARD; for unpacked layout, no input sorting is required; for packed layout, requires input sequences to be sorted in descending order by length | Same as above |
Features | _ALGO_STANDARD | _ALGO_PERSIST_STATIC | _ALGO_PERSIST_DYNAMIC |
---|---|---|---|
Half input, single accumulation, half output | Supported: half intermediate storage, single accumulation | Supported: half intermediate storage, single accumulation | Supported: half intermediate storage, single accumulation |
Single input, single accumulation, single output | Supported: if running on Volta with CUDNN_TENSOR_OP_MATH_ALLOW_CONVERSION, will down-convert and use half intermediate storage; otherwise, single intermediate storage and single accumulation | Same as _ALGO_STANDARD | Same as _ALGO_STANDARD |
Double input, double accumulation, double output | Supported: double intermediate storage, double accumulation | Not Supported | Supported: double intermediate storage, double accumulation |
LSTM recurrent projection | Supported | Not Supported | Not Supported |
LSTM cell clipping | Supported | Supported | Supported |
Variable sequence length in batch | Supported | Not Supported | Not Supported |
Tensor Cores on Volta/Xavier | Supported: for half input/output, acceleration requires setting CUDNN_TENSOR_OP_MATH or CUDNN_TENSOR_OP_MATH_ALLOW_CONVERSION, and requires inputSize and hiddenSize to be a multiple of 8; for single input/output, acceleration requires setting CUDNN_TENSOR_OP_MATH_ALLOW_CONVERSION, and requires inputSize and hiddenSize to be a multiple of 8 | Not Supported: will execute normally, ignoring CUDNN_TENSOR_OP_MATH or _ALLOW_CONVERSION | Not Supported: will execute normally, ignoring CUDNN_TENSOR_OP_MATH or _ALLOW_CONVERSION |
Other limitations | | Max problem size is limited by GPU specifications | Requires real-time compilation through NVRTC |
16. Mixed Precision Numerical Accuracy
For example, when the computation is performed in FP32 and the output is in FP16, CUDNN_CONVOLUTION_BWD_FILTER_ALGO_0 (ALGO_0) has lower accuracy compared to CUDNN_CONVOLUTION_BWD_FILTER_ALGO_1 (ALGO_1). This is because ALGO_0 does not use extra workspace and is forced to accumulate the intermediate results in FP16 (half-precision float), which reduces accuracy. ALGO_1, on the other hand, uses additional workspace to accumulate the intermediate values in FP32 (full-precision float).
17. Operation Fusion Via The Backend API
Fusion Graph Pattern | Supported Device Compute Capabilities | Supported Data Config and Layout | Supported Engine Types |
---|---|---|---|
Conv_Bias_Add_activation | All that cuDNN supports | Same as cudnnConvolutionBiasActivationForward() | Pattern matching engines, runtime fusion engines |
Scale_Bias_Activation_convolution_genStats | Compute capability 70 or above | PSEUDO_HALF_CONFIG, NHWC layout | Pattern matching engines, runtime fusion engines |
Convolution_Pointwise | Compute capability 75 or above | Flexible | Runtime fusion engines |
Gemm_Pointwise | Compute capability 75 or above | Flexible | Runtime fusion engines |
18. Troubleshooting
18.1. FAQs
Q: Where in the software stack does cuDNN sit? What is the interaction between CUDA, cuDNN, and TensorRT?
Q: I’m not sure if I should use cuDNN for inference or training. How does it compare with TensorRT?
A: cuDNN provides the building blocks for common routines such as convolution, pooling, activation, and RNNs/LSTMs. You can use cuDNN for both training and inference. It differs from TensorRT in that the latter is a programmable inference accelerator, much like a framework: TensorRT sees the whole graph and optimizes the network by fusing/combining layers and optimizing kernel selection for improved latency, throughput, power efficiency, and reduced memory requirements.
A rule of thumb: check out TensorRT first and see if it meets your inference needs; if it doesn't, then look at cuDNN for a closer, more in-depth perspective.
Q: How do the heuristics in cuDNN work? How do they know what the optimal solution is for a given problem?
A: NVIDIA actively monitors the Deep Learning space for important problem specifications such as commonly used models. The heuristics are produced by sampling a portion of these problem specifications with available computational choices. Over time, more models are discovered and incorporated into the heuristics.
Q: Is cuDNN going to support running arbitrary graphs?
A: No, we don’t plan to become a framework and execute the whole graph one op at a time. At this time, we are focused on a subgraph given by the user, where we try to produce an optimized fusion kernel. We will document the rules regarding what can be fused and what cannot. The goal is to support general and flexible fusion; however, it will take time, and there will be limits to what it can do in the cuDNN version 8.0.0 launch.
Q: What’s the difference between TensorRT, TensorFlow/XLA’s fusion, and cuDNN’s fusion?
A: TensorRT and TensorFlow are frameworks; they see the whole graph and can do global optimization, but they generally only fuse pointwise ops together. On the other hand, cuDNN targets a subgraph, but can fuse convolutions with pointwise ops, thus providing potentially better performance. cuDNN fusion kernels can be utilized by TensorRT and TensorFlow/XLA as part of their global graph optimization.
Q: Can I write an application calling cuDNN directly?
A: Yes, you can call the C/C++ API directly. Usually, data scientists would wait for framework integration and use the Python API which is more convenient. However, if your use case requires better performance, you can target the cuDNN API directly.
Q: How does mixed precision training work?
A: Several components need to work together to make mixed precision training possible. cuDNN needs to support the layers with the required datatype config and have optimized kernels that run very fast. In addition, frameworks include a module called automatic mixed precision (AMP), which intelligently decides which ops can run in a lower precision without affecting convergence, and minimizes the number of type conversions/transposes in the entire graph. These work together to give you a speedup. For more information, see Mixed Precision Numerical Accuracy.
Q: How can I pick the fastest convolution kernels with cuDNN version 8.0.0?
A: In the API introduced in cuDNN v8, convolution kernels are grouped by similar computation and numerical properties into engines. Every engine has a queryable set of performance tuning knobs. A computation case such as a convolution operation graph can be computed using different valid combinations of engines and their knobs, known as an engine configuration. Users can query an array of engine configurations for any given computation case ordered by performance, from fastest to slowest according to cuDNN’s own heuristics. Alternately, users can generate all possible engine configurations by querying the engine count and available knobs for each engine. This generated list could be used for auto-tuning or the user could create their own heuristics.
Q: Why is a cuDNN version 8.0 convolution API call much slower on the first call than on subsequent calls?
A: Due to the library split, cuDNN version 8.0 API will only load the necessary kernels on the first API call that requires it. In previous versions, this load would have been observed in the first cuDNN API call that triggers CUDA context initialization, typically cudnnCreate(). In version 8.0, this is delayed until the first sub-library call that triggers CUDA context initialization. Users who desire to have CUDA context preloaded can call the new cudnnCnnInferVersionCheck() API (or its related cousins), which has the side effect of initializing a CUDA context. This will reduce the run time for all subsequent API calls.
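For example, a user who wants the CUDA context and kernels preloaded before timing-sensitive work might call (a sketch; see the API Reference for the version-check entry points of each sub-library):

// Side effect: initializes the CUDA context and loads the CNN inference kernels.
checkCudnnErr(cudnnCnnInferVersionCheck());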
Q: How do I build the cuDNN version 8.0.0 split library?
A: cuDNN v8.0 library is split into multiple sub-libraries. Each library contains a subset of the API. Users can link directly against the individual libraries or link with a dlopen layer which follows a plugin architecture.
To link against an individual library, users can directly specify it and its dependencies on the linker command line. For example, for infer libraries: -lcudnn_adv_infer, -lcudnn_cnn_infer, or -lcudnn_ops_infer.
To link against all of the libraries, specify: -lcudnn_adv_train, -lcudnn_cnn_train, -lcudnn_ops_train, -lcudnn_adv_infer, -lcudnn_cnn_infer, and -lcudnn_ops_infer.
The dependency order is documented in the cuDNN 8.0.0 Preview Release Notes and the cuDNN API Reference.
Alternatively, the user can continue to link against a shim layer (-lcudnn) which can dlopen the correct library that provides the implementation of the function. When the function is called for the first time, the dynamic loading of the library takes place.
-lcudnn
Q: What are the new APIs in cuDNN version 8.0.0?
A: The new cuDNN APIs are listed in the cuDNN 8.0.0 Release Notes as well as in the API Changes For cuDNN 8.0.0.
18.2. How Do I Report A Bug?
- Register for the NVIDIA Developer website.
- Log in to the developer site.
- Click on your name in the upper right corner.
- Click My account > My Bugs and select Submit a New Bug.
- Fill out the bug reporting page. Be descriptive and if possible, provide the steps that you are following to help reproduce the problem.
- Click Submit a bug.
18.3. Support
For questions or to provide feedback, please contact cuDNN@nvidia.com.
19. Acknowledgments
19.1. University of Tennessee
Copyright (c) 2010 The University of Tennessee. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer listed in this license in the documentation and/or other materials provided with the distribution. * Neither the name of the copyright holders nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
19.2. University of California, Berkeley
COPYRIGHT All contributions by the University of California: Copyright (c) 2014, The Regents of the University of California (Regents) All rights reserved. All other contributions: Copyright (c) 2014, the respective contributors All rights reserved. Caffe uses a shared copyright model: each contributor holds copyright over their contributions to Caffe. The project versioning records all such contribution and copyright details. If a contributor wants to further mark their specific copyright on a particular contribution, they should indicate their copyright solely in the commit message of the change when it is committed. LICENSE Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. CONTRIBUTION AGREEMENT By contributing to the BVLC/caffe repository through pull-request, comment, or otherwise, the contributor releases their content to the license and copyright terms herein.
19.3. Facebook AI Research, New York
Copyright (c) 2014, Facebook, Inc. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name Facebook nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Additional Grant of Patent Rights "Software" means fbcunn software distributed by Facebook, Inc. Facebook hereby grants you a perpetual, worldwide, royalty-free, non-exclusive, irrevocable (subject to the termination provision below) license under any rights in any patent claims owned by Facebook, to make, have made, use, sell, offer to sell, import, and otherwise transfer the Software. For avoidance of doubt, no license is granted under Facebook’s rights in any patent claims that are infringed by (i) modifications to the Software made by you or a third party, or (ii) the Software in combination with any software or other technology provided by you or a third party. The license granted hereunder will terminate, automatically and without notice, for anyone that makes any claim (including by filing any lawsuit, assertion or other action) alleging (a) direct, indirect, or contributory infringement or inducement to infringe any patent: (i) by Facebook or any of its subsidiaries or affiliates, whether or not such claim is related to the Software, (ii) by any party if such claim arises in whole or in part from any software, product or service of Facebook or any of its subsidiaries or affiliates, whether or not such claim is related to the Software, or (iii) by any party relating to the Software; or (b) that any right in any patent claim of Facebook is invalid or unenforceable.
Notice
This document is provided for information purposes only and shall not be regarded as a warranty of a certain functionality, condition, or quality of a product. NVIDIA Corporation (“NVIDIA”) makes no representations or warranties, expressed or implied, as to the accuracy or completeness of the information contained in this document and assumes no responsibility for any errors contained herein. NVIDIA shall have no liability for the consequences or use of such information or for any infringement of patents or other rights of third parties that may result from its use. This document is not a commitment to develop, release, or deliver any Material (defined below), code, or functionality.
NVIDIA reserves the right to make corrections, modifications, enhancements, improvements, and any other changes to this document, at any time without notice.
Customer should obtain the latest relevant information before placing orders and should verify that such information is current and complete.
NVIDIA products are sold subject to the NVIDIA standard terms and conditions of sale supplied at the time of order acknowledgement, unless otherwise agreed in an individual sales agreement signed by authorized representatives of NVIDIA and customer (“Terms of Sale”). NVIDIA hereby expressly objects to applying any customer general terms and conditions with regards to the purchase of the NVIDIA product referenced in this document. No contractual obligations are formed either directly or indirectly by this document.
NVIDIA products are not designed, authorized, or warranted to be suitable for use in medical, military, aircraft, space, or life support equipment, nor in applications where failure or malfunction of the NVIDIA product can reasonably be expected to result in personal injury, death, or property or environmental damage. NVIDIA accepts no liability for inclusion and/or use of NVIDIA products in such equipment or applications and therefore such inclusion and/or use is at customer’s own risk.
NVIDIA makes no representation or warranty that products based on this document will be suitable for any specified use. Testing of all parameters of each product is not necessarily performed by NVIDIA. It is customer’s sole responsibility to evaluate and determine the applicability of any information contained in this document, ensure the product is suitable and fit for the application planned by customer, and perform the necessary testing for the application in order to avoid a default of the application or the product. Weaknesses in customer’s product designs may affect the quality and reliability of the NVIDIA product and may result in additional or different conditions and/or requirements beyond those contained in this document. NVIDIA accepts no liability related to any default, damage, costs, or problem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this document or (ii) customer product designs.
No license, either expressed or implied, is granted under any NVIDIA patent right, copyright, or other NVIDIA intellectual property right under this document. Information published by NVIDIA regarding third-party products or services does not constitute a license from NVIDIA to use such products or services or a warranty or endorsement thereof. Use of such information may require a license from a third party under the patents or other intellectual property rights of the third party, or a license from NVIDIA under the patents or other intellectual property rights of NVIDIA.
Reproduction of information in this document is permissible only if approved in advance by NVIDIA in writing, reproduced without alteration and in full compliance with all applicable export laws and regulations, and accompanied by all associated conditions, limitations, and notices.
THIS DOCUMENT AND ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, “MATERIALS”) ARE BEING PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE. TO THE EXTENT NOT PROHIBITED BY LAW, IN NO EVENT WILL NVIDIA BE LIABLE FOR ANY DAMAGES, INCLUDING WITHOUT LIMITATION ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, PUNITIVE, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARISING OUT OF ANY USE OF THIS DOCUMENT, EVEN IF NVIDIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. Notwithstanding any damages that customer might incur for any reason whatsoever, NVIDIA’s aggregate and cumulative liability towards customer for the products described herein shall be limited in accordance with the Terms of Sale for the product.
VESA DisplayPort
DisplayPort and DisplayPort Compliance Logo, DisplayPort Compliance Logo for Dual-mode Sources, and DisplayPort Compliance Logo for Active Cables are trademarks owned by the Video Electronics Standards Association in the United States and other countries.
HDMI
HDMI, the HDMI logo, and High-Definition Multimedia Interface are trademarks or registered trademarks of HDMI Licensing LLC.
ARM
ARM, AMBA and ARM Powered are registered trademarks of ARM Limited. Cortex, MPCore and Mali are trademarks of ARM Limited. All other brands or product names are the property of their respective holders. "ARM" is used to represent ARM Holdings plc; its operating company ARM Limited; and the regional subsidiaries ARM Inc.; ARM KK; ARM Korea Limited.; ARM Taiwan Limited; ARM France SAS; ARM Consulting (Shanghai) Co. Ltd.; ARM Germany GmbH; ARM Embedded Technologies Pvt. Ltd.; ARM Norway, AS and ARM Sweden AB.
Trademarks
NVIDIA, the NVIDIA logo, and cuBLAS, CUDA, CUDA Toolkit, cuDNN, DALI, DIGITS, DGX, DGX-1, DGX-2, DGX Station, DLProf, GPU, JetPack, Jetson, Kepler, Maxwell, NCCL, Nsight Compute, Nsight Systems, NVCaffe, NVIDIA Ampere GPU architecture, NVIDIA Deep Learning SDK, NVIDIA Developer Program, NVIDIA GPU Cloud, NVLink, NVSHMEM, PerfWorks, Pascal, SDK Manager, T4, Tegra, TensorRT, TensorRT Inference Server, Tesla, TF-TRT, Triton Inference Server, Turing, and Volta are trademarks and/or registered trademarks of NVIDIA Corporation in the United States and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.