Abstract

This cuDNN 8.6.0 Installation Guide provides step-by-step instructions on how to install and check for correct operation of NVIDIA cuDNN on Linux and Microsoft Windows systems.

For previously released cuDNN installation documentation, refer to the NVIDIA cuDNN Archives.

1. Overview

The NVIDIA® CUDA® Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers. cuDNN is part of the NVIDIA Deep Learning SDK.

Deep learning researchers and framework developers worldwide rely on cuDNN for high-performance GPU acceleration. It allows them to focus on training neural networks and developing software applications rather than spending time on low-level GPU performance tuning. cuDNN accelerates widely used deep learning frameworks and is freely available to members of the NVIDIA Developer Program™.

2. Installing cuDNN on Linux

2.1. Prerequisites

For the latest compatibility software versions of the OS, NVIDIA CUDA, the CUDA driver, and the NVIDIA hardware, see the NVIDIA cuDNN Support Matrix.

2.1.1. Installing NVIDIA Graphics Drivers

Install up-to-date NVIDIA graphics drivers on your Linux system.

Procedure

  1. Go to: NVIDIA download drivers
  2. Select the GPU and OS version from the drop-down menus.
  3. Download and install the NVIDIA graphics driver as indicated on that web page. For more information, select the ADDITIONAL INFORMATION tab for step-by-step instructions for installing a driver.
  4. Restart your system to ensure that the graphics driver takes effect.
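After the system restarts, you can optionally confirm that the new driver is loaded before continuing; this quick check is not part of the official procedure, but a working driver reports its version and the detected GPUs:
    nvidia-smi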

2.1.2. Installing the CUDA Toolkit for Linux

Refer to the following instructions for installing CUDA on Linux, including the CUDA driver and toolkit: NVIDIA CUDA Installation Guide for Linux.
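As an optional sanity check after installing the toolkit, you can confirm that the CUDA compiler is reachable and reports the expected release; if nvcc is not found, /usr/local/cuda/bin may still need to be added to your PATH:
    nvcc --version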

2.1.3. Installing Zlib

About this task

For Ubuntu users, to install the zlib package, run:
sudo apt-get install zlib1g
For RHEL users, to install the zlib package, run:
sudo yum install zlib
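To optionally confirm that the zlib shared library is visible to the dynamic linker, you can run:
ldconfig -p | grep libz.so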

2.2. Downloading cuDNN for Linux

In order to download cuDNN, ensure you are registered for the NVIDIA Developer Program.

Procedure

  1. Go to: NVIDIA cuDNN home page.
  2. Click Download.
  3. Complete the short survey and click Submit.
  4. Accept the Terms and Conditions. A list of available download versions of cuDNN displays.
  5. Select the cuDNN version that you want to install. A list of available resources displays.

2.3. Installing on Linux

The following steps describe how to build a cuDNN-dependent program. Choose the installation method that meets your environment's needs. For example, the tar file installation applies to all Linux platforms, the Debian package installation applies to Debian 11 and Ubuntu 18.04, 20.04, and 22.04, and the RPM package installation applies to RHEL 7 and 8.

About this task

In the following sections:
  • your CUDA directory path is referred to as /usr/local/cuda/
  • your cuDNN download path is referred to as <cudnnpath>

2.3.1. Tar File Installation

Before issuing the following commands, you must replace X.Y and 8.x.x.x with your specific CUDA and cuDNN versions and package date.

Procedure

  1. Navigate to your <cudnnpath> directory containing the cuDNN tar file.
  2. Extract the cuDNN package.
    $ tar -xvf cudnn-linux-x86_64-8.x.x.x_cudaX.Y-archive.tar.xz
  3. Copy the following files into the CUDA toolkit directory.
    $ sudo cp cudnn-*-archive/include/cudnn*.h /usr/local/cuda/include 
    $ sudo cp -P cudnn-*-archive/lib/libcudnn* /usr/local/cuda/lib64 
    $ sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
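After copying the files, you may optionally want to refresh the dynamic linker cache and confirm that the headers and libraries are where subsequent builds expect them; if /usr/local/cuda/lib64 is not already covered by your linker configuration, you may also need an entry under /etc/ld.so.conf.d/ or an LD_LIBRARY_PATH setting:
    $ sudo ldconfig
    $ ls /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*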
    

2.3.2. Debian Local Installation

Download the Debian local repository installation package. Before issuing the following commands, you must replace X.Y and 8.x.x.x with your specific CUDA and cuDNN versions.

Procedure

  1. Navigate to your <cudnnpath> directory containing the cuDNN Debian local installer file.
  2. Enable the local repository.
    sudo dpkg -i cudnn-local-repo-${OS}-8.x.x.x_1.0-1_amd64.deb
    Or
    sudo dpkg -i cudnn-local-repo-${OS}-8.x.x.x_1.0-1_arm64.deb
  3. Import the CUDA GPG key.
    sudo cp /var/cudnn-local-repo-*/cudnn-local-*-keyring.gpg /usr/share/keyrings/
  4. Refresh the repository metadata.
    sudo apt-get update
  5. Install the runtime library.
    sudo apt-get install libcudnn8=8.x.x.x-1+cudaX.Y
  6. Install the developer library.
    sudo apt-get install libcudnn8-dev=8.x.x.x-1+cudaX.Y
  7. Install the code samples and the cuDNN library documentation.
    sudo apt-get install libcudnn8-samples=8.x.x.x-1+cudaX.Y
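To optionally confirm which cuDNN packages and versions were installed, query dpkg:
    dpkg -l | grep cudnn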

2.3.3. RPM Local Installation

Download the RPM local repository installation package. Before issuing the following commands, you must replace X.Y and 8.x.x.x with your specific CUDA and cuDNN versions.

Procedure

  1. Navigate to your <cudnnpath> directory containing the cuDNN RPM local installer file.
  2. Enable the local repository.
    sudo rpm -i cudnn-local-repo-${OS}-8.x.x.x-1.0-1.x86_64.rpm
    Or
    sudo rpm -i cudnn-local-repo-${OS}-8.x.x.x-1.0-1.aarch64.rpm
  3. Refresh the repository metadata.
    sudo yum clean all
  4. Install the runtime library.
    sudo yum install libcudnn8-8.x.x.x-1.cudaX.Y
  5. Install the developer library.
    sudo yum install libcudnn8-devel-8.x.x.x-1.cudaX.Y
  6. Install the code samples and the cuDNN library documentation.
    sudo yum install libcudnn8-samples-8.x.x.x-1.cudaX.Y
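To optionally confirm which cuDNN packages and versions were installed, query the RPM database:
    rpm -qa | grep cudnn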

2.3.4. Package Manager Installation

The Package Manager installation interfaces with your system's package manager.

If the installation packages are available online, the package manager automatically downloads and installs them.

2.3.4.1. Ubuntu Network Installation

These are the installation instructions for Debian 11, Ubuntu 18.04, 20.04, and 22.04 users.
Procedure
  1. Enable the repository. The following commands enable the repository containing information about the appropriate cuDNN libraries online for Debian 11, Ubuntu 18.04, 20.04, and 22.04.
    wget https://developer.download.nvidia.com/compute/cuda/repos/${OS}/x86_64/cuda-${OS}.pin
    sudo mv cuda-${OS}.pin /etc/apt/preferences.d/cuda-repository-pin-600
    sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/${OS}/x86_64/3bf863cc.pub
    sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/${OS}/x86_64/ /"
    sudo apt-get update

    Where ${OS} is debian11, ubuntu1804, ubuntu2004, or ubuntu2204.

  2. Install the cuDNN library:
    sudo apt-get install libcudnn8=${cudnn_version}-1+${cuda_version}
    sudo apt-get install libcudnn8-dev=${cudnn_version}-1+${cuda_version}
    Where:
    • ${cudnn_version} is 8.6.0.*
    • ${cuda_version} is cuda10.2 or cuda11.8
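If you are unsure which ${cudnn_version} values the repository provides, you can optionally list the candidate versions before installing; this assumes the repository was enabled in step 1:
    apt-cache policy libcudnn8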

2.3.4.2. RHEL Network Installation

These are the installation instructions for RHEL7 and RHEL8 users.
Procedure
  1. Enable the repository:
    sudo yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/${OS}/x86_64/cuda-${OS}.repo
    sudo yum clean all

    Where ${OS} is rhel7 or rhel8.

  2. Install the cuDNN library:
    sudo yum install libcudnn8-${cudnn_version}-1.${cuda_version}
    sudo yum install libcudnn8-devel-${cudnn_version}-1.${cuda_version}
    Where:
    • ${cudnn_version} is 8.6.0.*
    • ${cuda_version} is cuda10.2 or cuda11.8
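Likewise, to optionally see every cuDNN version the repository offers before installing, run:
    sudo yum --showduplicates list libcudnn8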

2.4. Verifying the Install on Linux

To verify that cuDNN is installed and running properly, compile the mnistCUDNN sample located in the /usr/src/cudnn_samples_v8 directory, which is installed by the cuDNN samples package.

Procedure

  1. Copy the cuDNN samples to a writable path.
    $ cp -r /usr/src/cudnn_samples_v8/ $HOME
  2. Go to the writable path.
    $ cd $HOME/cudnn_samples_v8/mnistCUDNN
  3. Compile the mnistCUDNN sample.
    $ make clean && make
  4. Run the mnistCUDNN sample.
    $ ./mnistCUDNN
    If cuDNN is properly installed and running on your Linux system, you will see a message similar to the following:
    Test passed!
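If you only need to confirm the installed cuDNN version without building the samples, a common check is to read the version macros from the installed header. The path below assumes a tar file installation into /usr/local/cuda; for package manager installations the header is typically under /usr/include instead:
    $ cat /usr/local/cuda/include/cudnn_version.h | grep CUDNN_MAJOR -A 2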

2.5. Upgrading from cuDNN 7.x.x to cuDNN 8.x.x

cuDNN version 8 can coexist with previous versions of cuDNN, so installing version 8 does not automatically remove an older revision such as v6 or v7. If you want the latest version, install cuDNN version 8 by following the installation steps.

About this task

To upgrade from cuDNN v7 to v8, refer to the Package Manager Installation section and follow the steps for your OS.

To switch between v7 and v8 installations, issue sudo update-alternatives --config libcudnn and choose the appropriate cuDNN version.
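To optionally review which cuDNN installations are registered with the alternatives system before switching, run:
update-alternatives --display libcudnn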

2.6. Troubleshooting

Join the NVIDIA Developer Forum to post questions and follow discussions.

3. Installing cuDNN on Windows

3.1. Prerequisites

For the latest compatibility software versions of the OS, CUDA, the CUDA driver, and the NVIDIA hardware, refer to the NVIDIA cuDNN Support Matrix.

3.1.1. Installing NVIDIA Graphic Drivers

Install up-to-date NVIDIA graphics drivers on your Windows system.

Procedure

  1. Go to: NVIDIA download drivers
  2. Select the GPU and OS version from the drop-down menus.
  3. Download and install the NVIDIA driver as indicated on that web page. For more information, select the ADDITIONAL INFORMATION tab for step-by-step instructions for installing a driver.
  4. Restart your system to ensure that the graphics driver takes effect.

3.1.2. Installing the CUDA Toolkit for Windows

Refer to the following instructions for installing CUDA on Windows, including the CUDA driver and toolkit: NVIDIA CUDA Installation Guide for Windows.

3.1.3. Installing Zlib

Zlib is a data compression software library that is needed by cuDNN.

Procedure

  1. Download and extract the zlib package from ZLIB DLL. Users with a 32-bit machine should download the 32-bit ZLIB DLL.
    Note: If using Chrome, the file may not automatically download. If this happens, right-click the link and choose Save link as…. Then, paste the URL into a browser window.
  2. Add the directory path of zlibwapi.dll to the environment variable PATH.
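To optionally confirm that zlibwapi.dll can now be found through PATH, open a new command prompt (so the updated PATH is picked up) and run:
    where zlibwapi.dll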

3.2. Downloading cuDNN for Windows

In order to download cuDNN, ensure you are registered for the NVIDIA Developer Program.

Procedure

  1. Go to: NVIDIA cuDNN home page.
  2. Click Download.
  3. Complete the short survey and click Submit.
  4. Accept the Terms and Conditions. A list of available download versions of cuDNN displays.
  5. Select the cuDNN version that you want to install. A list of available resources displays.
  6. Download the cuDNN package for Windows (zip).

3.3. Installing on Windows

The following steps describe how to build a cuDNN-dependent program.

About this task

You must replace 8.x and 8.x.y.z with your specific cuDNN version.

Package installation (zip)

In the following steps, the package directory path is referred to as <packagepath>.

Procedure

  1. Navigate to your <packagepath> directory containing the cuDNN package.
  2. Unzip the cuDNN package.
    cudnn-windows-x86_64-*-archive.zip
  3. Copy the following files from the unzipped package into the NVIDIA cuDNN directory.
    1. Copy bin\cudnn*.dll to C:\Program Files\NVIDIA\CUDNN\v8.x\bin.
    2. Copy include\cudnn*.h to C:\Program Files\NVIDIA\CUDNN\v8.x\include.
    3. Copy lib\cudnn*.lib to C:\Program Files\NVIDIA\CUDNN\v8.x\lib.
  4. Set the following environment variable to point to where cuDNN is located. To access the value of the $(PATH) environment variable, perform the following steps:
    1. Open a command prompt from the Start menu.
    2. Type Run and hit Enter.
    3. Issue the control sysdm.cpl command.
    4. Select the Advanced tab at the top of the window.
    5. Click Environment Variables at the bottom of the window.
    6. Add the NVIDIA cuDNN bin directory path to the PATH variable:
      Variable Name: PATH 
      Value to Add: C:\Program Files\NVIDIA\CUDNN\v8.x\bin
      
  5. Add cuDNN to your Visual Studio project.
    1. Open the Visual Studio project, right-click on the project name in Solution Explorer, and choose Properties.
    2. Click VC++ Directories and append C:\Program Files\NVIDIA\CUDNN\v8.x\include to the Include Directories field.
    3. Click Linker > General and append C:\Program Files\NVIDIA\CUDNN\v8.x\lib to the Additional Library Directories field.
    4. Click Linker > Input and append cudnn.lib to the Additional Dependencies field and click OK.
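Programs built this way load the cuDNN DLLs at run time, so the bin directory added to PATH earlier must remain reachable. Assuming the default cuDNN 8.x DLL name, you can optionally confirm this from a new command prompt:
    where cudnn64_8.dll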

3.4. Upgrading cuDNN

Navigate to the directory containing cuDNN and delete the old cuDNN bin, lib, and header files. Remove the path to the directory containing cuDNN from the $(PATH) environment variable. Reinstall a newer cuDNN version by following the steps in Installing on Windows.

3.5. Troubleshooting

Join the NVIDIA Developer Forum to post questions and follow discussions.

4. Cross-Compiling cuDNN Samples

This section describes how to cross-compile cuDNN samples.

4.1. NVIDIA DRIVE OS Linux

Follow the steps in this section to cross-compile cuDNN samples on NVIDIA DRIVE® OS Linux.

4.1.1. Installing the CUDA Toolkit for NVIDIA DRIVE OS

Before issuing the following commands, you must replace x-x with your specific CUDA version.

Procedure

  1. Download the Ubuntu package: cuda*ubuntu*_amd64.deb
  2. Download the cross compile package: cuda*-cross-aarch64*_all.deb
  3. Execute the following commands:
    sudo dpkg -i cuda*ubuntu*_amd64.deb
    sudo dpkg -i cuda*-cross-aarch64*_all.deb
    sudo apt-get update
    sudo apt-get install cuda-toolkit-x-x -y
    sudo apt-get install cuda-cross-aarch64* -y

4.1.2. Installing cuDNN for NVIDIA DRIVE OS

Procedure

  1. Download the cuDNN Debian for x86 HOST.
    cudnn-local-repo-ubuntu2004-*amd64.deb
  2. Download the cuDNN cross Debian for cross-compiling.
    cudnn[local/prune-87]repo-cross-aarch64-ubuntu2004*all.deb
  3. Install the cuDNN Debians for Linux.
    sudo dpkg -i cudnn-local-repo-ubuntu2004*amd64.deb
    sudo apt update
    sudo apt install libcudnn8
    sudo apt install libcudnn8-dev
    sudo apt install libcudnn8-samples
    
  4. Install the cuDNN Debians for cross-compiling on Linux.
    sudo dpkg -i cudnn[local/prune-87]repo-cross-aarch64-ubuntu2004*all.deb 
    sudo apt update
    sudo apt install libcudnn8-cross-aarch64
    

4.1.3. Cross-Compiling cuDNN Samples for NVIDIA DRIVE OS

Procedure

  1. Copy the cudnn_samples_v8 directory to your home directory:
    $ cp -r /usr/src/cudnn_samples_v8 $HOME
  2. For each sample, execute the following commands:
    $ cd $HOME/cudnn_samples_v8/(each sample)
    $ make TARGET_ARCH=aarch64
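To optionally confirm that a resulting binary was built for the target architecture rather than the host, inspect it with file; mnistCUDNN is used here only as an example sample:
    $ file $HOME/cudnn_samples_v8/mnistCUDNN/mnistCUDNN
An aarch64 cross-compiled binary is reported as an ELF 64-bit ARM aarch64 executable.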
    

4.2. NVIDIA DRIVE OS QNX

Follow the steps in this section to cross-compile cuDNN samples on NVIDIA DRIVE OS for QNX.

4.2.1. Installing the CUDA Toolkit for QNX

Before issuing the following commands, you must replace x-x with your specific CUDA version.

Procedure

  1. Download the Ubuntu package.
    cuda*ubuntu*_amd64.deb
  2. Download the cross compile package.
    cuda*-cross-aarch64*_all.deb
  3. Download the minimal toolkit package.
    cuda-repo-minimal-toolkit*_amd64.deb
  4. Execute the following commands:
    sudo dpkg -i cuda*ubuntu*_amd64.deb
    sudo dpkg -i cuda*-cross-aarch64*_all.deb
    sudo dpkg -i cuda-repo-minimal-toolkit*_amd64.deb
    sudo apt update
    sudo apt install cuda-qnx-standard-cross-qnx* -y
    sudo apt install cuda-toolkit-x-x -y
    sudo apt install cuda-qnx-safe-toolkit-x-x -y
    

4.2.2. Installing cuDNN for QNX

Procedure

  1. Download the cuDNN Debian for x86 HOST.
    cudnn-local-repo-ubuntu2004*amd64.deb
  2. Download the cuDNN cross Debian for cross-compiling.
    cudnn-[local/prune-87]repo-cross-aarch64-qnx*_all.deb
  3. Execute the following commands:
    sudo dpkg -i cudnn-local-repo-ubuntu2004*amd64.deb
    sudo dpkg -i cudnn[local/prune-87]repo-cross-aarch64-qnx*_all.deb
    sudo apt update
    sudo apt install libcudnn8-cross-qnx
    sudo apt install libcudnn8-dev
    sudo apt install libcudnn8-samples
    

4.2.3. Set the Environment Variables

To set the environment variables, issue the following commands:

About this task

export QNX_HOST={flash_dir}/toolchains/qnx_toolchain/host/linux/x86_64
export QNX_TARGET={flash_dir}/toolchains/qnx_toolchain/target/qnx7
export HOST_COMPILER=$QNX_HOST/usr/bin/aarch64-unknown-nto-qnx7.1.0-g++
export CUDNN_LIB_PATH=/usr/lib/aarch64-unknown-nto-qnx
export CUDNN_INCLUDE_PATH=/usr/include/aarch64-unknown-nto-qnx
export CUDA_PATH=/usr/local/cuda-safe-11.4/targets/aarch64-qnx
export PATH=$PATH:$QNX_HOST/usr/bin
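Before building, you can optionally confirm that the variables resolve to a working cross compiler; this only succeeds once {flash_dir} has been replaced with your actual flashing directory:
$HOST_COMPILER --version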

4.2.4. Cross-Compiling cuDNN Samples for QNX

Procedure

  1. Copy the cudnn_samples_v8 directory to your home directory.
    $ cp -r /usr/src/cudnn_samples_v8 $HOME
  2. For each sample, execute the following commands:
    cd $HOME/cudnn_samples_v8/(each sample)
    make TARGET_OS=QNX TARGET_ARCH=aarch64 NVCC=/usr/local/cuda-safe-11.4/bin/nvcc
    

4.3. Linux AArch64 SBSA

Follow the steps in this section to cross-compile cuDNN samples on Linux AArch64. Linux AArch64 incorporates Arm®-based CPU cores for Server Base System Architecture (SBSA).

4.3.1. Installing the CUDA Toolkit for Linux AArch64 SBSA

Before issuing the following commands, you must replace x-x with your specific CUDA version.

Procedure

  1. Download the Ubuntu package: cuda*ubuntu*_amd64.deb
  2. Download the cross compile package: cuda*-cross-aarch64*_all.deb
  3. Execute the following commands:

    sudo dpkg -i cuda*ubuntu*_amd64.deb
    sudo dpkg -i cuda*-cross-aarch64*_all.deb
    sudo apt-get update
    sudo apt-get install cuda-toolkit-x-x -y
    sudo apt-get install cuda-cross-aarch64* -y

4.3.2. Installing cuDNN for Linux AArch64 SBSA

Procedure

  1. Download the cuDNN Ubuntu package for your preferred CUDA toolkit version.
    cudnn-local-repo-*_amd64.deb
  2. Download the cross compile package.
    cudnn-local-repo-cross-sbsa-*_all.deb
  3. Execute the following commands:
    sudo dpkg -i cudnn-local-repo-*_amd64.deb
    sudo apt-get update
    sudo apt-get install libcudnn8 libcudnn8-dev libcudnn8-samples -y
    sudo dpkg -i cudnn-local-repo-cross-sbsa-*_all.deb
    sudo apt-get update
    sudo apt-get install libcudnn8-cross-sbsa -y
    
  4. Install AArch64 host compiler.
    sudo apt install g++-aarch64-linux-gnu
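You can optionally confirm that the host cross compiler installed correctly before building the samples:
    aarch64-linux-gnu-g++ --version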

4.3.3. Cross-Compiling cuDNN Samples for Linux AArch64 SBSA

Procedure

  1. Copy the cudnn_samples_v8 directory to your home directory:
    $ cp -r /usr/src/cudnn_samples_v8 $HOME
  2. For each sample, execute the following commands:
    $ cd $HOME/cudnn_samples_v8/(each sample)
    $ sudo make TARGET_ARCH=aarch64 SBSA=1
    

5. Appendix

ACKNOWLEDGEMENTS

NVIDIA would like to thank the following individuals and institutions for their contributions:
  • This product includes zlib - a general purpose compression library https://zlib.net/ Copyright © 1995-2017 Jean-loup Gailly and Mark Adler
  • This product includes zstr - a C++ zlib wrapper https://github.com/mateidavid/zstr Copyright © 2015 Matei David, Ontario Institute for Cancer Research
  • This product includes RapidJSON - A fast JSON parser/generator for C++ with both SAX/DOM style API https://github.com/Tencent/rapidjson Copyright © 2015 THL A29 Limited, a Tencent company, and Milo Yip.

Notices

Notice

This document is provided for information purposes only and shall not be regarded as a warranty of a certain functionality, condition, or quality of a product. NVIDIA Corporation (“NVIDIA”) makes no representations or warranties, expressed or implied, as to the accuracy or completeness of the information contained in this document and assumes no responsibility for any errors contained herein. NVIDIA shall have no liability for the consequences or use of such information or for any infringement of patents or other rights of third parties that may result from its use. This document is not a commitment to develop, release, or deliver any Material (defined below), code, or functionality.

NVIDIA reserves the right to make corrections, modifications, enhancements, improvements, and any other changes to this document, at any time without notice.

Customer should obtain the latest relevant information before placing orders and should verify that such information is current and complete.

NVIDIA products are sold subject to the NVIDIA standard terms and conditions of sale supplied at the time of order acknowledgement, unless otherwise agreed in an individual sales agreement signed by authorized representatives of NVIDIA and customer (“Terms of Sale”). NVIDIA hereby expressly objects to applying any customer general terms and conditions with regards to the purchase of the NVIDIA product referenced in this document. No contractual obligations are formed either directly or indirectly by this document.

NVIDIA products are not designed, authorized, or warranted to be suitable for use in medical, military, aircraft, space, or life support equipment, nor in applications where failure or malfunction of the NVIDIA product can reasonably be expected to result in personal injury, death, or property or environmental damage. NVIDIA accepts no liability for inclusion and/or use of NVIDIA products in such equipment or applications and therefore such inclusion and/or use is at customer’s own risk.

NVIDIA makes no representation or warranty that products based on this document will be suitable for any specified use. Testing of all parameters of each product is not necessarily performed by NVIDIA. It is customer’s sole responsibility to evaluate and determine the applicability of any information contained in this document, ensure the product is suitable and fit for the application planned by customer, and perform the necessary testing for the application in order to avoid a default of the application or the product. Weaknesses in customer’s product designs may affect the quality and reliability of the NVIDIA product and may result in additional or different conditions and/or requirements beyond those contained in this document. NVIDIA accepts no liability related to any default, damage, costs, or problem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this document or (ii) customer product designs.

No license, either expressed or implied, is granted under any NVIDIA patent right, copyright, or other NVIDIA intellectual property right under this document. Information published by NVIDIA regarding third-party products or services does not constitute a license from NVIDIA to use such products or services or a warranty or endorsement thereof. Use of such information may require a license from a third party under the patents or other intellectual property rights of the third party, or a license from NVIDIA under the patents or other intellectual property rights of NVIDIA.

Reproduction of information in this document is permissible only if approved in advance by NVIDIA in writing, reproduced without alteration and in full compliance with all applicable export laws and regulations, and accompanied by all associated conditions, limitations, and notices.

THIS DOCUMENT AND ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, “MATERIALS”) ARE BEING PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE. TO THE EXTENT NOT PROHIBITED BY LAW, IN NO EVENT WILL NVIDIA BE LIABLE FOR ANY DAMAGES, INCLUDING WITHOUT LIMITATION ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, PUNITIVE, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARISING OUT OF ANY USE OF THIS DOCUMENT, EVEN IF NVIDIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. Notwithstanding any damages that customer might incur for any reason whatsoever, NVIDIA’s aggregate and cumulative liability towards customer for the products described herein shall be limited in accordance with the Terms of Sale for the product.

Arm

Arm, AMBA and Arm Powered are registered trademarks of Arm Limited. Cortex, MPCore and Mali are trademarks of Arm Limited. "Arm" is used to represent Arm Holdings plc; its operating company Arm Limited; and the regional subsidiaries Arm Inc.; Arm KK; Arm Korea Limited.; Arm Taiwan Limited; Arm France SAS; Arm Consulting (Shanghai) Co. Ltd.; Arm Germany GmbH; Arm Embedded Technologies Pvt. Ltd.; Arm Norway, AS and Arm Sweden AB.

Trademarks

NVIDIA, the NVIDIA logo, and CUDA, DRIVE, JetPack, Kepler, Maxwell, Pascal, Turing, Volta and Xavier are trademarks and/or registered trademarks of NVIDIA Corporation in the United States and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.