************************************************************************
cuTensorNet: A High-Performance Library for Tensor Network Computations
************************************************************************

Welcome to the cuTensorNet library documentation!

**NVIDIA cuTensorNet** is a high-performance library for tensor network computations and a component of the :doc:`NVIDIA cuQuantum SDK <../index>`.
The functionalities of *cuTensorNet* are described in the :doc:`Overview <overview>`, and an installation and usage guide is provided in :doc:`Getting Started <../getting-started/index>`.

.. TODO: mention what cuTensorNet can be used for in addition to quantum circuit simulations.
   We don't want to limit ourselves to quantum information science. Tensor networks can be used in many areas beyond that.

.. topic:: Key Features

   * Based on NVIDIA's high-performance tensor algebra library: `cuTENSOR <https://developer.nvidia.com/cutensor>`_
   * Provides APIs for:

     - Creating a tensor or a tensor network
     - Finding a cost-optimal tensor network contraction path for any given tensor network
     - Finding a low-overhead slicing of the tensor network contraction that meets specified memory constraints
     - Tuning the tensor network contraction path finder configuration for better performance
     - Generating a tensor network contraction plan, auto-tuning it, and subsequently executing it (a minimal contraction sketch is shown at the end of this page)
     - Gradually constructing a tensor circuit state (e.g., a quantum circuit state) and then computing its properties, including arbitrary slices of amplitudes, expectation values, marginal distributions (reduced density matrices), and projections onto the matrix product state (MPS) space, as well as performing direct sampling of the defined tensor circuit state
     - Compressing the tensor circuit state into the matrix product state (MPS) format
     - Performing backward differentiation (back-propagation) of the tensor network contraction, that is, computing the adjoints of the user-specified input tensors given the adjoint of the output tensor
     - Performing tensor decomposition using QR or SVD
     - Applying a tensor gate (quantum gate) to a pair of connected (contracted) tensors
     - Enabling automatic distributed parallelization in the contraction path finder and executor
     - Enabling custom memory management
     - Logging

.. topic:: Support

   * *Supported GPU Architectures*: ``Turing``, ``Ampere``, ``Ada``, ``Hopper``, ``Blackwell``
   * *Supported OS*: ``Linux``
   * *Supported CPU Architectures*: ``x86_64``, ``ARM64``

.. topic:: Prerequisites

   * One of the following CUDA Toolkits and a compatible driver are required:

     .. list-table::
        :widths: 25 50
        :header-rows: 1

        * - CUDA Toolkit
          - Minimum Required Linux Driver Version
        * - `CUDA® 12.x <https://developer.nvidia.com/cuda-toolkit-archive>`_
          - >= 525.60.13
        * - `CUDA® 13.x <https://developer.nvidia.com/cuda-toolkit-archive>`_
          - >= 580.65.06

     Please refer to the `CUDA Toolkit Release Notes <https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html>`_ for details.

   * `cuTENSOR v2.3.1 <https://docs.nvidia.com/cuda/cutensor/index.html>`_.

.. toctree::
   :caption: Contents
   :maxdepth: 2

   release-notes
   overview
   examples
   api/index
   acknowledgements
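
As a quick illustration of the contraction functionality listed in the key features above, the following is a minimal sketch that contracts a small tensor network through the cuQuantum Python package, whose einsum-style ``contract`` function is backed by cuTensorNet. The package import, function call, and tensor shapes shown here are illustrative assumptions and are not taken from this page.

.. code-block:: python

   # Minimal sketch (assumption): contract a three-tensor chain A_ij B_jk C_kl -> D_il
   # using the cuQuantum Python package, which dispatches to cuTensorNet.
   import numpy as np
   from cuquantum import contract  # assumes cuQuantum Python is installed

   a = np.random.rand(8, 8)
   b = np.random.rand(8, 8)
   c = np.random.rand(8, 8)

   # Contraction path finding, planning, and execution are handled internally.
   d = contract("ij,jk,kl->il", a, b, c)
   print(d.shape)  # (8, 8)

The path-finding, slicing, and plan auto-tuning steps listed in the key features can also be controlled individually through the C API documented under the API reference in the table of contents.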