cuTensorNet: A High-Performance Library for Tensor Network Computations

Welcome to the cuTensorNet library documentation!

NVIDIA cuTensorNet is a high-performance library for tensor network computations, provided as a component of the NVIDIA cuQuantum SDK. The functionalities of cuTensorNet are described in the Overview, and an installation and usage guide is provided in Getting Started.

Key Features

  • Based on NVIDIA’s high-performance tensor algebra library: cuTENSOR

  • Provides APIs for:

    • Creating a tensor or tensor network

    • Finding a cost-optimal tensor network contraction path for any given tensor network

    • Finding a low-overhead slicing of the tensor network contraction that meets specified memory constraints

    • Tuning the tensor network contraction path finder configuration for better performance

    • Performing tensor network contraction plan generation, auto-tuning, and execution (see the sketch after this list)

    • Gradually constructing a tensor circuit state (e.g., a quantum circuit state), then computing its properties, including arbitrary slices of amplitudes, expectation values, marginal distributions (reduced density matrices), and projections onto the matrix product state (MPS) space, as well as performing direct sampling of the defined tensor circuit state

    • Compressing the tensor circuit state into the matrix product state (MPS) format

    • Performing backward differentiation (back-propagation) of the tensor network contraction, that is, computing adjoints of the user-specified input tensors, given the adjoint of the output tensor

    • Performing tensor decomposition using QR or SVD

    • Applying a tensor gate (quantum gate) to a pair of connected (contracted) tensors

    • Enabling automatic distributed parallelization in the contraction path finder and executor

    • Enabling custom memory management

    • Logging
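
The contraction-related features above (path finding, plan generation, auto-tuning, and execution) are typically driven from a single einsum-style description of the network. The snippet below is a minimal, illustrative sketch using the cuQuantum Python interface to cuTensorNet; it assumes the cuquantum-python package is installed and is not an excerpt from this documentation. The C API exposes the same workflow through handles, network descriptors, optimizer configurations, and contraction plans.

    # Minimal sketch (assumes the cuquantum-python package): contract a small
    # tensor network through the Python interface to cuTensorNet.
    import numpy as np
    from cuquantum import Network

    # Three input tensors forming the network A_ij B_jk C_kl -> D_il
    a = np.random.rand(8, 8)
    b = np.random.rand(8, 8)
    c = np.random.rand(8, 8)

    with Network("ij,jk,kl->il", a, b, c) as tn:
        path, info = tn.contract_path()  # find a cost-optimal contraction path
        tn.autotune(iterations=5)        # auto-tune the contraction plan
        d = tn.contract()                # execute the planned contraction on the GPU

    print(d.shape)  # (8, 8)

For one-off contractions, the Python package also offers a single-call contract() convenience function that performs these steps internally.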

Support

  • Supported GPU Architectures: Turing, Ampere, Ada, Hopper, Blackwell

  • Supported OS: Linux

  • Supported CPU Architectures: x86_64, ARM64

Prerequisites