Tensor Operations#

Overview#

The tensor module nvmath.tensor in nvmath-python provides APIs for tensor operations powered by the high-performance NVIDIA cuTENSOR library. We currently offer binary and ternary contraction APIs supporting the CUDA execution space.

For contracting an entire tensor network, refer to the Network API from the cuQuantum library. While network contraction can also be used for binary and ternary contractions, its focus is on the optimal contraction of a whole tensor network, so not all options pertinent to each pairwise contraction are exposed to the user. In contrast, the generalized binary \(\alpha \; a \cdot b + \beta \; c\) and ternary \(\alpha \; a \cdot b \cdot c + \beta \; d\) contraction operations (where \(\cdot\) denotes tensor contraction) in this module are fused, and support options specific to the efficient execution of these operations.
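To make the semantics of the fused formulas concrete, here is a CPU-only sketch using NumPy's einsum (a reference computation, not the nvmath-python API); the operand names and shapes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 2.0, 0.5

a = rng.random((4, 3))
b = rng.random((3, 5))
c = rng.random((5, 6))
d = rng.random((4, 6))

# Binary contraction alpha * (a . b) + beta * c_add, where c_add must
# match the output shape of the contraction a . b.
c_add = rng.random((4, 5))
binary = alpha * np.einsum("ij,jk->ik", a, b) + beta * c_add

# Ternary contraction alpha * (a . b . c) + beta * d, evaluated as one
# fused expression rather than two separate pairwise contractions.
ternary = alpha * np.einsum("ij,jk,kl->il", a, b, c) + beta * d
```

The fused form avoids materializing the intermediate of \(a \cdot b\) as a separate full-precision tensor, which is what the module's ternary API is designed to exploit.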

import cupy as cp
from cupyx.profiler import benchmark

import nvmath

a = cp.random.rand(64, 8, 8, 6, 6)
b = cp.random.rand(64, 8, 8, 6, 6)

# Create a stateful BinaryContraction object 'contraction'.
with nvmath.tensor.BinaryContraction("pijkl,pjiab->lakbp", a, b) as contraction:
    # Get the handle to the plan preference object
    plan_preference = contraction.plan_preference
    # Update the kernel rank to the third-best kernel for the underlying algorithm
    plan_preference.kernel_rank = 2

    for algo in (
        nvmath.tensor.ContractionAlgo.DEFAULT_PATIENT,
        nvmath.tensor.ContractionAlgo.GETT,
        nvmath.tensor.ContractionAlgo.TGETT,
        nvmath.tensor.ContractionAlgo.TTGT,
        nvmath.tensor.ContractionAlgo.DEFAULT,
    ):
        print(f"Algorithm: {algo.name}")
        plan_preference.algo = algo
        # Plan the Contraction to activate the updated plan preference
        contraction.plan()
        print(benchmark(contraction.execute, n_repeat=20))
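For reference, the result of the expression used above can be checked against NumPy's einsum on the CPU: the mode `p` is batched, `i` and `j` are contracted, and the remaining modes `l`, `a`, `k`, `b` are permuted into the output.

```python
import numpy as np

a = np.random.rand(64, 8, 8, 6, 6)
b = np.random.rand(64, 8, 8, 6, 6)

# p is a batch mode, i and j are contracted; l, a, k, b survive.
out = np.einsum("pijkl,pjiab->lakbp", a, b)
print(out.shape)  # (6, 6, 6, 6, 64)
```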

More examples of tensor operations can be found on our GitHub repository.

Host API Reference#

Tensor Operations (nvmath.tensor)#

binary_contraction(expr, a, b, *[, c, ...])

Evaluate the Einstein summation convention for binary contraction on the operands.

ternary_contraction(expr, a, b, c, *[, d, ...])

Evaluate the Einstein summation convention for ternary contraction on the operands.

tensor_qualifiers_dtype

alias of int32

BinaryContraction(expr, a, b, *[, c, out, ...])

Create a stateful object encapsulating the specified binary tensor contraction \(\alpha a @ b + \beta c\) and the required resources to perform the operation.

TernaryContraction(expr, a, b, c, *[, d, ...])

Create a stateful object encapsulating the specified ternary tensor contraction \(\alpha a @ b @ c + \beta d\) and the required resources to perform the operation.

ContractionAlgo

alias of Algo

ContractionAutotuneMode

alias of AutotuneMode

ContractionJitMode

alias of JitMode

ContractionCacheMode

alias of CacheMode

ComputeDesc()

See cutensorComputeDescriptor_t.

ContractionPlanPreference(contraction)

An interface to configure nvmath.tensor.BinaryContraction.plan() and nvmath.tensor.TernaryContraction.plan().

Operator(value[, names, module, qualname, ...])

See cutensorOperator_t.

ContractionOptions([compute_type, logger, ...])

A data class for providing options to the BinaryContraction and TernaryContraction objects, or the wrapper functions binary_contraction() and ternary_contraction().

ExecutionCUDA([device_id])

A data class for providing GPU execution options to the BinaryContraction and TernaryContraction objects, or the wrapper functions binary_contraction() and ternary_contraction().