binary_contraction

nvmath.tensor.binary_contraction(
expr,
a,
b,
*,
c=None,
alpha=1.0,
beta=None,
out=None,
qualifiers=None,
stream=None,
options=None,
execution=None,
)

Evaluate the Einstein summation convention for binary contraction on the operands.

Both the explicit and implicit forms of the Einstein summation expression are supported (see the implicit-form sketch at the end of the Examples below).

The binary contraction can also be fused with a third operand, which is scaled and added to the contraction result.

This function-form is a wrapper around the stateful BinaryContraction object APIs and is meant for single use (the user needs to perform just one binary contraction, for example), in which case there is no possibility of amortizing preparatory costs.
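When the same contraction must be executed repeatedly, the stateful API allows the preparatory costs to be amortized across executions. A minimal sketch, assuming BinaryContraction follows the context-manager and plan/execute pattern of nvmath's other stateful APIs:

>>> import cupy as cp
>>> import nvmath
>>> a = cp.random.rand(8, 8, 4, 4, dtype=cp.float32)
>>> b = cp.random.rand(4, 4, 4, 4, dtype=cp.float32)
>>> with nvmath.tensor.BinaryContraction("ijab,abcd->ijcd", a, b) as ctn:
...     ctn.plan()  # preparatory cost paid once (pattern assumed from other nvmath stateful APIs)
...     r = ctn.execute()  # execution can then be repeated without re-planning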

Detailed information on what’s happening within this function can be obtained by passing in a logging.Logger object to ContractionOptions or by setting the appropriate options in the root logger object, which is used by default:

>>> import logging
>>> logging.basicConfig(
...     level=logging.INFO,
...     format="%(asctime)s %(levelname)-8s %(message)s",
...     datefmt="%m-%d %H:%M:%S",
... )

A user can select the desired logging level and, in general, take advantage of all of the functionality offered by the Python logging module.
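A dedicated logger can also be attached to a single call through the options; a short sketch, assuming ContractionOptions exposes a logger field like other nvmath options classes:

>>> import logging
>>> logger = logging.getLogger("binary_contraction_demo")
>>> logger.setLevel(logging.DEBUG)
>>> o = nvmath.tensor.ContractionOptions(logger=logger)  # the logger field name is an assumption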

Parameters:
  • expr – The einsum expression specifying the contraction to perform.

  • a – A tensor representing the first operand to the tensor contraction. The currently supported types are numpy.ndarray, cupy.ndarray, and torch.Tensor.

  • b – A tensor representing the second operand to the tensor contraction. The currently supported types are numpy.ndarray, cupy.ndarray, and torch.Tensor.

  • c – (Optional) A tensor representing the operand to add to the tensor contraction result (fused operation in cuTENSOR). The currently supported types are numpy.ndarray, cupy.ndarray, and torch.Tensor.

  • alpha – The scale factor for the tensor contraction term as a real or complex number. The default is \(1.0\).

  • beta – The scale factor for the tensor addition term as a real or complex number. A value for beta must be provided if the operand to be added is specified.

  • out – (Optional) The output tensor to store the result of the contraction. Must be a numpy.ndarray, cupy.ndarray, or torch.Tensor object and must be on the same device as the input operands. If not specified, the result will be returned on the same device as the input operands.

    Note: Support for the output tensor in this API is experimental and subject to change in future versions without prior notice.

  • qualifiers – If desired, specify the per-operand operators as a numpy.ndarray of dtype tensor_qualifiers_dtype with the same length as the number of operands in the contraction expression plus one (for the operand to be added). All elements must be valid Operator objects. See Matrix and Tensor Qualifiers for the motivation behind qualifiers.

  • stream – Provide the CUDA stream to use for executing the operation. Acceptable inputs include cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream from the operand package will be used.

  • options – Specify options for the tensor contraction as a ContractionOptions object. Alternatively, a dict containing the parameters for the ContractionOptions constructor can also be provided. If not specified, the value will be set to the default-constructed ContractionOptions object.

  • execution – Specify execution space options for the tensor contraction as an ExecutionCUDA object or the string ‘cuda’. Alternatively, a dict with the ‘name’ key set to ‘cuda’ and the additional parameters for the ExecutionCUDA constructor can also be provided. If not provided, the execution space is selected to match the operands’ storage when the operands are on the GPU; if the operands are on the CPU, a default-constructed ExecutionCUDA object with device_id = 0 is used (see the sketch after this list).
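As a sketch of the execution option in the dict form described above, running CPU-resident operands on device 0:

>>> import numpy as np
>>> import nvmath
>>> x = np.random.rand(4, 4)
>>> y = np.random.rand(4, 4)
>>> r = nvmath.tensor.binary_contraction(
...     "ij,jk->ik", x, y, execution={"name": "cuda", "device_id": 0}
... )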

Returns:

The result of the specified contraction, which remains on the same device and belongs to the same package as the input operands.

See also

BinaryContraction, ternary_contraction(), TernaryContraction, ContractionOptions, ContractionPlanPreferences

For tensor network contractions with an arbitrary number of operands, including contraction path finding, see cuQuantum.

Examples

>>> import cupy as cp
>>> import nvmath

Create three float32 ndarrays on the GPU:

>>> M, N = 32, 64
>>> a = cp.random.rand(M, M, N, N, dtype=cp.float32)
>>> b = cp.random.rand(N, N, N, N, dtype=cp.float32)
>>> c = cp.random.rand(M, M, N, N, dtype=cp.float32)

Perform the operation \(\alpha \sum_{a,b} A[i,j,a,b] \, B[a,b,c,d] + \beta C[i,j,c,d]\) using binary_contraction(). The result r is also a CuPy float32 ndarray:

>>> r = nvmath.tensor.binary_contraction(
...     "ijab,abcd->ijcd", a, b, c=c, alpha=1.23, beta=0.74
... )

The result is equivalent to:

>>> r_ref = 1.23 * cp.einsum("ijab,abcd->ijcd", a, b) + 0.74 * c
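A quick numerical sanity check of this equivalence (the tolerance is a loose assumption for float32 accumulation):

>>> bool(cp.allclose(r, r_ref, rtol=1e-3))
True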

Options can be provided to customize the operation:

>>> compute_type = nvmath.bindings.cutensor.ComputeDesc.COMPUTE_3XTF32()
>>> o = nvmath.tensor.ContractionOptions(compute_type=compute_type)
>>> r = nvmath.tensor.binary_contraction("ijab,abcd->ijcd", a, b, options=o)

See ContractionOptions for the complete list of available options.

The package current stream is used by default, but a stream can be explicitly provided to the binary contraction operation. This can be done if the operands are computed on a different stream, for example:

>>> s = cp.cuda.Stream()
>>> with s:
...     a = cp.random.rand(M, M, N, N)
...     b = cp.random.rand(N, N, N, N)
>>> r = nvmath.tensor.binary_contraction("ijab,abcd->ijcd", a, b, stream=s)

The operation above runs on stream s and is ordered with respect to the input computation.
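If the result is then consumed outside of stream-ordered code (copied to the host, for example), synchronizing the stream first is the safe pattern:

>>> s.synchronize()  # make the result visible to non-stream-ordered code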

Create NumPy ndarrays on the CPU:

>>> import numpy as np
>>> a = np.random.rand(M, M, N, N)
>>> b = np.random.rand(N, N, N, N)

Provide the NumPy ndarrays to binary_contraction(), with the result also being a NumPy ndarray:

>>> r = nvmath.tensor.binary_contraction("ijab,abcd->ijcd", a, b)
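The implicit form, which omits the -> output specification, is also supported. Assuming it follows einsum’s implicit-mode convention, the free indices of the result are sorted alphabetically (cdij here, rather than ijcd):

>>> r = nvmath.tensor.binary_contraction("ijab,abcd", a, b)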

Notes

  • This function is a convenience wrapper around BinaryContraction and is specifically meant for single use.

Further examples can be found in the nvmath/examples/tensor/contraction directory.