cuquantum.contract
- cuquantum.contract(subscripts, *operands, qualifiers=None, options=None, optimize=None, stream=None, return_info=False)
Evaluate the Einstein summation convention on the operands.
Explicit as well as implicit form is supported for the Einstein summation expression. In addition to the subscript format, the interleaved format is also supported as a means of specifying the operands and their mode labels. See Network for more detail on the types of operands as well as for examples.

- Parameters
  - subscripts – The mode labels (subscripts) defining the Einstein summation expression as a comma-separated sequence of characters. Unicode characters are allowed in the expression, thereby expanding the size of the tensor network that can be specified using the Einstein summation convention.
  - operands – A sequence of tensors (ndarray-like objects). The currently supported types are numpy.ndarray, cupy.ndarray, and torch.Tensor.
  - qualifiers – Specify the tensor qualifiers as a numpy.ndarray of tensor_qualifiers_dtype objects of length equal to the number of operands.
  - options – Specify options for the tensor network as a NetworkOptions object. Alternatively, a dict containing the parameters for the NetworkOptions constructor can also be provided. If not specified, the value will be set to the default-constructed NetworkOptions object.
  - optimize – This parameter specifies options for path optimization as an OptimizerOptions object. Alternatively, a dictionary containing the parameters for the OptimizerOptions constructor can also be provided. If not specified, the value will be set to the default-constructed OptimizerOptions object.
object.stream – Provide the CUDA stream to use for the autotuning operation. Acceptable inputs include
cudaStream_t
(as Pythonint
),cupy.cuda.Stream
, andtorch.cuda.Stream
. If a stream is not provided, the current stream will be used.return_info – If true, information about the best contraction order will also be returned.
- Returns
  If return_info is False, the output tensor (ndarray-like object) of the same type and on the same device as the operands containing the result of the contraction; otherwise, a 2-tuple consisting of the output tensor and an OptimizerInfo object that contains information about the best contraction order etc.
Note
It is encouraged for users to maintain the library handle themselves so as to reduce the context initialization time:
from cuquantum import cutensornet as cutn
from cuquantum import contract, NetworkOptions

handle = cutn.create()
network_opts = NetworkOptions(handle=handle, ...)
out = contract(..., options=network_opts, ...)
# ... the same handle can be reused for further calls ...
# when it's done, remember to destroy the handle
cutn.destroy(handle)
Examples
Use NumPy operands:
>>> from cuquantum import contract
>>> import numpy as np
>>> a = np.arange(6.).reshape(3, 2)
>>> b = np.arange(6.).reshape(2, 3)
Perform matrix multiplication using the explicit form. The result r is a NumPy ndarray (with the computation performed on the GPU):

>>> r = contract('ij,jk->ik', a, b)
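As a quick consistency check, the result agrees with NumPy's own einsum:

>>> np.allclose(r, np.einsum('ij,jk->ik', a, b))
True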
Implicit form:
>>> r = contract('ij,jk', a, b)
Interleaved format using characters for mode labels:
>>> r = contract(a, ['i', 'j'], b, ['j', 'k'], ['i', 'k'])
Interleaved format using string labels for mode labels and implicit form:
>>> r = contract(a, ['first', 'second'], b, ['second', 'third'])
Interleaved format using integer mode labels and explicit form:
>>> r = contract(a, [1, 2], b, [2, 3], [1, 3])
Obtain information i on the best contraction path along with the result r:

>>> r, i = contract('ij,jk', a, b, return_info=True)
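The returned object can then be inspected, for example (assuming the path and opt_cost attributes listed in the OptimizerInfo documentation):

>>> i.path        # contraction order as a sequence of operand-index pairs
>>> i.opt_cost    # estimated cost (FLOP count) of the optimized contraction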
Provide options for the tensor network:
>>> from cuquantum import NetworkOptions
>>> n = NetworkOptions(device_id=1)
>>> r = contract('ij,jk->ik', a, b, options=n)
Alternatively, the options can be provided as a dict instead of a NetworkOptions object:

>>> r = contract('ij,jk->ik', a, b, options={'device_id': 1})
Specify options for the optimizer:
>>> from cuquantum import OptimizerOptions, PathFinderOptions
>>> p = PathFinderOptions(imbalance_factor=230, cutoff_size=8)
>>> o = OptimizerOptions(path=p, seed=123)
>>> r = contract('ij,jk,kl', a, b, a, optimize=o)
Alternatively, the options above can be provided as a dict:
>>> r = contract('ij,jk,kl', a, b, a, optimize={'path': {'imbalance_factor': 230, 'cutoff_size': 8}, 'seed': 123})
Specify the path directly:
>>> o = OptimizerOptions(path=[(0, 2), (0, 1)])
>>> r = contract('ij,jk,kl', a, b, a, optimize=o)
Perform elementwise multiplication \(a \odot b^T\) using the ellipsis shorthand notation:
>>> r = contract('...,...', a, b.T)
Obtain the double inner product \(a : b^T\) (Frobenius inner product for real-valued tensors) using the ellipsis shorthand notation:
>>> r = contract('...,...->', a, b.T)
Use CuPy operands. The result r is a CuPy ndarray on the same device as the operands, and dev is any valid device ID on your system that you wish to use to store the tensors and compute the contraction:

>>> import cupy
>>> dev = 0
>>> with cupy.cuda.Device(dev):
...     a = cupy.arange(6.).reshape(3, 2)
...     b = cupy.arange(6.).reshape(2, 3)
>>> r = contract('ij,jk', a, b)
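A CUDA stream can also be passed explicitly via the stream argument described above. A minimal sketch using a CuPy stream (the final synchronization is only needed if the result is consumed on a different stream):

>>> s = cupy.cuda.Stream()
>>> r = contract('ij,jk', a, b, stream=s)
>>> s.synchronize()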
For PyTorch operands, this function works like a native PyTorch operator out of the box and is tracked by the autograd graph so as to enable backpropagation. The result r is a PyTorch tensor on the same device (dev) as the operands. To enable gradient computation, just set the target operands’ requires_grad attribute to True, as usual. If stream is explicitly passed, the user must establish the stream ordering following the requirements outlined in PyTorch’s CUDA Semantics.

>>> import torch
>>> dev = 0
>>> a = torch.arange(6., device=f'cuda:{dev}').reshape(3, 2)
>>> a.requires_grad_(True)
>>> b = torch.arange(6., device=f'cuda:{dev}').reshape(2, 3)
>>> b.requires_grad_(True)
>>> r = contract('ij,jk', a, b)
>>> r.backward(torch.ones_like(r))  # gradient w.r.t. self is 1
>>> a.grad
tensor([[ 3., 12.],
        [ 3., 12.],
        [ 3., 12.]], device='cuda:0')
>>> b.grad
tensor([[6., 6., 6.],
        [9., 9., 9.]], device='cuda:0')