cuquantum.contract

- cuquantum.contract(subscripts, *operands, qualifiers=None, options=None, optimize=None, stream=None, return_info=False)
Evaluate the Einstein summation convention on the operands.
Both the explicit and the implicit form are supported for the Einstein summation expression. In addition to the subscript format, the interleaved format is also supported as a means of specifying the operands and their mode labels. See `Network` for more detail on the types of operands as well as for examples.

- Parameters
  - **subscripts** – The mode labels (subscripts) defining the Einstein summation expression as a comma-separated sequence of characters. Unicode characters are allowed in the expression, thereby expanding the size of the tensor network that can be specified using the Einstein summation convention.
  - **operands** – A sequence of tensors (ndarray-like objects). The currently supported types are `numpy.ndarray`, `cupy.ndarray`, and `torch.Tensor`.
  - **qualifiers** – Specify the tensor qualifiers as a `numpy.ndarray` of `tensor_qualifiers_dtype` objects, of length equal to the number of operands.
  - **options** – Specify options for the tensor network as a `NetworkOptions` object. Alternatively, a `dict` containing the parameters for the `NetworkOptions` constructor can also be provided. If not specified, the value will be set to the default-constructed `NetworkOptions` object.
  - **optimize** – This parameter specifies options for path optimization as an `OptimizerOptions` object. Alternatively, a dictionary containing the parameters for the `OptimizerOptions` constructor can also be provided. If not specified, the value will be set to the default-constructed `OptimizerOptions` object.
  - **stream** – Provide the CUDA stream to use for the contraction operation. Acceptable inputs include `cudaStream_t` (as Python `int`), `cupy.cuda.Stream`, and `torch.cuda.Stream`. If a stream is not provided, the current stream will be used.
  - **return_info** – If `True`, information about the best contraction order will also be returned.
- Returns
  If `return_info` is `False`, the output tensor (ndarray-like object) of the same type and on the same device as the operands, containing the result of the contraction; otherwise, a 2-tuple consisting of the output tensor and an `OptimizerInfo` object that contains information about the best contraction order, etc.
Note
It is encouraged for users to maintain the library handle themselves so as to reduce the context initialization time:

```python
from cuquantum import cutensornet as cutn
from cuquantum import contract, NetworkOptions

handle = cutn.create()
network_opts = NetworkOptions(handle=handle, ...)
out = contract(..., options=network_opts, ...)
# ... the same handle can be reused for further calls ...
# when it's done, remember to destroy the handle
cutn.destroy(handle)
```
Examples
Use NumPy operands:
```python
>>> from cuquantum import contract
>>> import numpy as np
>>> a = np.arange(6.).reshape(3, 2)
>>> b = np.arange(6.).reshape(2, 3)
```
Perform matrix multiplication using the explicit form. The result `r` is a NumPy ndarray (with the computation performed on the GPU):

```python
>>> r = contract('ij,jk->ik', a, b)
```
Implicit form:
```python
>>> r = contract('ij,jk', a, b)
```
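In the implicit form, the repeated mode labels are summed over and the remaining free modes form the output in sorted order, following the usual einsum convention. As a NumPy-only sketch of the same semantics (using `numpy.einsum` as a stand-in; `contract` produces the same values for these operands):

```python
import numpy as np

a = np.arange(6.).reshape(3, 2)
b = np.arange(6.).reshape(2, 3)

# 'ij,jk' in implicit form: the repeated mode 'j' is summed over, and
# the free modes 'i' and 'k' form the output in sorted order, so this
# is equivalent to the explicit 'ij,jk->ik' (a matrix product).
r = np.einsum('ij,jk', a, b)
assert r.shape == (3, 3)
assert np.allclose(r, a @ b)
```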
Interleaved format using characters for mode labels:
```python
>>> r = contract(a, ['i', 'j'], b, ['j', 'k'], ['i', 'k'])
```
Interleaved format using string labels for mode labels and implicit form:
```python
>>> r = contract(a, ['first', 'second'], b, ['second', 'third'])
```
Interleaved format using integer mode labels and explicit form:
```python
>>> r = contract(a, [1, 2], b, [2, 3], [1, 3])
```
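The interleaved calling convention mirrors the interleaved format of `numpy.einsum`, so the semantics of the example above can be checked with NumPy alone (a sketch that does not use cuquantum):

```python
import numpy as np

a = np.arange(6.).reshape(3, 2)
b = np.arange(6.).reshape(2, 3)

# Interleaved format: each operand is followed by its mode-label list,
# and a trailing label list selects the output modes (explicit form).
r = np.einsum(a, [1, 2], b, [2, 3], [1, 3])
assert np.allclose(r, a @ b)
```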
Obtain information `i` on the best contraction path along with the result `r`:

```python
>>> r, i = contract('ij,jk', a, b, return_info=True)
```
Provide options for the tensor network:
```python
>>> from cuquantum import NetworkOptions
>>> n = NetworkOptions(device_id=1)
>>> r = contract('ij,jk->ik', a, b, options=n)
```
Alternatively, the options can be provided as a `dict` instead of a `NetworkOptions` object:

```python
>>> r = contract('ij,jk->ik', a, b, options={'device_id': 1})
```
Specify options for the optimizer:
```python
>>> from cuquantum import OptimizerOptions, PathFinderOptions
>>> p = PathFinderOptions(imbalance_factor=230, cutoff_size=8)
>>> o = OptimizerOptions(path=p, seed=123)
>>> r = contract('ij,jk,kl', a, b, a, optimize=o)
```
Alternatively, the options above can be provided as a dict:
```python
>>> r = contract('ij,jk,kl', a, b, a, optimize={'path': {'imbalance_factor': 230, 'cutoff_size': 8}, 'seed': 123})
```
Specify the path directly:
```python
>>> o = OptimizerOptions(path=[(0, 2), (0, 1)])
>>> r = contract('ij,jk,kl', a, b, a, optimize=o)
```
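Each pair in the path names the positions, within the current operand list, of the two tensors to contract next; the intermediate is appended to the end of the list (the standard einsum-path convention). A NumPy-only sketch of what the path `[(0, 2), (0, 1)]` computes for `'ij,jk,kl'` over `(a, b, a)`, assuming that convention:

```python
import numpy as np

a = np.arange(6.).reshape(3, 2)
b = np.arange(6.).reshape(2, 3)

# Operand list: [a ('ij'), b ('jk'), a ('kl')]
# Pair (0, 2): contract positions 0 and 2 -> outer product 'ij,kl->ijkl';
# the remaining list becomes [b ('jk'), t ('ijkl')].
t = np.einsum('ij,kl->ijkl', a, a)
# Pair (0, 1): contract the two remaining tensors -> 'il'.
r = np.einsum('ijkl,jk->il', t, b)
assert np.allclose(r, a @ b @ a)
```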
Perform elementwise multiplication \(a \odot b^T\) using the ellipsis shorthand notation:
```python
>>> r = contract('...,...', a, b.T)
```
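The `'...,...'` expression broadcasts the two operands against each other without summing any mode. With NumPy operands, `numpy.einsum` implements the same ellipsis convention, so the result can be sketched as:

```python
import numpy as np

a = np.arange(6.).reshape(3, 2)
b = np.arange(6.).reshape(2, 3)

# '...,...' in implicit form: the ellipses broadcast against each other
# and no mode is summed, giving the elementwise (Hadamard) product.
r = np.einsum('...,...', a, b.T)
assert np.allclose(r, a * b.T)
```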
Obtain the double inner product \(a : b^T\) (Frobenius inner product for real-valued tensors) using the ellipsis shorthand notation:
```python
>>> r = contract('...,...->', a, b.T)
```
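For real-valued operands, the double inner product equals the sum of the elementwise product of the two tensors. A NumPy-only check of the same value, written with explicit labels (since contracting every mode pairwise is what `'...,...->'` expresses here):

```python
import numpy as np

a = np.arange(6.).reshape(3, 2)
b = np.arange(6.).reshape(2, 3)

# The double inner product a : b.T contracts both modes pairwise:
# sum_ij a[i, j] * b.T[i, j], i.e. the Frobenius inner product.
frob = np.einsum('ij,ij->', a, b.T)
assert np.isclose(frob, (a * b.T).sum())
```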
Use CuPy operands. The result `r` is a CuPy ndarray on the same device as the operands, and `dev` is any valid device ID on your system that you wish to use to store the tensors and compute the contraction:

```python
>>> import cupy
>>> dev = 0
>>> with cupy.cuda.Device(dev):
...     a = cupy.arange(6.).reshape(3, 2)
...     b = cupy.arange(6.).reshape(2, 3)
>>> r = contract('ij,jk', a, b)
```
Use PyTorch operands. The result `r` is a PyTorch tensor on the same device (`dev`) as the operands:

```python
>>> import torch
>>> dev = 0
>>> a = torch.arange(6., device=f'cuda:{dev}').reshape(3, 2)
>>> b = torch.arange(6., device=f'cuda:{dev}').reshape(2, 3)
>>> r = contract('ij,jk', a, b)
```