cuquantum.contract
cuquantum.contract(subscripts, *operands, options=None, optimize=None, stream=None, return_info=False)
Evaluate the Einstein summation convention on the operands.
Both the explicit and the implicit form of the Einstein summation expression are supported. In addition to the subscript format, the "interleaved" format is also supported as a means of specifying the operands and their modes. See Network for more details on the supported types of operands as well as for examples.

Parameters
- subscripts – The modes (subscripts) for summation as a comma-separated list of characters. Unicode characters are allowed in the expression, thereby expanding the size of the tensor network that can be specified using the Einstein summation convention.
- operands – A sequence of tensors (ndarray-like objects). The currently supported types are numpy.ndarray, cupy.ndarray, and torch.Tensor.
- options – Specify options for the tensor network as a NetworkOptions object. Alternatively, a dict containing the parameters for the NetworkOptions constructor can also be provided. If not specified, the value will be set to the default-constructed NetworkOptions object.
- optimize – This parameter specifies options for path optimization as an OptimizerOptions object. Alternatively, a dictionary containing the parameters for the OptimizerOptions constructor can also be provided. If not specified, the value will be set to the default-constructed OptimizerOptions object.
- stream – Provide the CUDA stream to use for the contraction operation. Acceptable inputs include cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream will be used.
- return_info – If True, information about the best contraction order will also be returned.
Returns

If return_info is False, the output tensor (ndarray-like object) of the same type and on the same device as the operands, containing the result of the contraction; otherwise, a 2-tuple consisting of the output tensor and an OptimizerInfo object that contains information about the best contraction order, etc.
Note
It is encouraged for users to maintain the library handle themselves so as to reduce the context initialization time:
from cuquantum import cutensornet, NetworkOptions, contract

handle = cutensornet.create()
network_opts = NetworkOptions(handle=handle, ...)
out = contract(..., options=network_opts, ...)
# ... the same handle can be reused for further calls ...

# when it's done, remember to destroy the handle
cutensornet.destroy(handle)
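For instance, a minimal sketch of this pattern with small NumPy operands (the operand shapes here are illustrative assumptions, not part of the API) could look like:

import numpy as np
from cuquantum import cutensornet, NetworkOptions, contract

a = np.ones((3, 2))
b = np.ones((2, 3))

# create the cuTensorNet library handle once, up front
handle = cutensornet.create()
opts = NetworkOptions(handle=handle)

# both contractions reuse the same handle through NetworkOptions
r1 = contract('ij,jk->ik', a, b, options=opts)
r2 = contract('ij,jk', a, b, options=opts)

# destroy the handle once all contractions are finished
cutensornet.destroy(handle)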
Examples
Use NumPy operands:
>>> from cuquantum import contract
>>> import numpy as np
>>> a = np.ones((3,2))
>>> b = np.ones((2,3))
Perform matrix multiplication in the explicit form. The result r is a NumPy ndarray (with the computation performed on the GPU):

>>> r = contract('ij,jk->ik', a, b)
Implicit form:
>>> r = contract('ij,jk', a, b)
Interleaved format using characters for modes:
>>> r = contract(a, ['i', 'j'], b, ['j', 'k'], ['i', 'k'])
Interleaved format using string labels for modes, using implicit form:
>>> r = contract(a, ['first', 'second'], b, ['second', 'third'])
Interleaved format using integer modes, using explicit form:
>>> r = contract(a, [1, 2], b, [2, 3], [1, 3])
Obtain information i on the best contraction path along with the result r:

>>> r, i = contract('ij,jk', a, b, return_info=True)
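The returned OptimizerInfo object i can then be inspected; the attribute names used below (path, opt_cost) are assumptions and should be checked against the OptimizerInfo documentation:

>>> i.path      # the chosen pairwise contraction order (assumed attribute)
>>> i.opt_cost  # estimated cost (FLOP count) of that order (assumed attribute)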
Provide options for the tensor network:
>>> from cuquantum import NetworkOptions
>>> n = NetworkOptions(device_id=1)
>>> r = contract('ij,jk->ik', a, b, options=n)
Alternatively, the options can be provided as a dict instead of a NetworkOptions object:

>>> r = contract('ij,jk->ik', a, b, options={'device_id': 1})
Specify options for the optimizer:
>>> from cuquantum import OptimizerOptions, PathFinderOptions
>>> p = PathFinderOptions(imbalance_factor=230, cutoff_size=8)
>>> o = OptimizerOptions(path=p, seed=123)
>>> r = contract('ij,jk,kl', a, b, a, optimize=o)
Alternatively, the options above can be provided as a dict:
>>> r = contract('ij,jk,kl', a, b, a, optimize={'path': {'imbalance_factor': 230, 'cutoff_size': 8}, 'seed': 123})
Specify the path directly:
>>> o = OptimizerOptions(path=[(0, 2), (0, 1)])
>>> r = contract('ij,jk,kl', a, b, a, optimize=o)
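Here the path is interpreted in the same pairwise fashion as numpy.einsum_path: each pair gives the positions of the two operands to contract next, and the resulting intermediate is appended to the end of the remaining operand list. (This description of the convention is provided for orientation; see OptimizerOptions for the authoritative definition.)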
Use CuPy operands. The result r is a CuPy ndarray on the same device as the operands, and dev is any valid device ID on your system that you wish to use to store the tensors and compute the contraction:

>>> import cupy
>>> dev = 0
>>> with cupy.cuda.Device(dev):
...     a = cupy.ones((3,2))
...     b = cupy.ones((2,3))
>>> r = contract('ij,jk', a, b)
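A CUDA stream can also be passed through the stream parameter described above; a minimal sketch with CuPy, reusing the operands a and b and assuming the default device:

>>> s = cupy.cuda.Stream()
>>> r = contract('ij,jk', a, b, stream=s)
>>> s.synchronize()  # make sure the contraction has finished before consuming r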
Use PyTorch operands. The result r is a PyTorch tensor on the same device (dev) as the operands:

>>> import torch
>>> dev = 0
>>> a = torch.ones((3,2), device=f'cuda:{dev}')
>>> b = torch.ones((2,3), device=f'cuda:{dev}')
>>> r = contract('ij,jk', a, b)
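As a quick sanity check (a sketch, not part of the API), the result can be compared against torch.einsum evaluated on the same operands:

>>> expected = torch.einsum('ij,jk', a, b)
>>> torch.allclose(r, expected)
True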