cuquantum.Network¶
- class cuquantum.Network(subscripts, *operands, options=None)¶
-
Create a tensor network object specified as an einsum expression.
The Einstein summation convention provides an elegant way of representing many tensor network operations. This object allows the user to invest considerable effort into computing the best contraction path as well as autotuning the contraction upfront for repeated contractions over the same network topology (different input tensors, or "operands", with the same Einstein summation expression). Also see contract_path() and autotune().
For the Einstein summation expression, both the explicit and implicit forms are supported.
In the implicit form, the output indices are inferred from the summation expression and reordered lexicographically. An example is the expression 'ij,jh', for which the output indices are 'hi'. (This corresponds to a matrix multiplication followed by a transpose.)
In the explicit form, output indices can be directly stated following the identifier '->' in the summation expression. An example is the expression 'ij,jh->ih' (which corresponds to a matrix multiplication).
To specify an Einstein summation expression, both the subscript format (as shown above) and the "interleaved" format are supported.
The interleaved format is an alternative way of specifying the operands and their modes as Network(op0, modes0, op1, modes1, ..., [modes_out]), where opN is the N-th operand and modesN is a sequence of hashable objects (strings, integers, etc.) representing the N-th operand's modes.
Ellipsis broadcasting is currently not supported.
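For illustration, the following sketch (with made-up shapes) expresses the same matrix multiplication in both formats; the modes in the interleaved form are given as sequences of hashable objects:
>>> from cuquantum import Network
>>> import numpy as np
>>> a = np.random.rand(3, 4)
>>> b = np.random.rand(4, 5)
>>> with Network('ij,jh->ih', a, b) as n:  # subscript format
...     path, info = n.contract_path()
...     r1 = n.contract()
>>> with Network(a, ['i', 'j'], b, ['j', 'h'], ['i', 'h']) as n:  # interleaved format
...     path, info = n.contract_path()
...     r2 = n.contract()
>>> np.allclose(r1, r2)
True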
Additional information on various operations on the network can be obtained by passing in a logging.Logger object to NetworkOptions or by setting the appropriate options in the root logger object, which is used by default:
import logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s %(levelname)-8s %(message)s', datefmt='%m-%d %H:%M:%S')
- Parameters
-
subscripts – The modes (subscripts) for summation as a comma-separated list of characters. Unicode characters are allowed in the expression, thereby expanding the size of the tensor network that can be specified using the Einstein summation convention.
operands – A sequence of tensors (ndarray-like objects). The currently supported types are numpy.ndarray, cupy.ndarray, and torch.Tensor.
options – Specify options for the tensor network as a NetworkOptions object. Alternatively, a dict containing the parameters for the NetworkOptions constructor can also be provided. If not specified, the value will be set to the default-constructed NetworkOptions object.
Note
In this release, only the classical Einstein summation is supported – an index (mode) must appear exactly once or twice. An index that appears twice represents an inner product on that dimension. If an index appears once, it must appear in the output.
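For instance, Unicode mode labels are accepted, and each mode must obey the classical rule above; a minimal sketch with made-up shapes:
>>> import numpy as np
>>> from cuquantum import Network
>>> n = Network('αβ,βγ->αγ', np.ones((2, 3)), np.ones((3, 4)))  # 'β' appears twice (contracted); 'α' and 'γ' appear once and are in the output
>>> n.free()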
Examples
>>> from cuquantum import Network
>>> import numpy as np
Define the parameters of the tensor network:
>>> expr = 'ehl,gj,edhg,bif,d,c,k,iklj,cf,a->ba'
>>> shapes = [(8, 2, 5), (5, 7), (8, 8, 2, 5), (8, 6, 3), (8,), (6,), (5,), (6, 5, 5, 7), (6, 3), (3,)]
Create the input tensors using NumPy:
>>> operands = [np.random.rand(*shape) for shape in shapes]
Create a Network object:
>>> n = Network(expr, *operands)
Find the best contraction order:
>>> path, info = n.contract_path({'samples': 500})
Autotune the network:
>>> n.autotune(iterations=5)
Perform the contraction. The result is of the same type and on the same device as the operands:
>>> r1 = n.contract()
Reset operands to new values:
>>> operands = [i*operand for i, operand in enumerate(operands, start=1)]
>>> n.reset_operands(*operands)
Get the result of the new contraction:
>>> r2 = n.contract()
>>> from math import factorial
>>> np.allclose(r2, factorial(len(operands))*r1)
True
Finally, free network resources. If this call isn't made, it may hinder further operations (especially if the network is large) since it causes a memory leak. (To avoid having to explicitly make this call, it is recommended to use the Network object as a context manager.)
>>> n.free()
If the operands are on the GPU, they can also be updated using in-place operations. In this case, the call to reset_operands() can be skipped – subsequent contract() calls will use the same operands (with updated contents). The following example illustrates this using CuPy operands and also demonstrates the usage of a Network context (so as to skip calling free()):
>>> import cupy as cp
>>> expr = 'ehl,gj,edhg,bif,d,c,k,iklj,cf,a->ba'
>>> shapes = [(8, 2, 5), (5, 7), (8, 8, 2, 5), (8, 6, 3), (8,), (6,), (5,), (6, 5, 5, 7), (6, 3), (3,)]
>>> operands = [cp.random.rand(*shape) for shape in shapes]
>>>
>>> with Network(expr, *operands) as n:
...     path, info = n.contract_path({'samples': 500})
...     n.autotune(iterations=5)
...
...     # Perform the contraction
...     r1 = n.contract()
...
...     # Update the operands in place
...     for i, operand in enumerate(operands, start=1):
...         operand *= i
...
...     # Perform the contraction with the updated operand values
...     r2 = n.contract()
...
...     # The resources used by the network are automatically released when the context ends.
>>>
>>> from math import factorial
>>> cp.allclose(r2, factorial(len(operands))*r1)
array(True)
PyTorch CPU and GPU tensors can be passed as input operands in the same fashion.
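For instance, a minimal sketch of the same workflow with PyTorch GPU tensors (assuming a CUDA-enabled PyTorch build, with expr and shapes as above):
>>> import torch
>>> operands = [torch.rand(*shape, device='cuda') for shape in shapes]
>>> with Network(expr, *operands) as n:
...     path, info = n.contract_path({'samples': 500})
...     r = n.contract()  # a torch.Tensor on the same GPU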
See contract() for more examples on specifying the Einstein summation expression as well as specifying options for the tensor network and the optimizer.
Methods
- __init__(subscripts, *operands, options=None)¶
- autotune(*, iterations=3, stream=None)¶
-
Autotune the network to reduce the contraction cost.
This is an optional step that is recommended if the Network object is used to perform multiple contractions.
- Parameters
-
iterations – The number of iterations for autotuning. See CUTENSORNET_CONTRACTION_AUTOTUNE_MAX_ITERATIONS.
stream – Provide the CUDA stream to use for the autotuning operation. Acceptable inputs include cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream will be used.
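For example, a brief sketch of autotuning on an explicitly created CuPy stream (with n constructed from CuPy operands as in the examples above):
>>> import cupy as cp
>>> s = cp.cuda.Stream()
>>> n.autotune(iterations=5, stream=s)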
- contract(*, stream=None)¶
-
Contract the network and return the result.
- Parameters
-
stream – Provide the CUDA stream to use for the contraction operation. Acceptable inputs include cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream will be used.
- Returns
-
The result is of the same type and on the same device as the operands.
- contract_path(optimize=None)¶
-
Compute the best contraction path together with any slicing that is needed to ensure that the contraction can be performed within the specified memory limit.
- Parameters
-
optimize – This parameter specifies options for path optimization as an OptimizerOptions object. Alternatively, a dictionary containing the parameters for the OptimizerOptions constructor can also be provided. If not specified, the value will be set to the default-constructed OptimizerOptions object.
- Returns
-
A 2-tuple (path, opt_info):
path: A sequence of pairs of operand indices representing the best contraction order in the numpy.einsum_path() format.
opt_info: An object of type OptimizerInfo containing information about the best contraction order.
- Return type
-
tuple
Notes
If the path is provided, the user must also set the sliced modes if slicing is desired.
If the path or sliced modes are provided, the metrics in OptimizerInfo may not be correct.
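As an equivalent sketch of the dict form used in the examples above, the optimizer options can also be constructed explicitly:
>>> from cuquantum import OptimizerOptions
>>> opts = OptimizerOptions(samples=500)
>>> path, opt_info = n.contract_path(opts)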
- free()¶
-
Free network resources.
It is recommended that the Network object be used within a context, but if that is not possible then this method must be called explicitly to ensure that the network resources are properly cleaned up.
- reset_operands(*operands)¶
-
Reset the operands held by this Network instance.
This method is not needed when the operands reside on the GPU and in-place operations are used to update the operand values.
This method will perform various checks on the new operands to make sure that:
The shapes, strides, and datatypes match those of the old ones.
If the input tensors are on the GPU, the devices and alignments match.
- Parameters
-
operands – See Network's documentation.
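As a minimal sketch (with shapes as in the examples above), reset with fresh operands whose shapes, strides, and dtypes match the originals:
>>> new_operands = [np.random.rand(*shape) for shape in shapes]
>>> n.reset_operands(*new_operands)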