Tensor Network APIs¶
The tensor network module, cuquantum.tensornet, provides a Python-friendly interface for users to leverage the cuTensorNet library.
It supports NumPy, CuPy, and PyTorch ndarray-like objects, enabling functionalities such as tensor network contraction,
tensor decomposition, circuit-to-Einsum expression conversion, and a Pythonic tensor network state simulator.
The following sections introduce these features in detail.
Contraction¶
Introduction¶
The contraction APIs support ndarray-like objects from NumPy, CuPy, and PyTorch, and accept the tensor network specified as an Einstein summation expression.
These APIs can be further categorized into two levels:

- The “coarse-grained” level, where the user deals with Python functions like contract(), contract_path(), einsum(), and einsum_path(). The coarse-grained level is an abstraction layer that is typically meant for single contraction operations.
- The “fine-grained” level, where the interaction is through operations on a Network object. The fine-grained level allows the user to invest significant resources into finding an optimal contraction path and autotuning the network, where repeated contractions on the same network object allow the cost to be amortized (see also Resource management).
The APIs also allow for interoperability between the cuTensorNet library and external packages. For example, the user can specify a contraction order obtained from a different package (perhaps a research project). Alternatively, the user can obtain the contraction order and the sliced modes from cuTensorNet for downstream use elsewhere.
Usage example¶
Contracting the same tensor network demonstrated in the cuTensorNet C example is as simple as:
from cuquantum.tensornet import contract
from numpy.random import rand
a = rand(96,64,64,96)
b = rand(96,64,64)
c = rand(64,96,64)
r = contract("mhkn,ukh,xuy->mxny", a, b, c)
If desired, various options can be provided for the contraction.
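For illustration, the minimal sketch below (reusing the operands above) passes both network options and optimizer options as plain dictionaries; the values shown are arbitrary, not recommendations:

# illustrative values: run on device 0 and draw 8 hyperoptimizer samples
r = contract(
    "mhkn,ukh,xuy->mxny", a, b, c,
    options={'device_id': 0},
    optimize={'samples': 8},
)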
For PyTorch tensors, starting with cuQuantum Python v23.10 the contract() function works like a native PyTorch operator: it can be recorded in the autograd graph and supports backward-mode automatic differentiation. See contract() for more details and examples.
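As a minimal sketch (assuming a CUDA-capable PyTorch build):

import torch
from cuquantum.tensornet import contract

a = torch.rand(4, 8, device='cuda', requires_grad=True)
b = torch.rand(8, 4, device='cuda', requires_grad=True)

r = contract('ij,jk->ik', a, b)
# the contraction is recorded in the autograd graph, so gradients flow back
r.sum().backward()
print(a.grad.shape, b.grad.shape)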
Starting with cuQuantum v22.11 / cuTensorNet v2.0.0, cuTensorNet supports automatic MPI parallelism if users bind an MPI communicator to the library handle, among other requirements as outlined here. To illustrate, assuming all processes hold the same set of input tensors (on distinct GPUs) and the same network expression, this should work out of the box:
import cupy as cp
from cupy.cuda.runtime import getDeviceCount
from mpi4py import MPI

from cuquantum.tensornet import contract
from cuquantum.bindings import cutensornet as cutn

# bind comm to cuTensorNet handle
handle = cutn.create()
comm = MPI.COMM_WORLD
cutn.distributed_reset_configuration(
    handle, *cutn.get_mpi_comm_pointer(comm))

# make each process run on a different GPU
rank = comm.Get_rank()
device_id = rank % getDeviceCount()
cp.cuda.Device(device_id).use()

# 1. assuming input tensors a, b, and c are created on the right GPU
# 2. passing the handle explicitly allows reusing it to reduce the handle creation overhead
r = contract(
    "mhkn,ukh,xuy->mxny", a, b, c,
    options={'device_id': device_id, 'handle': handle})
An end-to-end Python example of such auto-MPI usage can be found at https://github.com/NVIDIA/cuQuantum/blob/main/python/samples/tensornet/contraction/coarse/example22_mpi_auto.py.
Note
As of cuQuantum v22.11 / cuTensorNet v2.0.0, the Python wheel does not have the required MPI wrapper library included. Users need to either build it from source (included in the wheel), or use the Conda package from conda-forge instead.
Finally, for users seeking full control over the tensor network operations and parallelization, we offer fine-grained APIs, as illustrated by the examples in the documentation for Network. A complete example illustrating a parallel implementation of tensor network contraction using the fine-grained API is shown below:
from cupy.cuda.runtime import getDeviceCount
from mpi4py import MPI
import numpy as np
from cuquantum.tensornet import Network
root = 0
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
expr = 'ehl,gj,edhg,bif,d,c,k,iklj,cf,a->ba'
shapes = [(8, 2, 5), (5, 7), (8, 8, 2, 5), (8, 6, 3), (8,), (6,), (5,), (6, 5, 5, 7), (6, 3), (3,)]
# Set the operand data on root.
operands = [np.random.rand(*shape) for shape in shapes] if rank == root else None
# Broadcast the operand data.
operands = comm.bcast(operands, root)
# Assign the device for each process.
device_id = rank % getDeviceCount()
# Create network object.
network = Network(expr, *operands, options={'device_id' : device_id})
# Compute the path on all ranks with 8 samples for hyperoptimization. Force slicing to enable parallel contraction.
path, info = network.contract_path(optimize={'samples': 8, 'slicing': {'min_slices': max(16, size)}})
# Select the best path from all ranks.
opt_cost, sender = comm.allreduce(sendobj=(info.opt_cost, rank), op=MPI.MINLOC)
if rank == root:
    print(f"Process {sender} has the path with the lowest FLOP count {opt_cost}.")
# Broadcast info from the sender to all other ranks.
info = comm.bcast(info, sender)
# Set path and slices.
path, info = network.contract_path(optimize={'path': info.path, 'slicing': info.slices})
# Calculate this process's share of the slices.
num_slices = info.num_slices
chunk, extra = num_slices // size, num_slices % size
slice_begin = rank * chunk + min(rank, extra)
slice_end = num_slices if rank == size - 1 else (rank + 1) * chunk + min(rank + 1, extra)
slices = range(slice_begin, slice_end)
print(f"Process {rank} is processing slice range: {slices}.")
# Contract the group of slices the process is responsible for.
result = network.contract(slices=slices)
# Sum the partial contribution from each process on root.
result = comm.reduce(sendobj=result, op=MPI.SUM, root=root)
# Check correctness.
if rank == root:
    result_np = np.einsum(expr, *operands, optimize=True)
    print("Does the cuQuantum parallel contraction result match the numpy.einsum result?", np.allclose(result, result_np))
This “manual” MPI Python example can be found in the NVIDIA/cuQuantum repository (here).
Call blocking behavior¶
By default, calls to the execution APIs (Network.autotune() and Network.contract() on the Network object, as well as the function contract()) block and do not return until the operation is completed. This behavior can be changed by setting NetworkOptions.blocking and passing the options to Network. When NetworkOptions.blocking is set to 'auto', calls to the execution APIs return immediately after the operation is launched on the GPU, without waiting for it to complete, provided the input tensors reside on the device. If the input tensors reside on the host, the execution API calls always block, since the result of the contraction is a tensor that will also reside on the host.
APIs that execute on the host (such as the Network.contract_path() method on the Network object, and the contract_path() and einsum_path() functions) always block.
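For instance, a minimal sketch of non-blocking execution with device operands (the shapes are illustrative):

import cupy as cp
from cuquantum.tensornet import Network

a = cp.random.rand(64, 64)
b = cp.random.rand(64, 64)

# with device operands and blocking='auto', execution calls return as soon
# as the work is launched on the GPU
n = Network('ij,jk->ik', a, b, options={'blocking': 'auto'})
n.contract_path()
r = n.contract()                              # returns without waiting
cp.cuda.get_current_stream().synchronize()    # user-managed ordering before consuming r
n.free()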
Stream semantics¶
The stream semantics depends on whether the behavior of the execution APIs is chosen to be blocking or non-blocking (see Call blocking behavior).
For blocking behavior, stream ordering is automatically handled by the cuQuantum Python high-level APIs for operations that are performed within the package. A stream can be provided for two reasons:
1. When the computation that prepares the input tensors is not already complete by the time the execution APIs are called. This is a correctness requirement for user-provided data.

2. To enable parallel computations across multiple streams if the device has sufficient resources and the current stream (which is the default) has concomitant operations. This can be done for performance reasons.
For non-blocking behavior, it is the user’s responsibility to ensure correct stream ordering between the execution API calls.
In any case, the execution APIs are launched on the provided stream.
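For example, a minimal sketch of providing a stream for correct ordering, assuming the CuPy operands are prepared on a non-default stream:

import cupy as cp
from cuquantum.tensornet import contract

stream = cp.cuda.Stream()
with stream:
    a = cp.random.rand(64, 64)   # input preparation enqueued on `stream`
    b = cp.random.rand(64, 64)

# passing the same stream orders the contraction after the preparation work
r = contract('ij,jk->ik', a, b, stream=stream)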
Resource management¶
An important aspect of the fine-grained, stateful object APIs (e.g., Network) is resource management: the internal resources, including library resources, memory resources, and user-provided input operands, must be managed safely throughout the object’s lifetime. As such, there are caveats that users should be aware of, due to their impact on the memory watermark.
The stateful object APIs allow users to prepare an object and reuse it for multiple contractions or gradient calculations to amortize the preparation cost. Depending on the specific problem, investing in preparations that lead to shorter execution time may be the ideal solution. During the preparation step, an object inevitably holds references to device memory for later reuse. However, such problems often imply high memory usage, making it impossible to hold multiple objects at the same time. Interleaving contractions of multiple large tensor networks is an example of this.
To address this use case, starting with cuQuantum Python v24.03 two new features are added:

- Every execution method now accepts a release_workspace option. When this option is set to True (the default is False), the memory needed to perform an operation is freed before the method returns, making this memory available for other tasks. The next time the same (or a different) method is called, memory is allocated on demand. Therefore, there is a small overhead associated with release_workspace=True, as allocating/deallocating memory can take time depending on the implementation of the underlying memory allocator (see the next section); however, it becomes possible for multiple Network objects to coexist, see, e.g., the example6_resource_mgmt_contraction.py sample.
- The reset_operands() method now accepts operands=None to free the internal reference to the input operands after the execution. This reduces potential memory contention and thereby allows contracting multiple networks with large input tensors in an interleaved fashion. In such cases, before a subsequent execution on the same Network object, the reset_operands() method should be called again with the new operands, see, e.g., the example8_reset_operand_none.py sample.
These two features, used separately or jointly as the problem requires, make it possible to prepare and use a large number of Network objects when the available device memory is not enough to fit all problems at once.
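A minimal sketch of this interleaving pattern (the shapes are chosen purely for illustration):

import cupy as cp
from cuquantum.tensornet import Network

expr = 'ij,jk->ik'
n1 = Network(expr, cp.random.rand(1024, 1024), cp.random.rand(1024, 1024))
n2 = Network(expr, cp.random.rand(1024, 1024), cp.random.rand(1024, 1024))
n1.contract_path()
n2.contract_path()

# free each network's workspace before returning, so both objects can coexist
r1 = n1.contract(release_workspace=True)
n1.reset_operands(None)   # also drop the references to n1's input operands
r2 = n2.contract(release_workspace=True)

# before contracting n1 again, provide new operands of matching shape/dtype
n1.reset_operands(cp.random.rand(1024, 1024), cp.random.rand(1024, 1024))
r3 = n1.contract(release_workspace=True)
n1.free(); n2.free()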
External memory management¶
Starting with cuQuantum Python v22.03, we support an EMM-like interface, as proposed and supported by Numba, for users to set their Python mempool. Users set the option NetworkOptions.allocator to a Python object complying with the cuquantum.BaseCUDAMemoryManager protocol and pass the options to the Pythonic APIs like contract() or Network. Temporary memory allocations will then be done through this interface. (Internally, we use the same interface to use CuPy’s or PyTorch’s mempool, depending on the input tensor operands.)
Note
cuQuantum’s BaseCUDAMemoryManager protocol is slightly different from Numba’s EMM interface (numba.cuda.BaseCUDAMemoryManager), but duck typing with an existing EMM instance (not type!) at runtime should be possible.
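As a minimal sketch of the protocol, the hypothetical allocator below serves allocations from CuPy’s default memory pool; it assumes the stream argument arrives as a raw CUDA stream pointer usable with cupy.cuda.ExternalStream:

import cupy as cp
from cuquantum import MemoryPointer
from cuquantum.tensornet import contract

class CupyPoolAllocator:
    """Hypothetical allocator complying with the BaseCUDAMemoryManager protocol."""
    def memalloc(self, size, stream):
        with cp.cuda.ExternalStream(stream):
            buf = cp.cuda.alloc(size)     # draw from CuPy's default pool
        holder = [buf]                    # keep the buffer alive
        # when cuTensorNet is done, the finalizer drops the last reference,
        # returning the block to CuPy's pool
        return MemoryPointer(buf.ptr, size, finalizer=holder.clear)

a = cp.random.rand(8, 8)
b = cp.random.rand(8, 8)
r = contract('ij,jk->ik', a, b, options={'allocator': CupyPoolAllocator()})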
Decomposition¶
Introduction¶
Decomposition methods such as QR and SVD are prevalent in tensor network algorithms, as they allow one to exploit the sparsity of the network and thus reduce the computational cost.
Starting with cuQuantum Python v23.03, we provide these functionalities at both the tensor and tensor network levels.
The tensor-level decomposition routines are implemented in the module cuquantum.tensornet.tensor with the following features:

- QR decomposition can be performed using cuquantum.tensornet.tensor.decompose() with cuquantum.tensornet.tensor.QRMethod.
- Both exact and truncated SVD can be performed using cuquantum.tensornet.tensor.decompose() with cuquantum.tensornet.tensor.SVDMethod.
- Decomposition options can be specified via cuquantum.tensornet.tensor.DecompositionOptions.
As of cuQuantum Python v23.03, the tensor network level decomposition routines are implemented in the experimental subpackage cuquantum.tensornet.experimental, with the main API cuquantum.tensornet.experimental.contract_decompose().
Given an input tensor network, this function can perform a full contraction followed by a QR or SVD decomposition; the algorithm can be specified via cuquantum.tensornet.experimental.ContractDecomposeAlgorithm.
If the contract-and-decompose problem amounts to a ternary-operand gate split problem, commonly seen in quantum circuit simulation (see Gate Split Algorithm for details), the user can potentially leverage QR decompositions to speed up the execution of the contraction and SVD. This can be achieved by setting both cuquantum.tensornet.experimental.ContractDecomposeAlgorithm.qr_method and cuquantum.tensornet.experimental.ContractDecomposeAlgorithm.svd_method.
Note
The APIs inside cuquantum.tensornet.experimental are subject to change and may be integrated into the main package cuquantum.tensornet in a future release.
Users are encouraged to leave feedback on NVIDIA/cuQuantum GitHub Discussions.
Usage example¶
import cupy
from cuquantum.tensornet import contract
from cuquantum.tensornet.tensor import decompose
from cuquantum.tensornet.experimental import contract_decompose
# create a random rank-4 tensor
a = cupy.random.random((2,2,2,2)) + cupy.random.random((2,2,2,2)) * 1j
# perform QR decomposition such that A[i,j,k,l] = \sum_{x} Q[i,x,k] R[x,j,l]
q, r = decompose('ijkl->ixk,xjl', a) # QR by default
# check the unitary property of q
identity = contract('ixk,iyk->xy', q, q.conj())
identity_reference = cupy.eye(identity.shape[0])
assert cupy.allclose(identity, identity_reference)
# check if the contraction of the decomposition outputs yields the input
a_reference = contract('ixk,xjl->ijkl', q, r)
assert cupy.allclose(a, a_reference)
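The contract_decompose() API imported above can be exercised in a similar spirit. Here is a minimal sketch that contracts two tensors over a shared mode and then splits the result with an exact SVD, assuming the default SVDMethod options (which return the singular values separately):

t1 = cupy.random.random((2, 2, 2))
t2 = cupy.random.random((2, 2, 2))
# contract over k, then decompose across the {i,j} / {l,m} bipartition
u, s, v = contract_decompose(
    'ijk,klm->ijx,xlm', t1, t2,
    algorithm={'qr_method': False, 'svd_method': {}})
# absorb the singular values into u and verify the reconstruction
us = u * s                       # broadcasts s over the shared mode x
out = contract('ijx,xlm->ijlm', us, v)
assert cupy.allclose(out, contract('ijk,klm->ijlm', t1, t2))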
More examples on tensor decompositions are available in our sample directory to demonstrate the use of QR and SVD in different settings.
For tensor network decompositions, please refer to this directory for more detailed examples. We have also provided a Jupyter notebook to demonstrate how to easily implement basic MPS algorithms using these new APIs.
CircuitToEinsum converter¶
Introduction¶
Starting with cuQuantum Python v22.07, we provide a CircuitToEinsum converter that takes either a qiskit.QuantumCircuit or a cirq.Circuit and generates the corresponding tensor network contraction for the target operation. The goal of the converter is to allow Qiskit and Cirq users to easily explore the functionalities of the cuTensorNet library. As mentioned in the tensor network introduction, quantum circuits can be viewed as tensor networks. For any quantum circuit, CircuitToEinsum can construct the corresponding tensor network to compute various quantities of interest. The output tensor network is returned as an Einstein summation expression with tensor operands.
We support the following operations:

- state_vector(): The contraction of this Einstein summation expression yields the final state coefficients as an N-dimensional tensor, where N is the number of qubits in the circuit. The mode labels of the tensor correspond to CircuitToEinsum.qubits.
- amplitude(): The contraction of this Einstein summation expression yields the amplitude coefficient for a given bitstring.
- batched_amplitudes(): The contraction of this Einstein summation expression yields the amplitude coefficients for a subset of qubits while the others are fixed at certain states.
- reduced_density_matrix(): The contraction of this Einstein summation expression yields the reduced density matrix for a subset of qubits, optionally with another subset of qubits fixed at a certain state.
- expectation(): The contraction of this Einstein summation expression yields the expectation value for a given Pauli string.
The CircuitToEinsum class also allows users to specify a desired tensor backend (cupy, torch, numpy) via the backend argument when constructing the converter object. The returned Einstein summation expression and tensor operands can then directly serve as input arguments for cuquantum.contract() or the corresponding backend’s einsum function.
Usage example¶
import cirq
import cupy
from cuquantum.tensornet import contract, CircuitToEinsum
# create a random cirq.Circuit
circuit = cirq.testing.random_circuit(qubits=4, n_moments=4, op_density=0.9, random_state=1)
# same task can be achieved with qiskit.circuit.random.random_circuit
# construct the CircuitToEinsum converter targeting double precision and cupy operands
converter = CircuitToEinsum(circuit, dtype='complex128', backend='cupy')
# generate the Einstein summation expression and tensor operands for computing the amplitude coefficient of bitstring 0000
expression, operands = converter.amplitude(bitstring='0000')
assert all([isinstance(op, cupy.ndarray) for op in operands])
# contract the network to compute the amplitude
amplitude = contract(expression, *operands)
amplitude_cupy = cupy.einsum(expression, *operands)
assert cupy.allclose(amplitude, amplitude_cupy)
Multiple Jupyter notebooks are available for Cirq and Qiskit users to easily build up their tensor network based simulations using cuTensorNet.
Tensor network simulator¶
Introduction¶
Starting with cuQuantum Python v24.08, we provide new APIs that enable Python users to easily leverage the cuTensorNet tensor network state APIs for tensor network simulation.
These APIs are available under the cuquantum.tensornet.experimental module and may be subject to change in future releases.
Please share your feedback with us on NVIDIA/cuQuantum GitHub Discussions!
The new set of APIs is centered around the NetworkState class and is designed to support the following groups of users:

- Quantum Computing Framework Users: These users can directly initialize a tensor network state from quantum circuit objects such as cirq.Circuit or qiskit.QuantumCircuit via the NetworkState.from_circuit() method.
- Tensor Network Framework Developers and Researchers: These users can build any state of interest by applying tensor operators and matrix product operators (MPOs), and by setting the initial state to a matrix product state (MPS). The methods involved here include NetworkState.apply_tensor_operator(), NetworkState.update_tensor_operator(), NetworkState.set_initial_mps(), NetworkState.apply_mpo(), and NetworkState.apply_network_operator(). Ndarray-like objects from NumPy, CuPy, and PyTorch are all supported as input operands.
Note
- The NetworkState class supports arbitrary state dimensions beyond regular quantum circuit states with qubits (d=2). An example of simulating a complex state with non-uniform state dimensions can be found in the arbitrary state example.
- For both MPS and MPO, only the open boundary condition is supported.
Users can further specify the tensor network simulation method as one of the following:

- Contraction-Based Simulations: Specify config as a TNConfig object.
- MPS-Based Simulations: Specify config as an MPSConfig object, offering detailed control over truncation extents, canonical centers, SVD algorithms, and normalization options.
Once the problem is fully specified, users can take advantage of the following execution APIs to compute various properties:

- NetworkState.compute_state_vector(): Computes the final state coefficients as an N-dimensional tensor with extents matching the specified state.
- NetworkState.compute_amplitude(): Computes the amplitude coefficient for a given bitstring.
- NetworkState.compute_batched_amplitudes(): Computes the batched amplitude coefficients for a subset of state dimensions while the others are fixed at certain states.
- NetworkState.compute_reduced_density_matrix(): Computes the reduced density matrix for a subset of state dimensions, optionally fixing another subset to specific states.
- NetworkState.compute_expectation(): Computes the expectation value for a given tensor network operator, which can be specified as a sum of tensor products (such as Pauli operators) or MPOs with coefficients.
- NetworkState.compute_sampling(): Draws samples from the underlying state, with options to sample just a subset of all state dimensions.
- NetworkState.compute_norm(): Computes the norm of the tensor network state.
Additionally, the NetworkOperator class allows users to create a network operator object as a sum of tensor products (via NetworkOperator.append_product()) or MPOs (via NetworkOperator.append_mpo()) with coefficients. This object can then interact with the NetworkState class, enabling users to apply an MPO to the state or compute the expectation value of the operator on the state using methods like NetworkState.apply_network_operator() and NetworkState.compute_expectation().
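To make the workflow concrete, here is a minimal sketch that builds a two-qubit Bell state from individual gate tensors and queries several properties. The gate data layout (matrices reshaped to one pair of extents per mode) follows the convention used in the NetworkState examples; treat the snippet as illustrative rather than definitive:

import numpy as np
from cuquantum.tensornet.experimental import NetworkState, TNConfig

h = np.asarray([[1, 1], [1, -1]], dtype='complex128') / np.sqrt(2)
cx = np.asarray([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype='complex128').reshape(2, 2, 2, 2)

state = NetworkState((2, 2), dtype='complex128', config=TNConfig())
state.apply_tensor_operator((0,), h, unitary=True)
state.apply_tensor_operator((0, 1), cx, unitary=True)

amp = state.compute_amplitude('00')      # amplitude of the all-zero bitstring
samples = state.compute_sampling(100)    # mapping of bitstrings to counts
expec = state.compute_expectation('ZZ')  # Pauli-string expectation value
state.free()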
Caching feature¶
As of cuQuantum v24.08, NetworkState offers preliminary caching support for all execution methods with a compute_ prefix when contraction-based tensor network simulation, or MPS simulation without value-based truncation, is used.
During the first call to one of these methods, the underlying cuTensorNet C object for the requested property is created, prepared, cached, and then executed to compute the final output.
On subsequent calls to the same method with compatible parameters, and without updating the state via NetworkState.apply_tensor_operator(), NetworkState.apply_mpo(), NetworkState.set_initial_mps(), or NetworkState.apply_network_operator() (it’s okay to call NetworkState.update_tensor_operator()), the cached C object is reused to compute the final output, thus reducing the overhead of C object creation and preparation.
Compatible parameters have different contexts for different execution methods:

- For NetworkState.compute_state_vector(), NetworkState.compute_amplitude(), and NetworkState.compute_norm(), any parameters will result in using the same cached object.
- For NetworkState.compute_batched_amplitudes(), the set of state dimensions specified by fixed must be identical, while the fixed state for each dimension may differ.
- For NetworkState.compute_reduced_density_matrix(), the where parameter and the set of state dimensions specified by fixed must be identical, while the fixed state for each dimension may differ.
- For NetworkState.compute_expectation(), the same NetworkOperator object with unchanged underlying components must be used. Providing operators as a string of Pauli operators or as a dictionary mapping Pauli strings to coefficients will not activate the caching mechanism.
- For NetworkState.compute_sampling(), the same modes parameter is required to activate the caching mechanism.
For more details, please refer to our cirq caching example and qiskit caching example.
Additionally, users can leverage the caching feature along with the NetworkState.update_tensor_operator() method to reduce the overhead in variational workflows, where the same computation needs to be performed on numerous states with identical topologies.
For more details, please refer to our variational workflow example.
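A minimal sketch of this pattern follows; the rx() helper below is a hypothetical convenience for this example, not part of the API:

import numpy as np
from cuquantum.tensornet.experimental import NetworkState, TNConfig

def rx(theta):
    # hypothetical helper building a single-qubit RX gate matrix
    c, s = np.cos(theta / 2), -1j * np.sin(theta / 2)
    return np.asarray([[c, s], [s, c]], dtype='complex128')

state = NetworkState((2,), config=TNConfig())
tensor_id = state.apply_tensor_operator((0,), rx(0.0), unitary=True)

for theta in np.linspace(0, np.pi, 5):
    # same topology, new gate data: the cached expectation object is reused
    state.update_tensor_operator(tensor_id, rx(theta), unitary=True)
    print(theta, state.compute_expectation('Z').real)
state.free()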
MPI support¶
As of cuQuantum v24.08, NetworkState offers preliminary distributed parallel support for all execution methods with a compute_ prefix when contraction-based tensor network simulation (i.e., TNConfig) is used.
To activate distributed parallel execution, users must perform the following tasks (a sketch follows the list):

- Explicitly set the device ID to use in cuquantum.tensornet.NetworkOptions.device_id and provide it to NetworkState via the options parameter.
- Explicitly create the library handle on the corresponding device using cuquantum.bindings.cutensornet.create(), bind an MPI communicator to the library handle using cuquantum.bindings.cutensornet.distributed_reset_configuration(), and provide the handle to NetworkState via the options parameter.
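A minimal sketch of this setup, assuming a circuit object is available on every process:

import cupy as cp
from cupy.cuda.runtime import getDeviceCount
from mpi4py import MPI

from cuquantum.bindings import cutensornet as cutn
from cuquantum.tensornet.experimental import NetworkState, TNConfig

comm = MPI.COMM_WORLD
device_id = comm.Get_rank() % getDeviceCount()
cp.cuda.Device(device_id).use()

# create the handle on the selected device and bind the communicator to it
handle = cutn.create()
cutn.distributed_reset_configuration(
    handle, *cutn.get_mpi_comm_pointer(comm))

state = NetworkState.from_circuit(
    circuit, dtype='complex128', backend='cupy', config=TNConfig(),
    options={'device_id': device_id, 'handle': handle})
samples = state.compute_sampling(1000)   # executed distributively across ranks
state.free()
cutn.destroy(handle)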
For more details, please refer to our cirq mpi sampling example and qiskit mpi sampling example.
API reference¶
Objects¶
Network: Create a tensor network object specified as an Einstein summation expression.
CircuitToEinsum: Create a converter object that can generate Einstein summation expressions and tensor operands for a given circuit.
NetworkOptions: A data class for providing options to the Network object.
OptimizerInfo: A data class for capturing optimizer information.
OptimizerOptions: A data class for providing options to the cuTensorNet optimizer.
PathFinderOptions: A data class for capturing the path finder options.
ReconfigOptions: A data class for capturing the reconfiguration options.
SlicerOptions: A data class for capturing the slicer options.
Python functions¶
contract(): Evaluate the Einstein summation convention on the operands.
contract_path(): Evaluate the "best" contraction order by allowing the creation of intermediate tensors.
einsum(): A drop-in replacement of numpy.einsum().
einsum_path(): A drop-in replacement of numpy.einsum_path().
get_mpi_comm_pointer(): Simple helper to get the address to and size of an MPI_Comm handle.
Tensor submodule¶
decompose(): Perform the tensor decomposition of the operand based on the expression described by subscripts.
DecompositionOptions: A data class for providing options to the decompose() function.
QRMethod: A data class for providing QR options to the decompose() function.
SVDInfo: A data class for holding information regarding SVD truncation at runtime.
SVDMethod: A data class for providing SVD options to the decompose() function.
Experimental submodule¶
contract_decompose(): Evaluate the compound expression for contraction and decomposition on the input operands.
NetworkState: Create an empty tensor network state.
NetworkOperator: Create a tensor network operator object.
ContractDecomposeAlgorithm: A data class for specifying the algorithm to use for the contract and decompose operations.
ContractDecomposeInfo: A data class for capturing contract-decompose information.
MPSConfig: A data class for MPS based tensor network simulation configuration that can be provided to the NetworkState object.
TNConfig: A data class for contraction based tensor network simulation configuration that can be provided to the NetworkState object.