Tensor network simulator¶
Introduction¶
Starting from cuQuantum Python v24.08, we provide new APIs that enable Python users to easily leverage cuTensorNet state APIs for tensor network simulation.
These APIs are now available under the cuquantum.cutensornet.experimental module and may be subject to change in future releases.
Please share your feedback with us on NVIDIA/cuQuantum GitHub Discussions!
The new set of APIs is centered around the NetworkState class and is designed to support the following groups of users:

- Quantum Computing Framework Users: These users can directly initialize a tensor network state from quantum circuit objects such as cirq.Circuit or qiskit.QuantumCircuit via the NetworkState.from_circuit() method (a short sketch follows this list).
- Tensor Network Framework Developers and Researchers: These users can build any state of interest by applying tensor operators and matrix product operators (MPOs), and by setting the initial state to a matrix product state (MPS). The methods involved here include NetworkState.apply_tensor_operator(), NetworkState.update_tensor_operator(), NetworkState.set_initial_mps(), NetworkState.apply_mpo(), and NetworkState.apply_network_operator(). Ndarray-like objects from NumPy, CuPy, and PyTorch are all supported as input operands.
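For the circuit-based entry point, a minimal sketch is shown below. It assumes Qiskit is installed and that the default simulation configuration is acceptable; the dtype keyword and the string form of the bitstring passed to NetworkState.compute_amplitude() follow our reading of the API reference.

```python
import qiskit
from cuquantum.cutensornet.experimental import NetworkState

# Build a small Bell-state circuit with Qiskit.
circuit = qiskit.QuantumCircuit(2)
circuit.h(0)
circuit.cx(0, 1)

# Initialize the tensor network state directly from the circuit object.
state = NetworkState.from_circuit(circuit, dtype='complex128')

# Query a single amplitude; '00' should come out close to 1/sqrt(2).
print(state.compute_amplitude('00'))
```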
Note

The NetworkState class supports arbitrary state dimensions beyond regular quantum circuit states with qubits (d=2). An example of simulating a complex state with non-uniform state dimensions can be found in the arbitrary state example. For both MPS and MPO, only the open boundary condition is supported.
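As a brief illustration of non-uniform state dimensions, the sketch below builds a three-site state with a qutrit in the middle. Passing the dimensions as the first positional argument of NetworkState and the (modes, operand) argument order of apply_tensor_operator() follow our reading of the API reference; the default simulation configuration is used.

```python
import numpy as np
from cuquantum.cutensornet.experimental import NetworkState

# A three-site state with non-uniform dimensions: qubit, qutrit, qubit.
state = NetworkState((2, 3, 2), dtype='complex128')

# Apply a random operator on sites 0 and 1; the operand carries one pair of
# modes per targeted site, hence the shape (2, 3, 2, 3).
op = np.random.random((2, 3, 2, 3)) + 1j * np.random.random((2, 3, 2, 3))
state.apply_tensor_operator((0, 1), op)

# The full set of coefficients is returned as a tensor of shape (2, 3, 2).
print(state.compute_state_vector().shape)
```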
Users can further specify the tensor network simulation method as one of the following (a configuration sketch follows this list):

- Contraction-Based Simulations: Specify config as a TNConfig object.
- MPS-Based Simulations: Specify config as an MPSConfig object, which offers detailed control over truncation extents, canonical centers, SVD algorithms, and normalization options.
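A minimal sketch of both configuration styles is shown below; the MPSConfig attribute names used here (max_extent, rel_cutoff) reflect our reading of the MPSConfig reference and, like the rest of the experimental module, may evolve.

```python
from cuquantum.cutensornet.experimental import NetworkState, TNConfig, MPSConfig

# Exact, contraction-based simulation of a 4-qubit state.
exact_state = NetworkState((2, 2, 2, 2), config=TNConfig())

# Approximate MPS simulation with a bounded bond dimension and a relative
# singular-value truncation cutoff.
mps_state = NetworkState((2, 2, 2, 2), config=MPSConfig(max_extent=16, rel_cutoff=1e-8))
```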
Once the problem is fully specified, users can take advantage of the following execution APIs to compute various properties (a usage sketch follows this list):

- NetworkState.compute_state_vector(): Computes the final state coefficients as an N-dimensional tensor with extents matching the specified state.
- NetworkState.compute_amplitude(): Computes the amplitude coefficient for a given bitstring.
- NetworkState.compute_batched_amplitudes(): Computes the batched amplitude coefficients for a subset of state dimensions while the others are fixed at specific states.
- NetworkState.compute_reduced_density_matrix(): Computes the reduced density matrix for a subset of state dimensions, optionally fixing another subset to specific states.
- NetworkState.compute_expectation(): Computes the expectation value for a given tensor network operator, which can be specified as a sum of tensor products (such as Pauli operators) or MPOs with coefficients.
- NetworkState.compute_sampling(): Draws samples from the underlying state, with the option to sample only a subset of all state dimensions.
- NetworkState.compute_norm(): Computes the norm of the tensor network state.
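The sketch below exercises several of these execution APIs on a small gate-based state. The positional shot count for compute_sampling(), the unitary=True flag, and the rank-4 gate-operand layout follow our reading of the API reference and should be checked against it.

```python
import cupy as cp
from cuquantum.cutensornet.experimental import NetworkState, TNConfig

n_qubits = 3
state = NetworkState((2,) * n_qubits, dtype='complex128', config=TNConfig())

# Gate operands: single-qubit gates are rank-2, two-qubit gates rank-4.
h = cp.asarray([[1, 1], [1, -1]], dtype='complex128') / cp.sqrt(2)
cx = cp.asarray([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype='complex128').reshape(2, 2, 2, 2)
state.apply_tensor_operator((0,), h, unitary=True)
state.apply_tensor_operator((0, 1), cx, unitary=True)
state.apply_tensor_operator((1, 2), cx, unitary=True)

amplitude = state.compute_amplitude('000')        # single bitstring amplitude
rdm = state.compute_reduced_density_matrix((0,))  # reduced density matrix on qubit 0
energy = state.compute_expectation('ZZI')         # Pauli-string expectation value
samples = state.compute_sampling(1000)            # 1000 shots over all qubits
norm = state.compute_norm()
```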
Additionally, the NetworkOperator class allows users to create a network operator object as a sum of tensor products (via NetworkOperator.append_product()) or MPOs (via NetworkOperator.append_mpo()) with coefficients. This object can then interact with the NetworkState class, enabling users to apply an MPO to the state or compute the expectation value of the operator on the state using methods like NetworkState.apply_network_operator() and NetworkState.compute_expectation().
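A hedged sketch of this operator workflow is shown below; in particular, the (coefficient, modes, tensors) argument layout of NetworkOperator.append_product(), with one mode tuple per tensor factor, is our reading of the reference and should be verified against it.

```python
import cupy as cp
from cuquantum.cutensornet.experimental import NetworkState, NetworkOperator

n_qubits = 2
pauli_z = cp.asarray([[1, 0], [0, -1]], dtype='complex128')
pauli_x = cp.asarray([[0, 1], [1, 0]], dtype='complex128')

# Operator 0.5 * Z0 Z1 + 0.25 * X0 expressed as a sum of tensor products with
# coefficients. The (coefficient, modes, tensors) argument order is assumed.
hamiltonian = NetworkOperator((2,) * n_qubits, dtype='complex128')
hamiltonian.append_product(0.5, [(0,), (1,)], [pauli_z, pauli_z])
hamiltonian.append_product(0.25, [(0,)], [pauli_x])

# A simple state: Hadamard on qubit 0.
state = NetworkState((2,) * n_qubits, dtype='complex128')
h = cp.asarray([[1, 1], [1, -1]], dtype='complex128') / cp.sqrt(2)
state.apply_tensor_operator((0,), h, unitary=True)

# Expectation value of the operator on the state.
print(state.compute_expectation(hamiltonian))
```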
Caching feature¶
As of cuQuantum v24.08, the NetworkState class offers preliminary caching support for all execution methods with a compute_ prefix when contraction-based tensor network simulation or MPS simulation without value-based truncation is used.
During the first call to one of these methods, the underlying cuTensorNet C object for the requested property is created, prepared, cached, and then executed to compute the final output.
On subsequent calls to the same method with compatible parameters, and without updating the state via NetworkState.apply_tensor_operator(), NetworkState.apply_mpo(), NetworkState.set_initial_mps(), or NetworkState.apply_network_operator() (calling NetworkState.update_tensor_operator() is fine), the cached C object is reused to compute the final output, thus reducing the overhead of C object creation and preparation.
Compatible parameters mean different things for different execution methods:

- For NetworkState.compute_state_vector(), NetworkState.compute_amplitude(), and NetworkState.compute_norm(), any parameters will result in reuse of the same cached object.
- For NetworkState.compute_batched_amplitudes(), the set of state dimensions specified by fixed must be identical, while the fixed state for each dimension may differ.
- For NetworkState.compute_reduced_density_matrix(), the where parameter and the set of state dimensions specified by fixed must be identical, while the fixed state for each dimension may differ.
- For NetworkState.compute_expectation(), the same NetworkOperator object with unchanged underlying components must be used. Providing operators as a string of Pauli operators or as a dictionary mapping Pauli strings to coefficients will not activate the caching mechanism.
- For NetworkState.compute_sampling(), the same modes parameter is required to activate the caching mechanism.
For more details, please refer to our cirq caching example and qiskit caching example.
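The sketch below illustrates the cache-friendly sampling pattern described above: keeping modes fixed across calls lets the second call reuse the sampler prepared by the first. The positional shot count and the modes keyword follow our reading of compute_sampling().

```python
import cupy as cp
from cuquantum.cutensornet.experimental import NetworkState, TNConfig

state = NetworkState((2, 2, 2), dtype='complex128', config=TNConfig())
h = cp.asarray([[1, 1], [1, -1]], dtype='complex128') / cp.sqrt(2)
for q in range(3):
    state.apply_tensor_operator((q,), h, unitary=True)

# First call: the underlying sampler is created, prepared, cached, and executed.
samples = state.compute_sampling(100, modes=(0, 1))

# Same modes, different shot count: the cached sampler is reused.
more_samples = state.compute_sampling(10000, modes=(0, 1))
```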
Additionally, users can leverage the caching feature along with the NetworkState.update_tensor_operator() method to reduce the overhead for variational workflows, where the same computation needs to be performed on numerous states with identical topologies.
For more details, please refer to our variational workflow example.
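A minimal sketch of such a variational loop is shown below; the tensor id returned by apply_tensor_operator() and the unitary flag on update_tensor_operator() follow our reading of the reference, and the all-zero amplitude merely stands in for whatever quantity a real workflow optimizes.

```python
import numpy as np
import cupy as cp
from cuquantum.cutensornet.experimental import NetworkState, TNConfig

def ry(theta):
    """Parameterized single-qubit rotation used as the variational gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return cp.asarray([[c, -s], [s, c]], dtype='complex128')

state = NetworkState((2, 2), dtype='complex128', config=TNConfig())
cx = cp.asarray([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype='complex128').reshape(2, 2, 2, 2)

# Keep the id of the parameterized gate so it can be updated in place.
ry_id = state.apply_tensor_operator((0,), ry(0.1), unitary=True)
state.apply_tensor_operator((0, 1), cx, unitary=True)

values = []
for theta in np.linspace(0.0, np.pi, 8):
    # Updating an existing operator keeps the network topology unchanged,
    # so the cached compute object is reused across iterations.
    state.update_tensor_operator(ry_id, ry(theta), unitary=True)
    values.append(abs(state.compute_amplitude('00')) ** 2)
```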
MPI support¶
As of cuQuantum v24.08, the NetworkState class offers preliminary distributed parallel support for all execution methods with a compute_ prefix when contraction-based tensor network simulation (i.e., TNConfig) is used.
To activate distributed parallel execution, users must perform the following tasks (a setup sketch follows at the end of this section):

- Explicitly set the device ID to use in cuquantum.NetworkOptions.device_id and provide it to NetworkState via the options parameter.
- Explicitly create the library handle on the corresponding device using cuquantum.cutensornet.create(), bind an MPI communicator to the library handle using cuquantum.cutensornet.distributed_reset_configuration(), and provide the handle to NetworkState via the options parameter.
For more details, please refer to our cirq mpi sampling example and qiskit mpi sampling example.
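The sketch below puts these steps together with a per-process device assignment via mpi4py. The helper cuquantum.cutensornet.get_mpi_comm_pointer() and the handle/device_id fields of NetworkOptions are taken from the broader cuQuantum Python API; the cleanup calls at the end follow the usual cuQuantum pattern and should be verified against the reference.

```python
import cupy as cp
from mpi4py import MPI

from cuquantum import NetworkOptions, cutensornet as cutn
from cuquantum.cutensornet.experimental import NetworkState, TNConfig

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Task 1: pick one device per process and make it current.
device_id = rank % cp.cuda.runtime.getDeviceCount()
cp.cuda.Device(device_id).use()

# Task 2: create the library handle on that device and bind the MPI communicator.
handle = cutn.create()
cutn.distributed_reset_configuration(handle, *cutn.get_mpi_comm_pointer(comm))

# Provide both the device ID and the handle to NetworkState via options.
options = NetworkOptions(device_id=device_id, handle=handle)
state = NetworkState((2,) * 4, dtype='complex128', config=TNConfig(), options=options)

h = cp.asarray([[1, 1], [1, -1]], dtype='complex128') / cp.sqrt(2)
for q in range(4):
    state.apply_tensor_operator((q,), h, unitary=True)

# Sampling now runs with distributed parallel execution across processes.
samples = state.compute_sampling(1000)

state.free()
cutn.destroy(handle)
```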