cuquantum.tensornet.experimental.NetworkState¶
- class cuquantum.tensornet.experimental.NetworkState(state_mode_extents, *, dtype='complex128', config=None, state_labels=None, options=None)[source]¶
Create an empty tensor network state.
- Parameters
state_mode_extents – A sequence of integers specifying the extents for all state modes.
dtype –
A string specifying the data type for the network state. The following data types are currently supported:
'float32', 'float64', 'complex64', 'complex128' (default)
config –
The simulation configuration for the state. It can be a TNConfig object for contraction-based tensor network simulation, an MPSConfig object for MPS-based simulation, or a dict containing the parameters for either constructor.
state_labels – Optional, a sequence of labels corresponding to each state dimension. If provided, these labels may be passed in place of mode indices to the following APIs: apply_tensor_operator(), apply_mpo(), compute_batched_amplitudes(), compute_reduced_density_matrix() and compute_sampling(). See the docstring of each of these APIs for more details.
options – Specify options for the state computation as a NetworkOptions object. Alternatively, a dict containing the parameters for the NetworkOptions constructor can also be provided. If not specified, the value will be set to the default-constructed NetworkOptions object.
Notes
Currently NetworkState only supports pure state representation.
If users wish to use a device other than the default current device, it must be explicitly specified via NetworkOptions.device_id.
For MPS simulation, currently only open boundary condition is supported.
Examples
In this example, we directly simulate a quantum circuit instance using the tensor network contraction method.
>>> from cuquantum.tensornet.experimental import NetworkState, TNConfig
>>> import cirq
Define a random cirq.Circuit; note that a qiskit.QuantumCircuit is supported as well via the same API call.
>>> n_qubits = 4
>>> n_moments = 4
>>> op_density = 0.9
>>> circuit = cirq.testing.random_circuit(n_qubits, n_moments, op_density, random_state=2024)
Use tensor network contraction as the simulation method
>>> config = TNConfig(num_hyper_samples=4)
Create the network state object via the from_circuit() method:
>>> state = NetworkState.from_circuit(circuit, dtype='complex128', backend='cupy', config=config)
Compute the amplitude for bitstring 0000
>>> amplitude = state.compute_amplitude('0000')
Compute the expectation for a series of Pauli strings with coefficients
>>> pauli_strings = {'IXIX': 0.4, 'IZIZ': 0.1}
>>> expec = state.compute_expectation(pauli_strings)
Compute the reduced density matrix for the first two qubits. Since the backend is specified as cupy, the returned RDM operand will be a cupy.ndarray.
>>> where = (0, 1)
>>> rdm = state.compute_reduced_density_matrix(where)
>>> print(f"RDM shape for {where}: {rdm.shape}")
RDM shape for (0, 1): (2, 2, 2, 2)
Draw 1000 samples from the state
>>> shots = 1000
>>> samples = state.compute_sampling(shots)
Finally, free the network state resources. If this call isn't made, it may hinder further operations (especially if the network state is large) since the memory will be released only when the object goes out of scope. (To avoid having to explicitly make this call, it is recommended to use the NetworkState object as a context manager.)
>>> state.free()
In addition to initializing the state from a circuit instance, users can construct the state by sequentially applying tensor operators with apply_tensor_operator() and matrix product operators (MPOs) with apply_mpo() or apply_network_operator(). Alternatively, simulations can use the exact or approximate matrix product state (MPS) method by specifying config as an MPSConfig instance; a minimal sketch is shown below. More detailed examples can be found in our NetworkState examples directory.
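For illustration, the following minimal sketch builds a two-qubit Bell state gate by gate with an MPS simulation backend. The gate tensors are constructed here with NumPy, and max_extent is assumed to be an available MPSConfig truncation option:
>>> import numpy as np
>>> from cuquantum.tensornet.experimental import NetworkState, MPSConfig
>>> # two qubits, MPS simulation with an (assumed) bond-dimension cap
>>> state = NetworkState((2, 2), dtype='complex128', config=MPSConfig(max_extent=8))
>>> h = np.asarray([[1, 1], [1, -1]], dtype='complex128') / np.sqrt(2)  # Hadamard
>>> cx = np.zeros((2, 2, 2, 2), dtype='complex128')  # CNOT reshaped to bra/ket mode order ABab
>>> cx[0, 0, 0, 0] = cx[0, 1, 0, 1] = cx[1, 1, 1, 0] = cx[1, 0, 1, 1] = 1
>>> tid_h = state.apply_tensor_operator((0,), h, unitary=True)
>>> tid_cx = state.apply_tensor_operator((0, 1), cx, unitary=True)
>>> amp = state.compute_amplitude('00')  # ~0.7071 for the Bell state
>>> state.free()
Methods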
- __init__(state_mode_extents, *, dtype='complex128', config=None, state_labels=None, options=None)[source]¶
- apply_general_tensor_channel(modes, operands, *, stream=None)[source]¶
Apply a noise channel to the MPS network state. The noise operators may be non-unitary. For a more efficient unitary tensor channel application, see NetworkState.apply_unitary_tensor_channel().
- Parameters
modes – A sequence of integers denoting the modes that the channel acts on. If state_labels has been provided during initialization, modes can also be provided as a sequence of labels.
operands – A sequence of ndarray-like objects for the tensor operators defining the channel. The modes of each operand are expected to be ordered as ABC...abc..., where ABC... denotes the output bra modes and abc... denotes the input ket modes corresponding to modes.
stream – Provide the CUDA stream to use for applying the tensor operators (this is used to copy the operands to the GPU if they are provided on the CPU). Acceptable inputs include cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream will be used.
- Returns
An integer channel_id specifying the location of the general channel.
Notes
This method requires the input channel to be trace-preserving. Supplying a non-trace-preserving channel may lead to unexpected results.
As of cuTensorNet v2.7.0, this method only supports MPS simulation configured with gauge_option="free" (the default).
For MPS simulation, the size of modes shall be restricted to no larger than 2 (two-body operator).
The channel_id cannot be used to update the channel using NetworkState.update_tensor_operator().
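For illustration, the following sketch applies a single-qubit amplitude-damping channel to an MPS-simulated state. The Kraus operators and the damping rate are illustrative values; note that K0†K0 + K1†K1 = I, so the channel is trace-preserving as required:
>>> import numpy as np
>>> from cuquantum.tensornet.experimental import NetworkState, MPSConfig
>>> gamma = 0.1  # illustrative damping rate
>>> k0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype='complex128')
>>> k1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype='complex128')
>>> state = NetworkState((2, 2), config=MPSConfig())  # default gauge_option="free"
>>> channel_id = state.apply_general_tensor_channel((0,), (k0, k1))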
- apply_mpo(modes, mpo_tensors, *, immutable=False, adjoint=False, unitary=False, stream=None)[source]¶
Apply an MPO operator specified by mpo_tensors and modes to the network state.
- Parameters
modes – A sequence of integers specifying each mode that the MPO acts on. If state_labels has been provided during initialization, modes can also be provided as a sequence of labels.
mpo_tensors – A sequence of tensors (ndarray-like objects) for each MPO operand. The currently supported types are numpy.ndarray, cupy.ndarray, and torch.Tensor. The modes of each operand are expected to follow the order pknb, where p denotes the mode connecting to the previous MPO tensor, n denotes the mode connecting to the next MPO tensor, k denotes the ket mode and b denotes the bra mode. Note that currently only MPO with open boundary condition is supported, therefore the p and n modes should not be present in the first and last MPO tensor, respectively. Note that the relative order of bra and ket modes here differs from that of operand in apply_tensor_operator().
immutable – Whether the full MPO is immutable (default False).
adjoint – Whether the full MPO should be applied in its adjoint form (default False).
unitary – Whether the full MPO is unitary (default False).
stream – Provide the CUDA stream to use for appending the MPO (this is used to copy the operands to the GPU if they are provided on the CPU). Acceptable inputs include cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream will be used.
- Returns
An integer network_id specifying the location of the MPO.
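For illustration, the following sketch applies a trivial two-site MPO (an identity with bond dimension 1, chosen only to show the expected mode ordering) to qubits 0 and 1:
>>> import numpy as np
>>> from cuquantum.tensornet.experimental import NetworkState
>>> state = NetworkState((2, 2, 2))
>>> # first MPO tensor carries modes (k, n, b); last MPO tensor carries modes (p, k, b)
>>> t_first = np.eye(2, dtype='complex128').reshape(2, 1, 2)
>>> t_last = np.eye(2, dtype='complex128').reshape(1, 2, 2)
>>> network_id = state.apply_mpo((0, 1), (t_first, t_last), unitary=True)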
- apply_network_operator(network_operator, *, immutable=False, adjoint=False, unitary=False)[source]¶
Apply a network operator to the network state.
- Parameters
network_operator – A NetworkOperator object for the input network operator. It must contain only one MPO term or one tensor product term.
immutable – Whether the network operator is immutable (default False).
adjoint – Whether the network operator should be applied in its adjoint form (default False).
unitary – Whether the network operator is unitary (default False).
- Returns
An integer network_id specifying the location of the network operator.
- apply_tensor_operator(modes, operand, *, control_modes=None, control_values=None, immutable=False, adjoint=False, unitary=False, stream=None)[source]¶
Apply a tensor operator to the network state.
- Parameters
modes – A sequence of integers denoting the modes that the tensor operator acts on. If state_labels has been provided during initialization, modes can also be provided as a sequence of labels.
operand – An ndarray-like object for the tensor operator. The modes of the operand are expected to be ordered as ABC...abc..., where ABC... denotes the output bra modes and abc... denotes the input ket modes corresponding to modes.
control_modes – A sequence of integers denoting the modes where the control operation acts (default: no control modes). If state_labels has been provided during initialization, control_modes can also be provided as a sequence of labels.
control_values – A sequence of integers specifying the control values corresponding to control_modes. If control_modes are specified and control_values are not provided, the control values for all control modes will be set to 1.
immutable – Whether the operator is immutable (default False).
adjoint – Whether the operator should be applied in its adjoint form (default False).
unitary – Whether the operator is unitary (default False).
stream – Provide the CUDA stream to use for applying the tensor operator (this is used to copy the operands to the GPU if they are provided on the CPU). Acceptable inputs include cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream will be used.
- Returns
An integer tensor_id specifying the location of the input operator.
Notes
For MPS simulation, the size of modes shall be restricted to no larger than 2 (two-body operator).
For controlled tensor operators, this method currently only supports immutable operators.
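For illustration, the following sketch applies a Pauli-X on qubit 2 controlled on qubit 0; per the note above, controlled operators must be applied as immutable:
>>> import numpy as np
>>> from cuquantum.tensornet.experimental import NetworkState
>>> state = NetworkState((2, 2, 2))
>>> x = np.array([[0, 1], [1, 0]], dtype='complex128')
>>> # X acts on qubit 2 when qubit 0 is in state 1
>>> tensor_id = state.apply_tensor_operator((2,), x, control_modes=(0,), control_values=(1,), immutable=True, unitary=True)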
- apply_unitary_tensor_channel(modes, operands, probabilities, *, stream=None)[source]¶
Apply a unitary tensor channel to the network state. For an error channel with non-unitary operators, see NetworkState.apply_general_tensor_channel().
- Parameters
modes – A sequence of integers denoting the modes that the channel acts on. If state_labels has been provided during initialization, modes can also be provided as a sequence of labels.
operands – A sequence of ndarray-like objects for the unitary tensor operators defining the channel. The modes of each operand are expected to be ordered as ABC...abc..., where ABC... denotes the output bra modes and abc... denotes the input ket modes corresponding to modes.
probabilities – A sequence of positive floats representing the probabilities of each operand.
stream – Provide the CUDA stream to use for applying the tensor operators (this is used to copy the operands to the GPU if they are provided on the CPU). Acceptable inputs include cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream will be used.
- Returns
An integer channel_id specifying the location of the unitary channel.
Notes
For MPS simulation, the size of modes shall be restricted to no larger than 2 (two-body operator).
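For illustration, the following sketch applies a single-qubit bit-flip channel (identity with probability 0.95, Pauli-X with probability 0.05; the probabilities are illustrative):
>>> import numpy as np
>>> from cuquantum.tensornet.experimental import NetworkState
>>> state = NetworkState((2, 2))
>>> identity = np.eye(2, dtype='complex128')
>>> x = np.array([[0, 1], [1, 0]], dtype='complex128')
>>> channel_id = state.apply_unitary_tensor_channel((0,), (identity, x), (0.95, 0.05))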
- compute_amplitude(bitstring, *, return_norm=False, stream=None, release_workspace=False)[source]¶
Compute the probability amplitude of a bitstring.
- Parameters
bitstring – A sequence of integers specifying the desired state for each state dimension; a string of digits (e.g. '0000', as in the example above) is accepted as well.
return_norm – If true, the squared norm of the state will also be returned.
stream – Provide the CUDA stream to use for the computation. Acceptable inputs include
cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream will be used.
release_workspace – A value of True specifies that the state object should release workspace memory back to the package memory pool on function return, while a value of False specifies that the state object should retain the memory. This option may be set to True if the application performs other operations that consume a lot of memory between successive calls to the (same or different) execution API such as compute_sampling(), compute_reduced_density_matrix(), compute_amplitude(), compute_batched_amplitudes(), or compute_expectation(), but incurs a small overhead due to obtaining and releasing workspace memory from and to the package memory pool on every call. The default is False.
- Returns
If return_norm is False, a scalar for the bitstring amplitude; otherwise, a 2-tuple consisting of the bitstring amplitude and a scalar for the squared norm of the state, i.e., the inner product of the bra and ket state.
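For illustration, the following sketch prepares (|00> + |10>)/sqrt(2) and queries a single amplitude along with the squared norm:
>>> import numpy as np
>>> from cuquantum.tensornet.experimental import NetworkState
>>> state = NetworkState((2, 2))
>>> h = np.asarray([[1, 1], [1, -1]], dtype='complex128') / np.sqrt(2)
>>> tensor_id = state.apply_tensor_operator((0,), h, unitary=True)
>>> amp, norm = state.compute_amplitude('10', return_norm=True)  # amp ~ 0.7071, norm ~ 1.0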
- compute_batched_amplitudes(fixed, *, return_norm=False, stream=None, release_workspace=False)[source]¶
Compute the batched amplitudes for a given slice.
- Parameters
fixed – A dictionary mapping a subset of state dimensions to the corresponding fixed states. If state_labels has been provided during initialization, fixed can also be provided as a dictionary mapping a subset of labels to the corresponding fixed states.
return_norm – If true, the squared norm of the state will also be returned.
stream – Provide the CUDA stream to use for the computation. Acceptable inputs include
cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream will be used.
release_workspace – A value of True specifies that the state object should release workspace memory back to the package memory pool on function return, while a value of False specifies that the state object should retain the memory. This option may be set to True if the application performs other operations that consume a lot of memory between successive calls to the (same or different) execution API such as compute_sampling(), compute_reduced_density_matrix(), compute_amplitude(), compute_batched_amplitudes(), or compute_expectation(), but incurs a small overhead due to obtaining and releasing workspace memory from and to the package memory pool on every call. The default is False.
- Returns
If return_norm is False, an ndarray-like object for the batched amplitudes. The package and storage location of the ndarray will be the same as the operands provided in apply_tensor_operator(), apply_mpo() and set_initial_mps(). Otherwise, a 2-tuple consisting of the batched amplitudes and a scalar for the squared norm of the state, i.e., the inner product of the bra and ket state.
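For illustration, the following sketch fixes qubit 0 to state 1 and computes the amplitudes over the remaining two free qubits:
>>> import numpy as np
>>> from cuquantum.tensornet.experimental import NetworkState
>>> state = NetworkState((2, 2, 2))
>>> h = np.asarray([[1, 1], [1, -1]], dtype='complex128') / np.sqrt(2)
>>> for q in range(3):
...     _ = state.apply_tensor_operator((q,), h, unitary=True)
>>> batched = state.compute_batched_amplitudes({0: 1})  # amplitudes over the two free qubits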
- compute_expectation(operators, *, return_norm=False, stream=None, release_workspace=False)[source]¶
Compute the expectation value (not normalized) for the given tensor network operator.
- Parameters
operators –
The NetworkOperator object to compute the expectation value on. If the underlying state dimensions are all 2 (qubits), it can also be:
A single Pauli string specifying the Pauli operator for each qubit.
A dictionary mapping each single Pauli string to its corresponding coefficient.
return_norm – If true, the squared norm of the state will also be returned.
stream – Provide the CUDA stream to use for the computation. Acceptable inputs include
cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream will be used.
release_workspace – A value of True specifies that the state object should release workspace memory back to the package memory pool on function return, while a value of False specifies that the state object should retain the memory. This option may be set to True if the application performs other operations that consume a lot of memory between successive calls to the (same or different) execution API such as compute_sampling(), compute_reduced_density_matrix(), compute_amplitude(), compute_batched_amplitudes(), or compute_expectation(), but incurs a small overhead due to obtaining and releasing workspace memory from and to the package memory pool on every call. The default is False.
- Returns
If return_norm is False, a scalar for the total expectation value; otherwise, a 2-tuple consisting of the total expectation value and a scalar for the squared norm of the state, i.e., the inner product of the bra and ket state.
Note
If the user wishes to compute the expectation value of the same operator multiple times, it is recommended to explicitly provide a NetworkOperator object for optimal performance. For detailed examples, please see our variational expectation example.
For Pauli operator expectation value computations, this method does not take advantage of the lightcone simplification optimization. If the user wishes to compute the expectation value of a Pauli string operator containing many identities, consider either using the compute_reduced_density_matrix() method or explicitly constructing the NetworkOperator object with NetworkOperator.append_product() for optimal performance.
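For illustration, the following sketch prepares the |++> state and computes the (unnormalized) expectation value of XX together with the squared norm:
>>> import numpy as np
>>> from cuquantum.tensornet.experimental import NetworkState
>>> state = NetworkState((2, 2))
>>> h = np.asarray([[1, 1], [1, -1]], dtype='complex128') / np.sqrt(2)
>>> for q in range(2):
...     _ = state.apply_tensor_operator((q,), h, unitary=True)
>>> expec, norm = state.compute_expectation('XX', return_norm=True)  # both ~1.0 for |++>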
- compute_output_state(*, stream=None, release_workspace=False, release_operators=False)[source]¶
Compute the final output state for the underlying network state object. This method is currently only valid for MPS-based simulation.
- Parameters
stream – Provide the CUDA stream to use for the computation. Acceptable inputs include
cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream will be used.
release_workspace – A value of True specifies that the state object should release workspace memory back to the package memory pool on function return, while a value of False specifies that the state object should retain the memory. This option may be set to True if the application performs other operations that consume a lot of memory between successive calls to the (same or different) execution API such as compute_sampling(), compute_reduced_density_matrix(), compute_amplitude(), compute_batched_amplitudes(), or compute_expectation(), but incurs a small overhead due to obtaining and releasing workspace memory from and to the package memory pool on every call. The default is False.
release_operators – A value of True will release the reference to all underlying tensor operators and NetworkOperator objects. The previous tensor_id values returned by apply_tensor_operator(), apply_network_operator() and apply_mpo() will become invalid. If the output state has already been computed, which is an intermediate step in other compute_xxx methods, the output state will be cached and returned directly. Passing release_operators=True can thus be used to reset the underlying NetworkState object.
- Returns
When MPS simulation is specified using the config argument during object initialization, a sequence of operands representing the underlying MPS state will be returned. The modes of each MPS operand are expected to follow the order pkn, where p denotes the mode connecting to the previous MPS tensor, k denotes the ket mode and n denotes the mode connecting to the next MPS tensor. Note that the p and n modes are not present in the first and last MPS tensor, respectively.
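For illustration, the following sketch runs an MPS simulation and retrieves the per-site MPS tensors of the output state:
>>> import numpy as np
>>> from cuquantum.tensornet.experimental import NetworkState, MPSConfig
>>> state = NetworkState((2, 2, 2), config=MPSConfig())
>>> h = np.asarray([[1, 1], [1, -1]], dtype='complex128') / np.sqrt(2)
>>> _ = state.apply_tensor_operator((0,), h, unitary=True)
>>> mps_tensors = state.compute_output_state()  # one tensor per site, in pkn mode order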
- compute_reduced_density_matrix(where, *, fixed=mappingproxy({}), stream=None, release_workspace=False)[source]¶
Compute the reduced density matrix for the given marginal and fixed modes.
- Parameters
where – A sequence of integers specifying the target modes. If state_labels has been provided during initialization, where can also be provided as a sequence of labels.
fixed – A dictionary mapping a subset of fixed modes to their fixed values. If state_labels has been provided during initialization, fixed can also be provided as a dictionary mapping labels to the corresponding fixed values.
stream – Provide the CUDA stream to use for the computation. Acceptable inputs include cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream will be used.
release_workspace – A value of True specifies that the state object should release workspace memory back to the package memory pool on function return, while a value of False specifies that the state object should retain the memory. This option may be set to True if the application performs other operations that consume a lot of memory between successive calls to the (same or different) execution API such as compute_sampling(), compute_reduced_density_matrix(), compute_amplitude(), compute_batched_amplitudes(), or compute_expectation(), but incurs a small overhead due to obtaining and releasing workspace memory from and to the package memory pool on every call. The default is False.
- Returns
An ndarray-like object for the reduced density matrix. The modes of the tensor follow the order AB...ab..., where AB... and ab... represent the corresponding output and input marginal modes, respectively.
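For illustration, the following sketch computes the reduced density matrix of qubit 1 with qubit 0 fixed to state 0; following the AB...ab... mode convention, the result has shape (2, 2):
>>> import numpy as np
>>> from cuquantum.tensornet.experimental import NetworkState
>>> state = NetworkState((2, 2, 2))
>>> h = np.asarray([[1, 1], [1, -1]], dtype='complex128') / np.sqrt(2)
>>> _ = state.apply_tensor_operator((1,), h, unitary=True)
>>> rdm = state.compute_reduced_density_matrix((1,), fixed={0: 0})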
- compute_sampling(nshots, *, modes=None, seed=None, stream=None, release_workspace=False)[source]¶
Perform sampling on the given modes.
- Parameters
nshots – The number of samples to collect.
modes – The target modes to sample. If not provided, all modes will be sampled. If state_labels has been provided during initialization, modes can also be provided as a sequence of labels.
seed – A positive integer specifying the random seed to use for generating the samples. If not provided, the generator will continue from the previous seed state, or from an unseeded state if no seed was previously set.
stream – Provide the CUDA stream to use for the computation. Acceptable inputs include
cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream will be used.
release_workspace – A value of True specifies that the state object should release workspace memory back to the package memory pool on function return, while a value of False specifies that the state object should retain the memory. This option may be set to True if the application performs other operations that consume a lot of memory between successive calls to the (same or different) execution API such as compute_sampling(), compute_reduced_density_matrix(), compute_amplitude(), compute_batched_amplitudes(), or compute_expectation(), but incurs a small overhead due to obtaining and releasing workspace memory from and to the package memory pool on every call. The default is False.
- Returns
A dictionary mapping the bitstring to the corresponding count.
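For illustration, the following sketch draws 100 samples from qubits 0 and 1 only, with a fixed seed for reproducibility:
>>> import numpy as np
>>> from cuquantum.tensornet.experimental import NetworkState
>>> state = NetworkState((2, 2, 2))
>>> h = np.asarray([[1, 1], [1, -1]], dtype='complex128') / np.sqrt(2)
>>> _ = state.apply_tensor_operator((0,), h, unitary=True)
>>> samples = state.compute_sampling(100, modes=(0, 1), seed=42)  # maps each sampled bitstring to its count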
- compute_state_vector(*, return_norm=False, stream=None, release_workspace=False)[source]¶
Compute the state vector.
- Parameters
return_norm – If true, the squared norm of the state will also be returned.
stream – Provide the CUDA stream to use for the computation. Acceptable inputs include
cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream will be used.
release_workspace – A value of True specifies that the state object should release workspace memory back to the package memory pool on function return, while a value of False specifies that the state object should retain the memory. This option may be set to True if the application performs other operations that consume a lot of memory between successive calls to the (same or different) execution API such as compute_sampling(), compute_reduced_density_matrix(), compute_amplitude(), compute_batched_amplitudes(), or compute_expectation(), but incurs a small overhead due to obtaining and releasing workspace memory from and to the package memory pool on every call. The default is False.
- Returns
If return_norm is False, an ndarray-like object for the state vector. The package and storage location of the ndarray will be the same as the operands provided in apply_tensor_operator(), apply_mpo() and set_initial_mps(). Otherwise, a 2-tuple consisting of the state vector and a scalar for the squared norm of the state, i.e., the inner product of the bra and ket state.
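For illustration, the following sketch computes the full state vector; since the operands were provided as NumPy arrays, the result is returned as a NumPy array as well:
>>> import numpy as np
>>> from cuquantum.tensornet.experimental import NetworkState
>>> state = NetworkState((2, 2))
>>> h = np.asarray([[1, 1], [1, -1]], dtype='complex128') / np.sqrt(2)
>>> _ = state.apply_tensor_operator((0,), h, unitary=True)
>>> sv = state.compute_state_vector()  # amplitudes of (|00> + |10>)/sqrt(2)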
- free()[source]¶
Free state resources.
It is recommended that the NetworkState object be used as a context manager; if that is not possible, this method must be called explicitly to ensure that the state resources are properly cleaned up.
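For illustration, the following sketch uses the NetworkState as a context manager so that resources are released automatically on exit, without an explicit free() call:
>>> import numpy as np
>>> from cuquantum.tensornet.experimental import NetworkState
>>> h = np.asarray([[1, 1], [1, -1]], dtype='complex128') / np.sqrt(2)
>>> with NetworkState((2, 2)) as state:
...     _ = state.apply_tensor_operator((0,), h, unitary=True)
...     amp = state.compute_amplitude('00')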
- classmethod from_circuit(circuit, *, dtype='complex128', backend='cupy', config=None, options=None, stream=None)[source]¶
Create a state object from the given circuit.
- Parameters
circuit – A fully parameterized cirq.Circuit or qiskit.QuantumCircuit object.
dtype –
A string specifying the data type for the tensor network. The following data types are currently supported:
'complex64', 'complex128' (default)
backend – The backend for all output tensor operands. If not specified, cupy is used.
config –
The simulation configuration for the state. It can be a TNConfig object for contraction-based tensor network simulation, an MPSConfig object for MPS-based simulation, or a dict containing the parameters for either constructor.
options – Specify options for the computation as a NetworkOptions object. Alternatively, a dict containing the parameters for the NetworkOptions constructor can also be provided. If not specified, the value will be set to the default-constructed NetworkOptions object.
stream – Provide the CUDA stream to use for state initialization, which is needed for stream-ordered operations such as allocating memory. Acceptable inputs include cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream will be used.
Note
When parsing gates from the circuit object, all gate operands are assumed to be unitary. In the rare case where the target circuit object contains customized non-unitary gates, users are encouraged to use apply_tensor_operator() to construct the NetworkState object.
- classmethod from_converter(converter, *, config=None, options=None, stream=None)[source]¶
Create a NetworkState object from the given cuquantum.CircuitToEinsum converter.
- Parameters
converter – A cuquantum.CircuitToEinsum object.
config –
The simulation configuration for the state simulator. It can be a TNConfig object for contraction-based tensor network simulation, an MPSConfig object for MPS-based simulation, or a dict containing the parameters for either constructor.
options – Specify options for the state computation as a NetworkOptions object. Alternatively, a dict containing the parameters for the NetworkOptions constructor can also be provided. If not specified, the value will be set to the default-constructed NetworkOptions object.
stream – Provide the CUDA stream to use for state initialization, which is needed for stream-ordered operations such as allocating memory. Acceptable inputs include cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream will be used.
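For illustration, the following sketch mirrors the from_circuit() example above but routes the circuit through a cuquantum.CircuitToEinsum converter first:
>>> import cirq
>>> from cuquantum import CircuitToEinsum
>>> from cuquantum.tensornet.experimental import NetworkState, TNConfig
>>> circuit = cirq.testing.random_circuit(4, 4, 0.9, random_state=2024)
>>> converter = CircuitToEinsum(circuit, dtype='complex128')
>>> state = NetworkState.from_converter(converter, config=TNConfig(num_hyper_samples=4))
>>> amplitude = state.compute_amplitude('0000')
>>> state.free()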
- set_initial_mps(mps_tensors, *, stream=None)[source]¶
Set the initial state to a non-vacuum state in the MPS form.
- Parameters
mps_tensors – A sequence of tensors (ndarray-like objects) for each MPS operand. The currently supported types are numpy.ndarray, cupy.ndarray, and torch.Tensor. The modes of each operand are expected to follow the order pkn, where p denotes the mode connecting to the previous MPS tensor, k denotes the ket mode and n denotes the mode connecting to the next MPS tensor. Note that this method currently only supports open boundary condition, so the p and n modes should be dropped in the first and last MPS tensor, respectively.
stream – Provide the CUDA stream to use for setting the initial state to the specified MPS (this is used to copy the operands to the GPU if they are provided on the CPU). Acceptable inputs include cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream will be used.
Note
This API simply sets the initial state to the provided MPS and does not alter the nature of the simulation method, which is specified via the config parameter during initialization.
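For illustration, the following sketch sets the initial three-qubit state to the product state |110> using bond-dimension-1 MPS tensors in the pkn mode order described above:
>>> import numpy as np
>>> from cuquantum.tensornet.experimental import NetworkState
>>> state = NetworkState((2, 2, 2))
>>> one = np.array([0, 1], dtype='complex128')
>>> zero = np.array([1, 0], dtype='complex128')
>>> # boundary tensors drop the p / n bond modes, respectively
>>> state.set_initial_mps([one.reshape(2, 1), one.reshape(1, 2, 1), zero.reshape(1, 2)])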
- update_tensor_operator(tensor_id, operand, *, unitary=False, stream=None)[source]¶
Update a tensor operator in the state.
- Parameters
tensor_id – An integer specifying the tensor id assigned in NetworkState.apply_tensor_operator().
operand – An ndarray-like object for the tensor operator. The operand is expected to follow the same mode ordering, data type and strides as the original operand.
unitary – Whether the operator is unitary (default False).
stream – Provide the CUDA stream to use for updating the tensor operand (this is used to copy the operands to the GPU if they are provided on the CPU). Acceptable inputs include cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream will be used.
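For illustration, the following sketch applies a mutable single-qubit rotation and later swaps in a new rotation angle without rebuilding the state (the rx helper is defined here for the example):
>>> import numpy as np
>>> from cuquantum.tensornet.experimental import NetworkState
>>> def rx(theta):
...     c, s = np.cos(theta / 2), -1j * np.sin(theta / 2)
...     return np.array([[c, s], [s, c]], dtype='complex128')
>>> state = NetworkState((2, 2))
>>> tensor_id = state.apply_tensor_operator((0,), rx(0.1), unitary=True)  # mutable by default
>>> state.update_tensor_operator(tensor_id, rx(0.5), unitary=True)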