DensePureState#

class cuquantum.densitymat.DensePureState(ctx, hilbert_space_dims, batch_size, dtype)[source]#

Pure state in dense (state-vector) representation.

A storage buffer must be attached via the attach_storage() method or allocated via the allocate_storage() method. The required buffer size as well as information on the storage layout is available in the local_info attribute.

Parameters:
  • ctx – The execution context, which contains information on device ID, logging and blocking/non-blocking execution.

  • hilbert_space_dims – A tuple of the local Hilbert space dimensions.

  • batch_size – Batch dimension of the state.

  • dtype – Numeric data type of the state’s coefficients.

Examples

>>> import cupy as cp
>>> from cuquantum.densitymat import WorkStream, DensePureState

To create a DensePureState of batch size 1 with double-precision complex data type, we first initialize the state and then attach a storage buffer through the attach_storage() method:

>>> ctx = WorkStream(stream=cp.cuda.Stream())
>>> hilbert_space_dims = (2, 2, 2)
>>> rho = DensePureState(ctx, hilbert_space_dims, 1, "complex128")
>>> rho.attach_storage(cp.zeros(rho.storage_size, dtype=rho.dtype))

Methods

__init__(ctx: WorkStream, hilbert_space_dims: Sequence[int], batch_size: int, dtype: str) → None[source]#

Initialize a pure state in dense (state-vector) representation.

allocate_storage() → None[source]#

Allocate an appropriately sized data buffer and attach it to the state.

attach_storage(data: ndarray) → None[source]#

Attach a data buffer to the state.

Parameters:

data – The data buffer to be attached to the state.

Note

The data buffer needs to match the Hilbert space dimensions, batch size, and data type passed to the __init__ function. In addition, the data buffer needs to be Fortran-contiguous and located on the same device as the WorkStream passed to the __init__ function.

clone(buf: ndarray) → DenseState[source]#

Clone the state with a new data buffer.

Parameters:

buf – The data buffer to be attached to the new state.

Returns:

A state with same metadata as the original state and a new data buffer.

inner_product(other) → ndarray[source]#

Compute the inner product(s) between two states.

Parameters:

other – The other state to compute inner product with.

Returns:

An array of inner product(s) of length batch_size.

inplace_accumulate(other, factors: Number | Sequence | numpy.ndarray | cupy.ndarray = 1) → None[source]#

Accumulate another state, scaled by factor(s), into this state in place.

Parameters:
  • other – The other state to be scaled and accumulated into this state.

  • factors – Scalar factor(s) used in scaling other. If a single number is provided, scale all batched states in other by the same factor. Defaults to 1.

inplace_scale(factors: Number | Sequence | numpy.ndarray | cupy.ndarray) → None[source]#

Scale the state by scalar factor(s).

Parameters:

factors – Scalar factor(s) used in scaling the state. If a single number is provided, scale all batched states by the same factor.

norm() → ndarray[source]#

Compute the squared Frobenius norm(s) of the state.

Returns:

An array of squared Frobenius norm(s) of length batch_size.

trace() → ndarray[source]#

Compute the trace(s) of the state.

Returns:

An array of trace(s) of length batch_size.

view() → ndarray[source]#

Return a multidimensional view on the local slice of the storage buffer.

Note

When batch_size is 1, the last mode of the view will be the batch mode of dimension 1.

Attributes

local_info#

Local storage buffer dimensions as well as local mode offsets.

Returns:

Tuple[int]

Local storage buffer dimensions, with the last dimension being the batch dimension.

Tuple[int]

Local mode offsets.

storage#

The state’s local storage buffer.

Returns:

The state’s local storage buffer.

Return type:

cp.ndarray

storage_size#

Storage buffer size in number of elements of data type dtype.

Returns:

Storage buffer size in number of elements of data type dtype.

Return type:

int