cuquantum.densitymat.WorkStream

class cuquantum.densitymat.WorkStream(device_id: Optional[int] = None, stream: Optional[cupy.cuda.stream.Stream] = None, memory_limit: Optional[Union[int, str]] = '80%', allocator: Optional[cuquantum.cutensornet.memory.BaseCUDAMemoryManager] = <class 'cuquantum.cutensornet.memory._CupyCUDAMemoryManager'>, compute_type: Optional[str] = None, logger: Optional[logging.Logger] = None)[source]

A data class containing the library handle, stream, workspace and configuration parameters.

This object handles workspace allocation and synchronization automatically, and also provides a method to release the workspace buffer. The size of the workspace buffer is determined by either the memory_limit attribute or the maximum workspace size required among all objects using this WorkStream.
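
For instance, a WorkStream with default settings (device 0, the current stream, and the default 80% memory limit) can be created as follows:

>>> from cuquantum.densitymat import WorkStream
>>> ctx = WorkStream()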

device_id

CUDA device ordinal. Device 0 will be used if not specified.

Type

Optional[int]

stream

CUDA stream. The current stream will be used if not specified.

Type

Optional[cupy.cuda.stream.Stream]

memory_limit

Maximum device memory available to the workspace. It can be specified as an integer (in bytes), as a string with an optional unit suffix (K[iB], M[iB], G[iB]), or as a percentage of the device memory. The default is 80% of the device memory.

Type

Optional[Union[int, str]]
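
For illustration, the limit can be passed in any of the documented forms (the specific values shown here are arbitrary):

>>> ctx = WorkStream(memory_limit="4 GiB")  # absolute limit with a unit suffix
>>> ctx = WorkStream(memory_limit="50%")    # percentage of the device memory
>>> ctx = WorkStream(memory_limit=2**30)    # plain integer, interpreted as bytes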

allocator[source]

An object that supports the BaseCUDAMemoryManager protocol, used to draw device memory. If an allocator is not provided, a memory allocator from the library package will be used (torch.cuda.caching_allocator_alloc() for PyTorch operands, cupy.cuda.alloc() otherwise).

Type

Optional[cuquantum.cutensornet.memory.BaseCUDAMemoryManager]
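
As a sketch of a custom allocator, the following wraps PyTorch's caching allocator behind the BaseCUDAMemoryManager protocol; the MemoryPointer constructor signature and the class name TorchAllocator are assumptions here, not part of this API:

>>> import torch
>>> from cuquantum.cutensornet.memory import BaseCUDAMemoryManager, MemoryPointer
>>> class TorchAllocator(BaseCUDAMemoryManager):  # hypothetical helper
...     def __init__(self, device_id):
...         self.device_id = device_id
...     def memalloc(self, size):
...         # draw memory from PyTorch's caching allocator on the given device
...         ptr = torch.cuda.caching_allocator_alloc(size, self.device_id)
...         # release it again once the library is done with the buffer
...         return MemoryPointer(ptr, size, finalizer=lambda: torch.cuda.caching_allocator_delete(ptr))
...
>>> ctx = WorkStream(allocator=TorchAllocator(0))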

compute_type

CUDA compute type. A suitable compute type will be selected if not specified.

Type

cuquantum.ComputeType

logger

Python Logger object. The root logger will be used if a logger object is not provided.

Type

logging.Logger

workspace_info

A property attribute that stores a 2-tuple of ints representing the currently allocated and the anticipated workspace sizes, in bytes.
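
For example, the two sizes can be unpacked directly:

>>> allocated, anticipated = ctx.workspace_info  # both values are in bytes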

set_communicator(comm, provider='None') → None[source]

Register a communicator with the library. Currently, only mpi4py.MPI.Comm objects are supported, and the only supported provider is "MPI".

get_proc_rank() → int[source]

Return the process rank if a communicator was set previously via WorkStream.set_communicator().

get_num_ranks() → int[source]

Return the number of processes in the communicator that was set previously via WorkStream.set_communicator().

get_communicator()[source]

Return the communicator object if set previously via WorkStream.set_communicator().
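
A typical multi-process setup might look like the following sketch (the duplicated communicator and the round-robin device assignment are illustrative choices, not requirements):

>>> import cupy as cp
>>> from mpi4py import MPI
>>> comm = MPI.COMM_WORLD.Dup()
>>> ctx = WorkStream(device_id=comm.Get_rank() % cp.cuda.runtime.getDeviceCount())
>>> ctx.set_communicator(comm, provider="MPI")
>>> rank, num_ranks = ctx.get_proc_rank(), ctx.get_num_ranks()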

release_workspace(kind='SCRATCH') → None[source]

Release the workspace.

Note

  • Releasing the workspace both releases its workspace buffer and resets the maximum required size among the objects that use this WorkStream instance.

  • Objects which have previously been exposed to this WorkStream instance do not require explicit calls to their prepare methods after the workspace has been released.

  • Releasing the workspace buffer may be useful when intermediate computations do not involve the cuDensityMat API, or when the following computations require less workspace than the preceding ones (see the example after this note).

  • Objects can only interact with each other if they use the same WorkStream and cannot change the WorkStream they use.

  • Some objects require a WorkStream at creation (State, OperatorAction), while other objects require it only when their prepare method is called (Operator).

  • Some objects may acquire the WorkStream indirectly (Operator), while other objects always acquire it indirectly (OperatorTerm, DenseOperator, MultidiagonalOperator).
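
For example, the scratch buffer can be returned between unrelated computations:

>>> ctx.release_workspace()  # defaults to kind="SCRATCH"; previously prepared objects remain usable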

Attention

The compute_type argument is currently not used and will default to the data type.

Examples

>>> import cupy as cp
>>> from cuquantum.densitymat import WorkStream

To create a WorkStream on a new CUDA stream, we can do

>>> ctx = WorkStream(stream=cp.cuda.Stream())
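
The resulting WorkStream is then passed to other densitymat objects that consume it, e.g. when constructing a state. The constructor arguments shown below (Hilbert-space dimensions, batch size, data type) are a sketch of such usage rather than a definitive reference:

>>> from cuquantum.densitymat import DensePureState
>>> psi = DensePureState(ctx, (2, 2), 1, "complex128")  # a batch of one pure state on two qubits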