DecompositionOptions#
class cuquantum.tensornet.tensor.DecompositionOptions(
    compute_type: int | None = None,
    device_id: int | None = None,
    handle: int | None = None,
    logger: Logger | None = None,
    memory_limit: int | str | None = '80%',
    blocking: Literal[True, 'auto'] = True,
    allocator: BaseCUDAMemoryManager | None = None,
)
A data class for providing options to the cuquantum.tensornet.Network object.

- compute_type#
CUDA compute type. A suitable compute type will be selected if not specified.
- Type:
int | None
- device_id#
CUDA device ordinal (used if the tensor network resides on the CPU). Device 0 will be used if not specified.
- Type:
int | None
- handle#
cuTensorNet library handle. A handle will be created if one is not provided.
- Type:
int | None
- logger#
Python Logger object. The root logger will be used if a logger object is not provided.
- Type:
Logger | None
- memory_limit#
Maximum memory available to cuTensorNet. It can be specified as a value (with optional suffix like K[iB], M[iB], G[iB]) or as a percentage. The default is 80% of the device memory.
- Type:
int | str | None
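To make the accepted formats concrete, here is a minimal sketch of how such a memory-limit setting could be resolved to a byte count. `parse_memory_limit` is a hypothetical helper written for illustration only; it is not part of the cuQuantum API.

```python
# Hypothetical helper: resolve a memory limit given as a byte count,
# a percentage string, or a value with a K[iB]/M[iB]/G[iB] suffix.
def parse_memory_limit(limit, total_bytes):
    """Return the limit in bytes, given the total device memory."""
    if isinstance(limit, int):              # plain byte count
        return limit
    s = limit.strip().lower()
    if s.endswith("%"):                     # fraction of total device memory
        return int(total_bytes * float(s[:-1]) / 100)
    units = {"k": 1024, "kib": 1024,
             "m": 1024**2, "mib": 1024**2,
             "g": 1024**3, "gib": 1024**3}
    # Check longer suffixes ("gib") before shorter ones ("g").
    for suffix in sorted(units, key=len, reverse=True):
        if s.endswith(suffix):
            return int(float(s[:-len(suffix)]) * units[suffix])
    return int(s)                           # bare numeric string

total = 8 * 1024**3                         # pretend the device has 8 GiB
print(parse_memory_limit("50%", 1000))      # 500
print(parse_memory_limit("2GiB", total))    # 2147483648
```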
- blocking#
A flag specifying the behavior of the execution functions and methods, such as Network.autotune() and Network.contract(). When blocking is True, these methods do not return until the operation is complete. When blocking is "auto", the methods return immediately when the input tensors are on the GPU. The execution methods always block when the input tensors are on the CPU. The default is True.
- Type:
Literal[True, 'auto']
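The decision logic described above can be summarized in a short sketch. `should_block` is a hypothetical helper for illustration, not a cuQuantum function.

```python
# Sketch of the blocking semantics: an execution method waits for
# completion when blocking is True, or when any operand lives on the CPU;
# with blocking="auto" and GPU operands it returns immediately.
def should_block(blocking, operands_on_gpu):
    """Return True if an execution method must wait for completion."""
    if not operands_on_gpu:
        return True        # CPU operands: always block
    if blocking is True:
        return True        # explicit blocking mode
    return False           # blocking == "auto" with GPU operands

print(should_block(True, operands_on_gpu=True))    # True
print(should_block("auto", operands_on_gpu=True))  # False
print(should_block("auto", operands_on_gpu=False)) # True
```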
- allocator#
An object that supports the BaseCUDAMemoryManager protocol, used to draw device memory. If an allocator is not provided, a memory allocator from the library package will be used (torch.cuda.caching_allocator_alloc() for PyTorch operands, cupy.cuda.alloc() otherwise).
- Type:
BaseCUDAMemoryManager | None