cuquantum.NetworkOptions¶
- class cuquantum.NetworkOptions(compute_type: Optional[int] = None, device_id: Optional[int] = None, handle: Optional[int] = None, logger: Optional[logging.Logger] = None, memory_limit: Optional[Union[int, str]] = '80%', blocking: Literal[True, 'auto'] = True, allocator: Optional[cuquantum.cutensornet.memory.BaseCUDAMemoryManager] = None)¶
-
A data class for providing options to the cuquantum.Network object.
- compute_type¶
-
CUDA compute type. A suitable compute type will be selected if not specified.
- Type
-
Optional[int]
- device_id¶
-
CUDA device ordinal (used if the tensor network resides on the CPU). Device 0 will be used if not specified.
- Type
-
Optional[int]
- handle¶
-
cuTensorNet library handle. A handle will be created if one is not provided.
- Type
-
Optional[int]
- logger¶
-
Python Logger object. The root logger will be used if a logger object is not provided.
- Type
-
Optional[logging.Logger]
- memory_limit¶
-
Maximum memory available to cuTensorNet. It can be specified as a value (with an optional suffix such as K[iB], M[iB], or G[iB]) or as a percentage of the device's memory. The default is 80%.
- Type
-
Optional[Union[int, str]]
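The accepted memory_limit forms can be illustrated with a small parser sketch. The function below is hypothetical and not part of cuQuantum; it only shows how the documented forms (plain byte count, suffixed string, percentage) might map to a byte budget:

```python
import re

def parse_memory_limit(limit, total_bytes):
    """Hypothetical helper: map a memory_limit value to a byte count.

    Mirrors the documented forms: a plain int (bytes), a string with a
    K[iB]/M[iB]/G[iB] suffix, or a percentage string such as "80%".
    """
    if isinstance(limit, int):
        return limit  # already a byte count
    s = limit.strip().lower()
    if s.endswith("%"):
        # Percentage of the device's total memory.
        return int(total_bytes * float(s[:-1]) / 100)
    m = re.match(r"^([0-9.]+)\s*([kmg])(ib|b)?$", s)
    if m is None:
        raise ValueError(f"unrecognized memory limit: {limit!r}")
    value = float(m.group(1))
    shift = {"k": 10, "m": 20, "g": 30}[m.group(2)]  # binary suffixes
    return int(value * (1 << shift))

total = 16 * (1 << 30)                      # assume a 16 GiB device
print(parse_memory_limit("80%", total))     # the default: 80% of device memory
print(parse_memory_limit("2 GiB", total))
print(parse_memory_limit(1048576, total))
```

The real library performs its own validation against the chosen device's capacity; this sketch only demonstrates the notation.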
- blocking¶
-
A flag specifying the behavior of the execution methods Network.autotune() and Network.contract(). When blocking is True, these methods do not return until the operation is complete. When blocking is "auto", the methods return immediately when the input tensors are on the GPU. The execution methods always block when the input tensors are on the CPU. The default is True.
- Type
-
Literal[True, 'auto']
- allocator¶
-
An object that supports the BaseCUDAMemoryManager protocol, used to draw device memory. If an allocator is not provided, a memory allocator from the library package will be used (torch.cuda.caching_allocator_alloc() for PyTorch operands, cupy.cuda.alloc() otherwise).
- Type
-
Optional[cuquantum.cutensornet.memory.BaseCUDAMemoryManager]
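Taken together, the fields above form a plain data class where any field can be left at its default. A minimal stand-in dataclass (hypothetical: it mirrors only the documented fields and defaults, and is not the real cuquantum.NetworkOptions) shows the typical construction pattern:

```python
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class NetworkOptionsSketch:
    """Hypothetical stand-in mirroring the documented NetworkOptions fields."""
    compute_type: Optional[int] = None
    device_id: Optional[int] = None
    handle: Optional[int] = None
    logger: Optional[object] = None
    memory_limit: Optional[Union[int, str]] = "80%"
    blocking: Union[bool, str] = True
    allocator: Optional[object] = None

# Typical usage: override only the options you need; everything else
# keeps the documented default.
opts = NetworkOptionsSketch(device_id=1, memory_limit="4 GiB", blocking="auto")
print(opts.device_id, opts.memory_limit, opts.blocking)
print(NetworkOptionsSketch().memory_limit)  # the default memory limit
```

With the real class, the resulting options object is passed to the cuquantum.Network constructor via its options argument.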