cuquantum.NetworkOptions¶
- class cuquantum.NetworkOptions(compute_type: Optional[int] = None, device_id: Optional[int] = None, handle: Optional[int] = None, logger: Optional[logging.Logger] = None, memory_limit: Optional[Union[int, str]] = '80%', blocking: Literal[True, 'auto'] = True, allocator: Optional[cuquantum.cutensornet.memory.BaseCUDAMemoryManager] = None)¶
A data class for providing options to the cuquantum.Network object.
- compute_type¶
CUDA compute type. A suitable compute type will be selected if not specified.
- Type
Optional[int]
- device_id¶
CUDA device ordinal (used if the tensor network resides on the CPU). Device 0 will be used if not specified.
- Type
Optional[int]
- handle¶
cuTensorNet library handle. A handle will be created if one is not provided.
- Type
Optional[int]
- logger¶
Python Logger object. The root logger will be used if a logger object is not provided.
- Type
Optional[logging.Logger]
- memory_limit¶
Maximum memory available to cuTensorNet. It can be specified as a value (with an optional suffix like K[iB], M[iB], or G[iB]) or as a percentage. The default is 80% of the device memory.
- Type
Optional[Union[int, str]]
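To make the accepted formats concrete, here is a small, purely illustrative sketch of how a `memory_limit`-style value could be resolved to a byte count. The helper name `resolve_memory_limit` is hypothetical and not part of the cuQuantum API; it only mirrors the value forms described above (plain integer, suffixed string, or percentage).

```python
# Hypothetical helper (NOT part of cuQuantum) showing how memory_limit
# values of the documented forms might be interpreted.

_SUFFIXES = {"k": 2**10, "m": 2**20, "g": 2**30}

def resolve_memory_limit(limit, total_device_memory):
    """Resolve a memory_limit-style value to a byte count."""
    if isinstance(limit, int):           # plain byte count
        return limit
    s = str(limit).strip().lower()
    if s.endswith("%"):                  # percentage of device memory
        return int(total_device_memory * float(s[:-1]) / 100)
    for suffix, factor in _SUFFIXES.items():
        for spelled in (suffix + "ib", suffix):   # e.g. "gib" or "g"
            if s.endswith(spelled):
                return int(float(s[: -len(spelled)]) * factor)
    return int(s)                        # bare numeric string


resolve_memory_limit("80%", 32 * 2**30)  # 80% of a 32 GiB device
resolve_memory_limit("2MiB", 0)          # suffixed value, 2 * 2**20 bytes
```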
- blocking¶
A flag specifying the behavior of the execution functions and methods, such as Network.autotune() and Network.contract(). When blocking is True, these methods do not return until the operation is complete. When blocking is "auto", the methods return immediately when the input tensors are on the GPU. The execution methods always block when the input tensors are on the CPU. The default is True.
- Type
Literal[True, 'auto']
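The rules above can be condensed into a small decision table. The sketch below is illustrative only (the function `call_blocks` is not a cuQuantum API); it restates the documented semantics: CPU operands always block, and GPU operands block unless blocking is "auto".

```python
# Illustrative restatement (NOT cuQuantum code) of the documented
# blocking semantics for execution methods.

def call_blocks(blocking, operands_on_gpu):
    """Return True if an execution method would block before returning."""
    if not operands_on_gpu:     # CPU operands: the methods always block
        return True
    return blocking is True     # GPU operands: block only when blocking=True


call_blocks(True, True)     # blocking=True always blocks
call_blocks("auto", True)   # "auto" + GPU operands returns immediately
```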
- allocator¶
An object that supports the BaseCUDAMemoryManager protocol, used to draw device memory. If an allocator is not provided, a memory allocator from the library package will be used (torch.cuda.caching_allocator_alloc() for PyTorch operands, cupy.cuda.alloc() otherwise).
- Type
Optional[cuquantum.cutensornet.memory.BaseCUDAMemoryManager]
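A minimal sketch of what a custom allocator might look like. This assumes (verify against the cuQuantum reference) that the BaseCUDAMemoryManager protocol requires a single memalloc(size) method returning an object that owns the allocation and exposes it via a device_ptr attribute. The pool below uses fake pointer values purely for illustration; a real implementation would obtain device memory through CUDA (e.g. via CuPy or PyTorch) and release it when the returned object is destroyed.

```python
# Illustrative allocator sketch; class names are hypothetical, and the
# protocol shape (memalloc(size) -> object with device_ptr) is an
# assumption to be checked against the cuQuantum documentation.

class _Allocation:
    """Owns one allocation; a real version would free device memory on release."""
    def __init__(self, ptr, size):
        self.device_ptr = ptr   # raw device pointer expected by the consumer
        self.size = size

class LoggingAllocator:
    """Hypothetical allocator that records every request it serves."""
    def __init__(self):
        self.requests = []
        self._next_ptr = 0x1000  # fake pointer values, illustration only

    def memalloc(self, size):
        self.requests.append(size)
        ptr = self._next_ptr
        self._next_ptr += size
        return _Allocation(ptr, size)
```

Such an object would then be passed as NetworkOptions(allocator=LoggingAllocator()), replacing the package-selected default described above.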