ContractionOptions#

class nvmath.tensor.ContractionOptions(
compute_type: int | None = None,
logger: Logger | None = None,
blocking: Literal[True, 'auto'] = 'auto',
handle: int | None = None,
allocator: BaseCUDAMemoryManager | None = None,
memory_limit: int | str | None = '80%',
)[source]#

A data class for providing options to the BinaryContraction and TernaryContraction objects, or to the wrapper functions binary_contraction() and ternary_contraction().

compute_type#

The compute type to use for the contraction. See ComputeDesc for available compute types.

Type:

int | None

logger#

Python Logger object. The root logger will be used if a logger object is not provided.
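As an illustration, a dedicated logger can be configured with the standard logging module and then supplied through this option instead of falling back to the root logger. The logger name below is arbitrary; only the use of a standard logging.Logger is assumed from the description above.

```python
import logging

# Configure a dedicated logger for contraction diagnostics rather than
# relying on the root logger.
logger = logging.getLogger("contraction-demo")
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

# The configured logger would then be passed via the options, e.g.:
#   options = nvmath.tensor.ContractionOptions(logger=logger)
```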

Type:

logging.Logger

blocking#

A flag specifying the behavior of the execution functions and methods, such as binary_contraction() and TernaryContraction.execute(). When blocking is True, the execution methods do not return until the operation is complete. When blocking is "auto", the methods return immediately when the input tensor is on the GPU. The execution methods always block when the input tensor is on the CPU to ensure that the user doesn’t inadvertently use the result before it becomes available. The default is "auto".

Type:

Literal[True, 'auto']

handle#

cuTENSOR library handle. A handle will be created if one is not provided.

Type:

int | None

allocator#

An object that supports the BaseCUDAMemoryManager protocol, used to draw device memory. If an allocator is not provided, a memory allocator from the library package will be used (torch.cuda.caching_allocator_alloc() for PyTorch operands, cupy.cuda.alloc() otherwise).
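The sketch below shows the general shape of a custom allocator. The memalloc method name is assumed from the BaseCUDAMemoryManager protocol in nvmath.memory; verify against the installed version. Real implementations return device memory (for example via CuPy or PyTorch); this stand-in only records allocation requests and hands back a host buffer so the structure is visible without a GPU.

```python
# Hypothetical sketch of an allocator implementing the assumed
# memalloc(size) shape of the BaseCUDAMemoryManager protocol.
class RecordingAllocator:
    def __init__(self):
        self.requests = []  # sizes requested, in bytes

    def memalloc(self, size):
        self.requests.append(size)
        # A real allocator returns an object owning device memory; a host
        # bytearray is used here purely as a placeholder.
        return bytearray(size)

alloc = RecordingAllocator()
buf = alloc.memalloc(1024)
```

A real allocator would typically wrap a device memory pool so that repeated contractions reuse memory instead of allocating from the driver each time.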

Type:

nvmath.memory.BaseCUDAMemoryManager | None

memory_limit#

Maximum memory available to the contraction operation. It can be specified as a value (with optional suffix like K[iB], M[iB], G[iB]) or as a percentage. The default is 80% of the device memory.
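To make the accepted formats concrete, here is a hypothetical helper (not part of nvmath) that interprets memory_limit values as the description reads: plain integers as bytes, K/M/G suffixes as binary (1024-based) multiples, which the "iB" spelling suggests, and percentage strings resolved against a total device memory figure supplied by the caller.

```python
import re

# Hypothetical helper (not part of nvmath): resolve a memory_limit value
# to a byte count, given the total device memory in bytes.
_UNITS = {"k": 1024, "m": 1024**2, "g": 1024**3}

def resolve_memory_limit(limit, total_bytes):
    if limit is None:
        return total_bytes
    if isinstance(limit, int):
        return limit  # already a byte count
    s = limit.strip().lower()
    if s.endswith("%"):
        # Percentage of the total device memory.
        return int(total_bytes * float(s[:-1]) / 100)
    # A number with an optional K/M/G suffix and optional "iB"/"B" tail.
    m = re.fullmatch(r"([0-9.]+)\s*([kmg])?(i?b)?", s)
    if m is None:
        raise ValueError(f"unrecognized memory limit: {limit!r}")
    return int(float(m.group(1)) * _UNITS.get(m.group(2), 1))
```

For example, the default of '80%' on a device reporting 1000 bytes of memory would resolve to 800 bytes, and '2GiB' resolves to 2 * 1024**3 bytes.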

Type:

int | str | None