MatmulOptions#
class nvmath.linalg.MatmulOptions(*, allocator: ~nvmath.memory.BaseCUDAMemoryManager | ~nvmath.memory.BaseCUDAMemoryManagerAsync | None = None, blocking: ~typing.Literal[True, 'auto'] = 'auto', logger: ~logging.Logger = <factory>, inplace: bool = False)#
A dataclass for providing options to a Matmul object.

allocator#
An object that supports the BaseCUDAMemoryManager protocol, used to draw device memory. If an allocator is not provided, a memory allocator from the library package will be used (torch.cuda.caching_allocator_alloc() for PyTorch operands, cupy.cuda.alloc() otherwise).

Type: BaseCUDAMemoryManager | BaseCUDAMemoryManagerAsync | None
blocking#
A flag specifying the behavior of the stream-ordered functions and methods. When blocking is True, the stream-ordered methods do not return until the operation is complete. When blocking is "auto", the methods return immediately when the inputs are on the GPU. The stream-ordered methods always block when the operands are on the CPU, to ensure that the user doesn't inadvertently use the result before it becomes available. The default is "auto".

Type: Literal[True, 'auto']
inplace#
Whether the matrix multiplication is performed in-place (operand C is overwritten). The default is False.

Type: bool
logger#
Python Logger object. The root logger will be used if a logger object is not provided.

Type: logging.Logger
See also
Stateful API