MatmulOptions#

class nvmath.distributed.linalg.advanced.MatmulOptions(
compute_type: int | None = None,
scale_type: int | None = None,
result_type: int | None = None,
algo_type: int | None = None,
sm_count_communication: int | None = None,
logger: Logger | None = None,
blocking: Literal[True, 'auto'] = 'auto',
)[source]#

A data class for providing options to the Matmul object and the wrapper function matmul().
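For illustration, a minimal sketch of constructing options and passing them to the wrapper function. The operands a and b stand for distributed operands that have already been prepared (their setup is not shown), and the options= keyword follows the convention of other nvmath-python wrapper functions:

>>> from nvmath.distributed.linalg.advanced import matmul, MatmulOptions
>>> # a and b are distributed operands prepared beforehand (not shown).
>>> options = MatmulOptions(blocking=True)  # block until the result is ready
>>> result = matmul(a, b, options=options)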

compute_type#

CUDA compute type. A suitable compute type will be selected if not specified.

Type:

nvmath.distributed.linalg.ComputeType

scale_type#

CUDA data type. A suitable data type consistent with the compute type will be selected if not specified.

Type:

nvmath.CudaDataType

result_type#

CUDA data type. The requested data type of the result. If not specified, it is determined based on the input types.

Type:

nvmath.CudaDataType
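As a sketch, the three type-related options above can be set from the corresponding enumerations. The member names used here (COMPUTE_32F, CUDA_R_32F) are assumptions modeled on the underlying CUDA type names; consult the enumeration references for the exact members:

>>> import nvmath
>>> from nvmath.distributed.linalg import ComputeType
>>> from nvmath.distributed.linalg.advanced import MatmulOptions
>>> options = MatmulOptions(
...     compute_type=ComputeType.COMPUTE_32F,       # assumed member name
...     scale_type=nvmath.CudaDataType.CUDA_R_32F,  # assumed member name
...     result_type=nvmath.CudaDataType.CUDA_R_32F,
... )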

algo_type#

A hint for the algorithm type to use. If the requested algorithm is not supported, cuBLASMp will fall back to the default algorithm (see the sketch after sm_count_communication below).

Type:

nvmath.distributed.linalg.advanced.MatmulAlgoType

sm_count_communication#

The number of SMs to use for communication. This is only relevant for some algorithms (please consult the cuBLASMp documentation).

Type:

int
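A sketch combining the two algorithm-related options, algo_type and sm_count_communication. The member name SPLIT_P2P is an assumption loosely modeled on cuBLASMp algorithm names, and whether the SM count applies depends on the algorithm chosen (per the cuBLASMp documentation):

>>> from nvmath.distributed.linalg.advanced import MatmulAlgoType, MatmulOptions
>>> options = MatmulOptions(
...     algo_type=MatmulAlgoType.SPLIT_P2P,  # assumed member name
...     sm_count_communication=8,            # reserve 8 SMs for communication
... )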

logger#

Python Logger object. The root logger will be used if a logger object is not provided.

Type:

logging.Logger
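A minimal sketch of routing the library's log messages to a custom logger built with the standard logging module (the logger name is an arbitrary choice):

>>> import logging
>>> from nvmath.distributed.linalg.advanced import MatmulOptions
>>> logger = logging.getLogger("matmul_example")
>>> logger.setLevel(logging.DEBUG)
>>> handler = logging.StreamHandler()
>>> handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
>>> logger.addHandler(handler)
>>> options = MatmulOptions(logger=logger)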

blocking#

A flag specifying the behavior of the execution functions and methods, such as matmul() and Matmul.execute(). When blocking is True, the execution methods do not return until the operation is complete. When blocking is "auto", the methods return immediately when the inputs are on the GPU. The execution methods always block when the operands are on the CPU to ensure that the user doesn’t inadvertently use the result before it becomes available. The default is "auto".

Type:

Literal[True, 'auto']
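To illustrate the behavior described above (a sketch; a and b stand for GPU operands prepared beforehand, and the synchronization step is elided since it depends on how streams are managed):

>>> from nvmath.distributed.linalg.advanced import matmul, MatmulOptions
>>> # With GPU operands and blocking="auto", the call returns as soon as the
>>> # work is launched; synchronize on the appropriate stream before reading
>>> # the result (not shown).
>>> result = matmul(a, b, options=MatmulOptions(blocking="auto"))
>>> # With blocking=True, the call itself waits for completion:
>>> result = matmul(a, b, options=MatmulOptions(blocking=True))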

See also

Matmul, matmul()