MatmulNumericalImplFlags

enum nvmath.linalg.advanced.MatmulNumericalImplFlags(value)

These flags can be combined with the | operator, for example OP_TYPE_FMA | OP_TYPE_TENSOR_HMMA … (see the usage sketch after the value list below).

Member Type: int

Valid values are as follows:

OP_TYPE_FMA = 1
OP_TYPE_TENSOR_HMMA = 2
OP_TYPE_TENSOR_IMMA = 4
OP_TYPE_TENSOR_DMMA = 8
OP_TYPE_TENSOR_MASK = 254
OP_TYPE_MASK = 255
ACCUMULATOR_16F = 256
ACCUMULATOR_32F = 512
ACCUMULATOR_64F = 1024
ACCUMULATOR_32I = 2048
ACCUMULATOR_TYPE_MASK = 65280
INPUT_TYPE_16F = 65536
INPUT_TYPE_16BF = 131072
INPUT_TYPE_TF32 = 262144
INPUT_TYPE_32F = 524288
INPUT_TYPE_64F = 1048576
INPUT_TYPE_8I = 2097152
INPUT_TYPE_8F_E4M3 = 4194304
INPUT_TYPE_8F_E5M2 = 8388608
INPUT_TYPE_MASK = 16711680
GAUSSIAN = 4294967296
ALL = 18446744073709551615
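The sketch below illustrates one way such a combined mask might be used to restrict the kernel implementations considered during matrix multiplication planning. It assumes CuPy operands and that the mask is passed through a plan-preferences object via a numerical_impl_mask field, as in the nvmath-python advanced matmul API; the operand shapes, dtypes, and the particular flag combination are illustrative choices, not requirements.

import cupy as cp
from nvmath.linalg.advanced import (
    Matmul,
    MatmulNumericalImplFlags,
    MatmulPlanPreferences,
)

# Illustrative FP16 operands on the GPU.
a = cp.random.rand(128, 64).astype(cp.float16)
b = cp.random.rand(64, 32).astype(cp.float16)

# Combine flags with | to build a mask of allowed numerical implementations:
# here, tensor-core HMMA kernels with an FP32 accumulator (an illustrative choice).
mask = (
    MatmulNumericalImplFlags.OP_TYPE_TENSOR_HMMA
    | MatmulNumericalImplFlags.ACCUMULATOR_32F
)

# Assumption: the mask is supplied to planning through
# MatmulPlanPreferences.numerical_impl_mask.
preferences = MatmulPlanPreferences(numerical_impl_mask=mask)

with Matmul(a, b) as mm:
    mm.plan(preferences=preferences)  # restrict candidate kernels to the mask
    result = mm.execute()

Note that the *_MASK members (OP_TYPE_MASK, ACCUMULATOR_TYPE_MASK, INPUT_TYPE_MASK) cover entire flag groups, so they can be used to allow every implementation choice within a group rather than enumerating individual flags.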