Matmul

- class nvmath.linalg.advanced.Matmul(a, b, /, c=None, *, alpha=None, beta=None, qualifiers=None, quantization_scales=None, options=None, stream: AnyStream | int | None = None)
Create a stateful object encapsulating the specified matrix multiplication computation \(\alpha a @ b + \beta c\) and the required resources to perform the operation. A stateful object can be used to amortize the cost of preparation (planning in the case of matrix multiplication) across multiple executions (also see the Stateful APIs section).
The function-form API matmul() is a convenient alternative to using stateful objects for single use (the user needs to perform just one matrix multiplication, for example), in which case there is no possibility of amortizing preparatory costs. The function-form APIs are just convenience wrappers around the stateful object APIs.

Using the stateful object typically involves the following steps:
Problem Specification: Initialize the object with a defined operation and options.
Preparation: Use plan() to determine the best algorithmic implementation for this specific matrix multiplication operation.

Execution: Perform the matrix multiplication computation with execute().

Resource Management: Ensure all resources are released either by explicitly calling free() or by managing the stateful object within a context manager.
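The following is a minimal sketch of these four steps using NumPy operands (fuller examples follow below):

>>> import numpy as np
>>> import nvmath

>>> a = np.random.rand(1024, 1024)
>>> b = np.random.rand(1024, 1024)

>>> # Specification (1) via the constructor; the context manager handles
>>> # resource management (4).
>>> with nvmath.linalg.advanced.Matmul(a, b) as mm:
...     algorithms = mm.plan()  # Preparation (2).
...     r = mm.execute()  # Execution (3).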
Detailed information on what’s happening in the various phases described above can be obtained by passing in a logging.Logger object to MatmulOptions or by setting the appropriate options in the root logger object, which is used by default:

>>> import logging
>>> logging.basicConfig(
...     level=logging.INFO,
...     format="%(asctime)s %(levelname)-8s %(message)s",
...     datefmt="%m-%d %H:%M:%S",
... )
A user can select the desired logging level and, in general, take advantage of all of the functionality offered by the Python logging module.
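For instance, a dedicated logger can be routed to a single operation instead of configuring the root logger (a minimal sketch; it assumes the MatmulOptions logger field mentioned above):

>>> logger = logging.getLogger("nvmath.matmul.demo")
>>> logger.setLevel(logging.DEBUG)
>>> mm = nvmath.linalg.advanced.Matmul(a, b, options={"logger": logger})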
- Parameters:
a – A tensor representing the first operand to the matrix multiplication (see Semantics). The currently supported types are numpy.ndarray, cupy.ndarray, and torch.Tensor.

b – A tensor representing the second operand to the matrix multiplication (see Semantics). The currently supported types are numpy.ndarray, cupy.ndarray, and torch.Tensor.

c – (Optional) A tensor representing the operand to add to the matrix multiplication result (see Semantics). The currently supported types are numpy.ndarray, cupy.ndarray, and torch.Tensor.

Changed in version 0.3.0: In order to avoid broadcasting behavior ambiguity, nvmath-python no longer accepts a 1-D (vector) c. Use a singleton dimension to convert your input array to 2-D.

alpha – The scale factor for the matrix multiplication term as a real or complex number. The default is \(1.0\).

beta – The scale factor for the matrix addition term as a real or complex number. A value for beta must be provided if operand c is specified.

qualifiers – If desired, specify the matrix qualifiers as a numpy.ndarray of matrix_qualifiers_dtype objects of length 3 corresponding to the operands a, b, and c.

options – Specify options for the matrix multiplication as a MatmulOptions object. Alternatively, a dict containing the parameters for the MatmulOptions constructor can also be provided. If not specified, the value will be set to the default-constructed MatmulOptions object.

stream – Provide the CUDA stream to use for executing the operation. Acceptable inputs include cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream from the operand package will be used.

quantization_scales – Specify scale factors for the matrix multiplication as a MatmulQuantizationScales object. Alternatively, a dict containing the parameters for the MatmulQuantizationScales constructor can also be provided. Allowed and required only for narrow-precision (FP8 and lower) operations.
- Semantics:
The semantics of the matrix multiplication follow numpy.matmul() semantics, with some restrictions on broadcasting. In addition, the semantics for the fused matrix addition are described below:

If arguments a and b are matrices, they are multiplied according to the rules of matrix multiplication.

If argument a is 1-D, it is promoted to a matrix by prefixing 1 to its dimensions. After matrix multiplication, the prefixed 1 is removed from the result’s dimensions.

If argument b is 1-D, it is promoted to a matrix by appending 1 to its dimensions. After matrix multiplication, the appended 1 is removed from the result’s dimensions.

If a or b is N-D (N > 2), then the operand is treated as a batch of matrices. If both a and b are N-D, their batch dimensions must match. If exactly one of a or b is N-D, the other operand is broadcast.

The operand for the matrix addition c may be a matrix of shape (M, 1) or (M, N), or the batched versions (…, M, 1) or (…, M, N). Here M and N are the dimensions of the result of the matrix multiplication. If c has shape (M, 1), its single column is broadcast across the N columns for the addition; the rows of c are never broadcast. If batch dimensions are not present, c is broadcast across batches as needed.

Similarly, when operating on a batch, auxiliary outputs are 3-D for all epilogs. Therefore, epilogs that return 1-D vectors of length N in non-batched mode return 3-D arrays of shape (batch, N, 1) in batched mode.
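A minimal sketch illustrating the batching and broadcasting rules above, using the function-form matmul() for brevity:

>>> import numpy as np
>>> import nvmath

>>> batch, M, N, K = 4, 8, 16, 32
>>> a = np.random.rand(batch, M, K)  # A batch of matrices.
>>> b = np.random.rand(K, N)  # 2-D, so it is broadcast across the batch.
>>> c = np.random.rand(M, 1)  # One column, broadcast for the addition.
>>> r = nvmath.linalg.advanced.matmul(a, b, c, beta=1.0)
>>> r.shape
(4, 8, 16)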
- Narrow-precision support:
Matrix multiplication with narrow-precision operands is supported in both FP8 and MXFP8 formats.
Note
Narrow-precision matrix multiplication in nvmath-python requires CUDA Toolkit 12.8 or newer. FP8 requires a device with compute capability 8.9 or higher (Ada, Hopper, Blackwell or newer architecture). MXFP8 requires a device with compute capability 10.0 or higher (Blackwell or newer architecture). Please refer to the compute capability table to check the compute capability of your device.
For FP8 operations:
For each operand, a scaling factor needs to be specified via the quantization_scales argument.

The maximum absolute value of the result (amax) can be requested via the result_amax option in the options argument.

A custom result type (both FP8 and non-FP8) can be requested via the result_type option in the options argument.
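A minimal FP8 sketch with PyTorch operands (it assumes a device and toolkit meeting the requirements above; b is created transposed since cuBLAS places layout requirements on FP8 operands, and the scale keys are assumed to follow the MatmulQuantizationScales fields, with "d" scaling the result):

>>> import torch
>>> import nvmath

>>> M, N, K = 256, 256, 256
>>> a = torch.rand(M, K, device="cuda").to(torch.float8_e4m3fn)
>>> b = torch.rand(N, K, device="cuda").to(torch.float8_e4m3fn).T
>>> r = nvmath.linalg.advanced.matmul(
...     a, b, quantization_scales={"a": 1.0, "b": 1.0, "d": 1.0}
... )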
For MXFP8 operations:
To enable MXFP8 operations, the block_scaling option must be set to True.

Block scaling factors need to be specified via the quantization_scales argument.

Utilities in nvmath.linalg.advanced.helpers.matmul can be used to create and modify block scaling factors.

When MXFP8 is used and the result type is a narrow-precision data type, the auxiliary output "d_out_scale" will be returned in the auxiliary output tensor. It will contain the scales that were used for the result quantization.
Please refer to the examples and narrow-precision operations tutorial for more details. For more details on the FP8 and MXFP8 formats in cuBLAS, see the cublasLtMatmul documentation.
Examples
>>> import numpy as np
>>> import nvmath
Create two 2-D float64 ndarrays on the CPU:
>>> M, N, K = 1024, 1024, 1024
>>> a = np.random.rand(M, K)
>>> b = np.random.rand(K, N)
We will define a matrix multiplication operation followed by a RELU epilog function using the specialized matrix multiplication interface.
Create a Matmul object encapsulating the problem specification above:
>>> mm = nvmath.linalg.advanced.Matmul(a, b)
Options can be provided above to control the behavior of the operation using the options argument (see MatmulOptions).

Next, plan the operation. The epilog is specified, and optionally, preferences can be specified for planning:

>>> epilog = nvmath.linalg.advanced.MatmulEpilog.RELU
>>> algorithms = mm.plan(epilog=epilog)
Certain epilog choices (like nvmath.linalg.advanced.MatmulEpilog.BIAS) require additional input provided using the epilog_inputs argument to plan().

Now execute the matrix multiplication, and obtain the result r1 as a NumPy ndarray.

>>> r1 = mm.execute()
Finally, free the object’s resources. To avoid having to make this call explicitly, it’s recommended to use the Matmul object as a context manager as shown below, if possible.
>>> mm.free()
Note that all Matmul methods execute on the current stream by default. Alternatively, the stream argument can be used to run a method on a specified stream.

Let’s now look at the same problem with CuPy ndarrays on the GPU.
Create two 2-D float64 CuPy ndarrays on the GPU:
>>> import cupy as cp
>>> a = cp.random.rand(M, K)
>>> b = cp.random.rand(K, N)
Create a Matmul object encapsulating the problem specification described earlier and use it as a context manager.
>>> with nvmath.linalg.advanced.Matmul(a, b) as mm:
...     algorithms = mm.plan(epilog=epilog)
...
...     # Execute the operation to get the first result.
...     r1 = mm.execute()
...
...     # Update operands A and B in-place (see reset_operands() for an
...     # alternative).
...     a[:] = cp.random.rand(M, K)
...     b[:] = cp.random.rand(K, N)
...
...     # Execute the operation to get the new result.
...     r2 = mm.execute()
All the resources used by the object are released at the end of the block.
Further examples can be found in the nvmath/examples/linalg/advanced/matmul directory.
Methods
- __init__(a, b, /, c=None, *, alpha=None, beta=None, qualifiers=None, quantization_scales=None, options=None, stream: AnyStream | int | None = None)
- applicable_algorithm_ids(limit=8)
Obtain the algorithm IDs that are applicable to this matrix multiplication.
- Parameters:
limit – The maximum number of applicable algorithm IDs desired.
- Returns:
A sequence of algorithm IDs that are applicable to this matrix multiplication problem specification, in random order.
- autotune()
Autotune the matrix multiplication to order the algorithms from the fastest measured execution time to the slowest. Once autotuned, the optimally-ordered algorithm sequence can be accessed using algorithms.

Note
This function will benchmark each of the algorithms and order the algorithms based on the benchmark results. The measurements can be impacted by factors such as GPU temperature, clock settings, or power consumption. Autotuning in an unstable environment can result in a suboptimal algorithm ordering. If you experience performance problems, consider omitting the autotuning.
- Parameters:
iterations – The number of autotuning iterations to perform.
prune – An integer N, specifying the top N fastest algorithms to retain after autotuning. The default is to retain all algorithms.
release_workspace – A value of True specifies that the stateful object should release workspace memory back to the package memory pool on function return, while a value of False specifies that the object should retain the memory. This option may be set to True if the application performs other operations that consume a lot of memory between successive calls to the (same or different) execute() API, but incurs a small overhead due to obtaining and releasing workspace memory from and to the package memory pool on every call. The default is False.

stream – Provide the CUDA stream to use for executing the operation. Acceptable inputs include cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream from the operand package will be used.
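A minimal sketch of a plan-autotune-execute sequence (the parameter values are illustrative):

>>> with nvmath.linalg.advanced.Matmul(a, b) as mm:
...     mm.plan()
...
...     # Benchmark the planned algorithms and keep the four fastest.
...     mm.autotune(iterations=5, prune=4)
...
...     # execute() now uses the fastest measured algorithm by default.
...     r = mm.execute()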
- execute()
Execute a prepared (planned and possibly autotuned) matrix multiplication.
- Parameters:
algorithm – (Experimental) An algorithm chosen from the sequence returned by plan() or algorithms. By default, the first algorithm in the sequence is used.

release_workspace – A value of True specifies that the stateful object should release workspace memory back to the package memory pool on function return, while a value of False specifies that the object should retain the memory. This option may be set to True if the application performs other operations that consume a lot of memory between successive calls to the (same or different) execute() API, but incurs a small overhead due to obtaining and releasing workspace memory from and to the package memory pool on every call. The default is False.

stream – Provide the CUDA stream to use for executing the operation. Acceptable inputs include cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream from the operand package will be used.
- Returns:
The result of the specified matrix multiplication (epilog applied), which remains on the same device and belongs to the same package as the input operands. If an epilog (like nvmath.linalg.advanced.MatmulEpilog.RELU_AUX) that results in extra output is used, or an extra output is requested (for example, by setting the result_amax option in the options argument), a tuple is returned with the first element being the matrix multiplication result (epilog applied) and the second element being the auxiliary output provided as a dict.
- free()
Free Matmul resources.
It is recommended that the Matmul object be used within a context, but if it is not possible then this method must be called explicitly to ensure that the matrix multiplication resources (especially internal library objects) are properly cleaned up.
- plan(*, preferences=None, algorithms=None, epilog=None, epilog_inputs=None, stream: AnyStream | int | None = None)
Plan the matrix multiplication operation, considering the epilog (if provided).
- Parameters:
preferences – This parameter specifies the preferences for planning as a MatmulPlanPreferences object. Alternatively, a dictionary containing the parameters for the MatmulPlanPreferences constructor can also be provided. If not specified, the value will be set to the default-constructed MatmulPlanPreferences object.

algorithms – A sequence of Algorithm objects that can be directly provided to bypass planning. The algorithm objects must be compatible with the matrix multiplication. A typical use for this option is to provide algorithms serialized (pickled) from a previously planned and autotuned matrix multiplication.

epilog – Specify an epilog \(F\) as an object of type MatmulEpilog to apply to the result of the matrix multiplication: \(F(\alpha A @ B + \beta C)\). The default is no epilog. See cuBLASLt documentation for the list of available epilogs.

epilog_inputs – Specify the additional inputs needed for the selected epilog as a dictionary, where the key is the epilog input name and the value is the epilog input. The epilog input must be a tensor with the same package and in the same memory space as the operands (see the constructor for more information on the operands). If the required epilog inputs are not provided, an exception is raised that lists the required epilog inputs. Some epilog inputs are generated by other epilogs. For example, the epilog input for MatmulEpilog.DRELU is generated by matrix multiplication with the same operands using MatmulEpilog.RELU_AUX.

stream – Provide the CUDA stream to use for executing the operation. Acceptable inputs include cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream from the operand package will be used.
- Returns:
A sequence of nvmath.linalg.advanced.Algorithm objects that are applicable to this matrix multiplication problem specification, heuristically ordered from fastest to slowest.
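To illustrate the algorithms parameter described above, the following sketch saves the planned (and possibly autotuned) algorithms from an existing Matmul object mm and reuses them later to bypass planning:

>>> import pickle

>>> # Serialize the algorithms after planning (and optionally autotuning).
>>> serialized = pickle.dumps(mm.algorithms)

>>> # Later, supply the saved algorithms directly to skip the planning
>>> # heuristics.
>>> with nvmath.linalg.advanced.Matmul(a, b) as mm2:
...     mm2.plan(algorithms=pickle.loads(serialized))
...     r = mm2.execute()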
Notes
Epilogs that have BIAS in their name need an epilog input with the key 'bias'. Epilogs that have DRELU need an epilog input with the key 'relu_aux', which is produced in a “forward pass” epilog like RELU_AUX or RELU_AUX_BIAS. Similarly, epilogs with DGELU in their name require an epilog input with the key 'gelu_aux', produced in the corresponding forward pass operation.

Examples
>>> import numpy as np
>>> import nvmath
Create two 3-D float64 ndarrays on the CPU representing batched matrices, along with a bias vector:
>>> batch = 32
>>> M, N, K = 1024, 1024, 1024
>>> a = np.random.rand(batch, M, K)
>>> b = np.random.rand(batch, K, N)
>>> # The bias vector will be broadcast along the columns, as well as along the
>>> # batch dimension.
>>> bias = np.random.rand(M)
We will define a matrix multiplication operation followed by a nvmath.linalg.advanced.MatmulEpilog.RELU_BIAS epilog function.

>>> with nvmath.linalg.advanced.Matmul(a, b) as mm:
...     # Plan the operation with RELU_BIAS epilog and corresponding epilog
...     # input.
...     p = nvmath.linalg.advanced.MatmulPlanPreferences(limit=8)
...     epilog = nvmath.linalg.advanced.MatmulEpilog.RELU_BIAS
...     epilog_inputs = {"bias": bias}
...     # The preferences can also be provided as a dict: {'limit': 8}
...     algorithms = mm.plan(
...         preferences=p,
...         epilog=epilog,
...         epilog_inputs=epilog_inputs,
...     )
...
...     # Execute the matrix multiplication, and obtain the result `r` as a
...     # NumPy ndarray.
...     r = mm.execute()
Some epilogs like nvmath.linalg.advanced.MatmulEpilog.RELU_AUX produce auxiliary output.

>>> with nvmath.linalg.advanced.Matmul(a, b) as mm:
...     # Plan the operation with RELU_AUX epilog.
...     epilog = nvmath.linalg.advanced.MatmulEpilog.RELU_AUX
...     algorithms = mm.plan(epilog=epilog)
...
...     # Execute the matrix multiplication, and obtain the result `r` along
...     # with the auxiliary output.
...     r, auxiliary = mm.execute()
The auxiliary output is a Python dict with the names of each auxiliary output as keys.

Further examples can be found in the nvmath/examples/linalg/advanced/matmul directory.
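As described in the notes above, an auxiliary output can feed the corresponding backward-pass epilog. A minimal sketch, where the gradient operands grad and w are hypothetical placeholders with compatible shapes:

>>> # Forward pass: RELU_AUX produces the mask required by DRELU.
>>> with nvmath.linalg.advanced.Matmul(a, b) as mm:
...     algorithms = mm.plan(epilog=nvmath.linalg.advanced.MatmulEpilog.RELU_AUX)
...     r, aux = mm.execute()

>>> # Backward pass: provide the saved mask as the 'relu_aux' epilog input.
>>> with nvmath.linalg.advanced.Matmul(grad, w) as mm:  # hypothetical operands
...     algorithms = mm.plan(
...         epilog=nvmath.linalg.advanced.MatmulEpilog.DRELU,
...         epilog_inputs={"relu_aux": aux["relu_aux"]},
...     )
...     dgrad = mm.execute()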
- reset_operands(a=None, b=None, c=None, *, alpha=None, beta=None, quantization_scales=None, epilog_inputs=None, stream: AnyStream | int | None = None)
Reset the operands held by this Matmul instance.

- This method has two use cases:

It can be used to provide new operands for execution when the original operands are on the CPU.

It can be used to release the internal reference to the previous operands and make their memory available for other use by passing None for all arguments. In this case, this method must be called again to provide the desired operands before another call to execution APIs like autotune() or execute().
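A minimal sketch of the release pattern (calling reset_operands() with no arguments passes None for everything; a_new and b_new are placeholders for the new operands):

>>> with nvmath.linalg.advanced.Matmul(a, b) as mm:
...     mm.plan()
...     r1 = mm.execute()
...
...     # Release the internal operand references so the memory can be reused.
...     mm.reset_operands()
...
...     # New operands must be provided before executing again.
...     mm.reset_operands(a_new, b_new)
...     r2 = mm.execute()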
This method is not needed when the operands reside on the GPU and in-place operations are used to update the operand values.
This method will perform various checks on the new operands to make sure:
The shapes, strides, and datatypes match those of the old ones.

The packages that the operands belong to match those of the old ones.

If the input tensors are on the GPU, the device must match.
- Parameters:
a – A tensor representing the first operand to the matrix multiplication (see Semantics). The currently supported types are numpy.ndarray, cupy.ndarray, and torch.Tensor.

b – A tensor representing the second operand to the matrix multiplication (see Semantics). The currently supported types are numpy.ndarray, cupy.ndarray, and torch.Tensor.

c – (Optional) A tensor representing the operand to add to the matrix multiplication result (see Semantics). The currently supported types are numpy.ndarray, cupy.ndarray, and torch.Tensor.

Changed in version 0.3.0: In order to avoid broadcasting behavior ambiguity, nvmath-python no longer accepts a 1-D (vector) c. Use a singleton dimension to convert your input array to 2-D.

alpha – The scale factor for the matrix multiplication term as a real or complex number. The default is \(1.0\).

beta – The scale factor for the matrix addition term as a real or complex number. A value for beta must be provided if operand c is specified.

epilog_inputs – Specify the additional inputs needed for the selected epilog as a dictionary, where the key is the epilog input name and the value is the epilog input. The epilog input must be a tensor with the same package and in the same memory space as the operands (see the constructor for more information on the operands). If the required epilog inputs are not provided, an exception is raised that lists the required epilog inputs. Some epilog inputs are generated by other epilogs. For example, the epilog input for MatmulEpilog.DRELU is generated by matrix multiplication with the same operands using MatmulEpilog.RELU_AUX.

stream – Provide the CUDA stream to use for executing the operation. Acceptable inputs include cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream from the operand package will be used.

quantization_scales – Specify scale factors for the matrix multiplication as a MatmulQuantizationScales object. Alternatively, a dict containing the parameters for the MatmulQuantizationScales constructor can also be provided. Allowed and required only for narrow-precision (FP8 and lower) operations.
Examples
>>> import cupy as cp
>>> import nvmath
Create two 2-D float64 ndarrays on the GPU:
>>> M, N, K = 128, 128, 256
>>> a = cp.random.rand(M, K)
>>> b = cp.random.rand(K, N)
Create a matrix multiplication object as a context manager:
>>> with nvmath.linalg.advanced.Matmul(a, b) as mm:
...     # Plan the operation.
...     algorithms = mm.plan()
...
...     # Execute the MM to get the first result.
...     r1 = mm.execute()
...
...     # Reset the operands to new CuPy ndarrays.
...     c = cp.random.rand(M, K)
...     d = cp.random.rand(K, N)
...     mm.reset_operands(c, d)
...
...     # Execute to get the new result corresponding to the updated operands.
...     r2 = mm.execute()
Note that if only a subset of operands are reset, the operands that are not reset hold their original values.
With reset_operands(), minimal overhead is achieved as problem specification and planning are only performed once.

For the particular example above, explicitly calling reset_operands() is equivalent to updating the operands in-place, i.e., replacing mm.reset_operands(c, d) with a[:] = c and b[:] = d. Note that updating the operands in-place should be adopted with caution as it can only yield the expected result under the additional constraint below:

The operand is on the GPU (more precisely, the operand memory space should be accessible from the execution space).
For more details, please refer to the in-place update example.
Attributes
- algorithms
After planning using plan(), get the sequence of algorithm objects to inquire their capabilities, configure them, or serialize them for later use.

- Returns:
A sequence of nvmath.linalg.advanced.Algorithm objects that are applicable to this matrix multiplication problem specification.