## Abstract

This guide provides background on matrix multiplications and their use in many deep learning operations; the trends described here form the basis of performance trends in fully-connected, convolutional, and recurrent layers, among others.

## 1. Background: Matrix-Matrix Multiplication

GEMMs (General Matrix Multiplications) are a fundamental building block for many operations in neural networks, for example fully-connected layers, recurrent layers such as RNNs, LSTMs or GRUs, and convolutional layers. In this guide, we describe the GEMM performance fundamentals needed to understand the performance of such layers.

GEMM is defined as the operation *C*=α*AB*+β*C*, with *A* and *B* as
matrix inputs, α and β as scalar inputs, and *C* as a pre-existing matrix which is
overwritten by the output. A plain matrix product *AB* is a GEMM with α equal to one and
β equal to zero. For example, in the forward pass of a fully-connected layer, the weight
matrix would be argument *A*, incoming activations would be argument *B*, and α and
β would typically be 1 and 0, respectively. β can be 1 in some cases, for example, if we’re
combining the addition of a skip-connection with a linear operation.
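
To make the definition concrete, here is a minimal NumPy sketch of the operation; the function and variable names are illustrative only and not part of any library API.

```python
import numpy as np

def gemm(A, B, C, alpha=1.0, beta=0.0):
    """C = alpha * A @ B + beta * C, overwriting the pre-existing matrix C."""
    C[:] = alpha * (A @ B) + beta * C
    return C

# Forward pass of a fully-connected layer: weights are argument A, incoming
# activations are argument B, alpha = 1 and beta = 0 (a plain matrix product).
weights = np.random.randn(512, 1024).astype(np.float16)
activations = np.random.randn(1024, 256).astype(np.float16)
outputs = np.zeros((512, 256), dtype=np.float16)
gemm(weights, activations, outputs)
```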

## 2. Math And Memory Bounds

Following the convention of various linear algebra libraries (such as BLAS), we will say that matrix A is an M x K matrix, meaning that it has M rows and K columns. Similarly, B and C will be assumed to be K x N and M x N matrices, respectively.

The product of A and B has M x N values, each of which is a dot-product of K-element vectors. Thus, a total of M * N * K fused multiply-adds (FMAs) are needed to compute the product. Each FMA is 2 operations, a multiply and an add, so a total of 2 * M * N * K flops are required. For simplicity, we are ignoring the α and β parameters for now; as long as K is sufficiently large, their contribution to arithmetic intensity is negligible.
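
Combining this operation count with the data that has to be moved gives the arithmetic intensity used in the rest of this section. The form below assumes FP16 (2-byte) elements and, as a simplification, that A, B, and C each travel between GPU memory and the chip exactly once:

$$ \text{Arithmetic Intensity} = \frac{\text{number of FLOPS}}{\text{number of byte accesses}} = \frac{2 \cdot (M \cdot N \cdot K)}{2 \cdot (M \cdot K + N \cdot K + M \cdot N)} = \frac{M \cdot N \cdot K}{M \cdot K + N \cdot K + M \cdot N} $$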

To estimate if a particular matrix multiply is math or memory limited, we compare its
arithmetic intensity to the ops:byte ratio of the GPU, as described in the __Understanding Performance__ section in the *GPU
Performance Background User Guide*. Assuming a Tesla V100 GPU and Tensor Core operations
on FP16 inputs with FP32 accumulation, the FLOPS:B ratio is 138.9 if data is loaded from the
GPU’s memory.

As an example, let’s consider an M x N x K = 8192 x 128 x 8192 GEMM. For this specific case, the arithmetic intensity is 124.1 FLOPS/B, lower than V100’s 138.9 FLOPS:B, so this operation is memory limited. If we increase the GEMM size to 8192 x 8192 x 8192, the arithmetic intensity increases to 2730, much higher than V100’s FLOPS:B ratio, and the operation is therefore math limited. In particular, it follows from this analysis that matrix-vector products (general matrix-vector products, or GEMVs), where either M=1 or N=1, are always memory limited; their arithmetic intensity is less than 1.
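
As a quick sanity check of these figures, the short sketch below recomputes the arithmetic intensity of both example GEMMs under the same assumptions (FP16 elements, each matrix moved to or from memory once):

```python
def arithmetic_intensity(M, N, K, bytes_per_element=2):
    """FLOPS per byte for a GEMM, assuming FP16 (2-byte) elements and that
    A (MxK), B (KxN), and C (MxN) each move between GPU memory and chip once."""
    flops = 2 * M * N * K
    bytes_moved = bytes_per_element * (M * K + N * K + M * N)
    return flops / bytes_moved

V100_OPS_PER_BYTE = 138.9  # FP16 Tensor Core ops:byte ratio quoted above

for M, N, K in [(8192, 128, 8192), (8192, 8192, 8192)]:
    ai = arithmetic_intensity(M, N, K)
    bound = "math limited" if ai > V100_OPS_PER_BYTE else "memory limited"
    print(f"{M}x{N}x{K}: {ai:.1f} FLOPS/B -> {bound}")

# 8192x128x8192: 124.1 FLOPS/B -> memory limited
# 8192x8192x8192: 2730.7 FLOPS/B -> math limited
```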

It is worth keeping in mind that the comparison of arithmetic intensity with the ops:byte ratio is a simplified rule of thumb, and does not consider many practical aspects of implementing this computation (such as non-algorithm instructions like pointer arithmetic, or the contribution of the GPU’s on-chip memory hierarchy).

### 2.1. GPU Implementation

GPUs implement GEMMs by partitioning the output matrix into tiles, which are then assigned to thread blocks.

Tile size, in this guide, usually refers to the dimensions of these tiles (*Mtile* x
*Ntile* in Figure 1). Each thread block computes its output tile
by stepping through the K dimension in tiles, loading the required values from the A and B
matrices, and multiplying and accumulating them into the output.
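
The following CPU-side NumPy sketch mimics that loop structure, purely to illustrate how the output is partitioned; the tile sizes are placeholders, and on a GPU each (i, j) output tile would be assigned to a thread block rather than computed sequentially.

```python
import numpy as np

def tiled_matmul(A, B, Mtile=128, Ntile=128, Ktile=32):
    """Schematic of the GPU tiling strategy: each Mtile x Ntile output tile is
    computed by stepping through the K dimension in chunks and accumulating."""
    M, K = A.shape
    _, N = B.shape
    C = np.zeros((M, N), dtype=np.float32)
    for i in range(0, M, Mtile):          # one thread block per output tile on a GPU
        for j in range(0, N, Ntile):
            acc = np.zeros((min(Mtile, M - i), min(Ntile, N - j)), dtype=np.float32)
            for k in range(0, K, Ktile):  # step through K, loading sub-tiles of A and B
                acc += A[i:i+Mtile, k:k+Ktile] @ B[k:k+Ktile, j:j+Ntile]
            C[i:i+Mtile, j:j+Ntile] = acc
    return C
```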

### 2.2. Tensor Core Requirements

As we discussed in the __GPU Architecture Fundamentals__ section in the
*GPU Performance Background User Guide*, the latest NVIDIA GPUs have introduced Tensor
Cores to maximize the speed of tensor multiplies. In order to use Tensor Cores, NVIDIA
libraries require that matrix dimensions M, N, and K are multiples of 8 (with FP16 data) or 16
(with INT8 data).

The requirement is in fact more relaxed (only the fastest-varying dimensions in memory are required to obey this rule), but it is easiest to think of all three dimensions the same way. When the dimensions are not multiples of 8 (or 16), libraries fall back to a slower implementation that does not use Tensor Cores. This effect can be seen in Figure 2: as the M dimension varies, the cases that are multiples of 8 run on Tensor Cores, resulting in a speedup of about 6x. For this reason, we recommend padding dimensions where necessary to enable Tensor Cores.
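
A common way to satisfy this requirement is to pad problem dimensions up to the next multiple of 8 (for FP16) or 16 (for INT8); the sketch below shows the rounding, with the layer sizes chosen purely for illustration.

```python
def pad_to_multiple(dim, multiple=8):
    """Round a dimension up to the next multiple (8 for FP16 data, 16 for INT8)."""
    return ((dim + multiple - 1) // multiple) * multiple

# Example: a fully-connected layer with 1000 outputs is padded to 1008 so the GEMM's
# M dimension is a multiple of 8; the extra outputs can be ignored or masked downstream.
print(pad_to_multiple(1000))  # 1008
print(pad_to_multiple(4096))  # 4096 (already a multiple of 8, unchanged)
```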

### 2.3. Typical Tile Dimensions In cuBLAS And Performance

The cuBLAS library contains NVIDIA’s optimized GPU GEMM implementations (refer to the __cuBLAS documentation__ for details).

While multiple tiling strategies are available, larger tiles have more data reuse, allowing them to use less bandwidth and be more efficient than smaller tiles. On the other hand, for a problem of a given size, using larger tiles generates fewer tiles to run in parallel, which can potentially lead to under-utilization of the GPU. When frameworks like TensorFlow or PyTorch call into cuBLAS with specific GEMM dimensions, a heuristic inside cuBLAS selects the tiling option expected to perform best. Alternatively, some frameworks provide a “benchmark” mode where, prior to training, they time all implementation choices and pick the fastest one (a one-time overhead per training session).

The larger the GEMM, the less important this tradeoff between tile efficiency and tile parallelism becomes: at some point, a GEMM has enough work to use the largest available tiles and still fill the GPU. Conversely, if a GEMM is too small, the reduction in either tile efficiency or tile parallelism will likely prevent the GPU from running at peak math utilization. Figure 3 and Figure 4 illustrate this general trend; larger GEMMs achieve higher throughput. Typical tile sizes used by cuBLAS include:

- 256x128 and 128x256 (most efficient)
- 128x128
- 256x64 and 64x256
- 128x64 and 64x128
- 64x64 (least efficient)

Figure 5 shows an example of the efficiency difference between a few of these tile sizes:

The chart shows the performance of an M x N x K = 1280 x 2040 x 4096 GEMM with different tile sizes. It demonstrates that the increased tile parallelism of smaller tiles (64x64 enables 8x more parallelism than 256x128) comes at a notable efficiency cost. In practice, cuBLAS avoids using small tiles for GEMMs that are large enough to have sufficient parallelism with larger tiles, and resorts to smaller tiles only for GEMMs that are substantially smaller than the one in this example. As a side note, NVIDIA libraries also have the ability to “tile” along the K dimension in case both M and N are small but K is large. Because K is the direction of the dot product, tiling in K requires a reduction at the end, which can limit achievable performance. For simplicity, most of this guide assumes no K tiling.
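
The parallelism figures in this example follow directly from counting output tiles; the sketch below reproduces the 8x ratio for the 1280x2040x4096 GEMM.

```python
import math

def num_tiles(M, N, tile_m, tile_n):
    """Number of thread block tiles needed to cover an M x N output matrix."""
    return math.ceil(M / tile_m) * math.ceil(N / tile_n)

M, N = 1280, 2040                    # output dimensions of the GEMM in Figure 5
large = num_tiles(M, N, 256, 128)    # 5 * 16  = 80 tiles
small = num_tiles(M, N, 64, 64)      # 20 * 32 = 640 tiles
print(small / large)                 # 8.0 -> 64x64 tiles give 8x more parallelism
```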

## 3. Dimension Quantization Effects

As described in the __GPU Execution Model__ section in the *GPU
Performance Background User Guide*, a GPU function is executed by launching a number of
thread blocks, each with the same number of threads. This introduces two potential effects on
execution efficiency: tile and wave quantization.

### 3.1. Tile Quantization

Tile quantization occurs when matrix dimensions are not divisible by the thread block tile size.

The number of thread block tiles is large enough to ensure that all output elements are covered; however, some tiles do very little useful work, as illustrated in Figure 6, which assumes 128x128 tiles and compares two choices of output matrix dimensions.

While libraries ensure that invalid memory accesses are not performed by any of the tiles, all tiles will perform the same amount of math. Thus, due to tile quantization, the case in Figure 6 (b) executes 1.5x as many arithmetic operations as Figure 6 (a) despite needing only 0.39% more operations algorithmically. As this shows, the highest utilization is achieved when output matrix dimensions are divisible by tile dimensions.

For another example of this effect, let’s consider a GEMM for various choices of N, with M = 20480, K = 4096, and a library function that uses 256x128 tiles. As N increases from 136 to 256 in increments of 8, the Tensor Core accelerated GEMM always runs the same number of tiles, meaning the N dimension is always divided into 2 tiles. While the number of tiles remains constant, the fraction of those tiles containing useful data, and hence the number of useful FLOPS performed, increases with N, as reflected by the GFLOPS in Figure 7 below. Notice that throughput drops by nearly half between N = 128 (where the single tile per row is filled with useful data) and N = 136 (where a second tile is added per row but contains only 8/128 = 6.25% useful data). Also, note how the duration is constant whenever the number of tiles is constant.
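
The tile counts behind Figure 7 can be reproduced directly; the sketch below computes, for each N in the sweep, how many tiles are launched along N and what fraction of the tiled width actually holds useful data.

```python
import math

TILE_N = 128  # N dimension of the 256x128 tiles used by the library function

for N in range(128, 264, 8):
    n_tiles = math.ceil(N / TILE_N)   # 1 tile at N = 128, 2 tiles for N = 136..256
    useful = N / (n_tiles * TILE_N)   # fraction of the tiled width containing real data
    print(f"N={N:3d}: {n_tiles} tile(s) along N, {useful:6.1%} useful work")

# N=128: 1 tile(s) along N, 100.0% useful work
# N=136: 2 tile(s) along N,  53.1% useful work   <- throughput drops by nearly half
# ...
# N=256: 2 tile(s) along N, 100.0% useful work
```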

### 3.2. Wave Quantization

While tile quantization means the problem size is quantized to the size of each tile, there is a second quantization effect where the total number of tiles is quantized to the number of multiprocessors on the GPU: Wave quantization.

Let’s consider an example related to the previous one, again varying N, with K = 4096 but a smaller M = 1280. A Volta V100 GPU has 80 SMs; in the particular case of 256x128 thread block tiles, it can execute one thread block per SM, leading to a wave size of 80 tiles that can execute simultaneously. Thus, GPU utilization is highest when the number of tiles is an integer multiple of 80, or just below such a multiple.

The M dimension will always be divided into 1280/256 = 5 tiles per column. When N = 2048, the N dimension is divided into 2048/128 = 16 tiles per row, and a total of 5*16 = 80 tiles are created, comprising one full wave. When 2048 < N <= 2176, an additional tile per row is created for a total of 5*17 = 85 tiles, leading to one full wave and a ‘tail’ wave of only 5 tiles. The tail wave takes nearly the same time to execute as the full 80-tile wave in this example but uses only 5/80 = 6.25% of V100’s SMs during that time. Consequently, GFLOPS roughly halve and duration roughly doubles from N = 2048 to N = 2056 (Figure 8). Similar jumps can be seen after N = 4096, N = 6144, and N = 8192, which also each map to an integer number of full waves.
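
The same kind of counting explains the jumps in Figure 8; the sketch below computes tiles and waves for this example, assuming one 256x128 tile resident per SM on the 80-SM V100.

```python
import math

M, TILE_M, TILE_N, SMS = 1280, 256, 128, 80  # V100 has 80 SMs, one tile per SM

for N in (2048, 2056, 2176, 2184):
    tiles = math.ceil(M / TILE_M) * math.ceil(N / TILE_N)
    full_waves, tail = divmod(tiles, SMS)
    print(f"N={N}: {tiles} tiles -> {full_waves} full wave(s) + {tail}-tile tail")

# N=2048: 80 tiles -> 1 full wave(s) + 0-tile tail   (highest utilization)
# N=2056: 85 tiles -> 1 full wave(s) + 5-tile tail   (duration roughly doubles)
# N=2176: 85 tiles -> 1 full wave(s) + 5-tile tail
# N=2184: 90 tiles -> 1 full wave(s) + 10-tile tail
```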

It is worth noting that the throughput and duration graphs for wave quantization look very similar to those for tile quantization, except with a different scale on the horizontal axis. Because both phenomena are quantization effects, this is expected. The difference lies in where the quantization occurs: tile quantization means work is quantized to the size of the tile, whereas wave quantization means work is quantized to the size of the GPU. Figure 7 (c) and Figure 8 (c) illustrate this difference for tile and wave quantization, respectively.
