cuSPARSELt Data Types
Opaque Data Structures
cusparseLtHandle_t
The structure holds the cuSPARSELt library context (device properties, system information, etc.). The handle must be initialized and destroyed with the cusparseLtInit() and cusparseLtDestroy() functions, respectively.
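As a quick illustration, a minimal sketch of the handle lifecycle (the error handling shown is illustrative, not exhaustive):

```cpp
// Minimal sketch of the handle lifecycle. Error handling is reduced to a
// single status check; a real application should check every cusparseStatus_t.
#include <cusparseLt.h>
#include <cstdio>

int main() {
    cusparseLtHandle_t handle;
    cusparseStatus_t status = cusparseLtInit(&handle);
    if (status != CUSPARSE_STATUS_SUCCESS) {
        std::printf("cusparseLtInit failed with status %d\n", (int)status);
        return 1;
    }
    // ... create descriptors, plans, and run cusparseLtMatmul() here ...
    cusparseLtDestroy(&handle);
    return 0;
}
```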
cusparseLtMatDescriptor_t
The structure captures the shape and characteristics of a matrix. It is initialized with the cusparseLtDenseDescriptorInit() or cusparseLtStructuredDescriptorInit() function and destroyed with cusparseLtMatDescriptorDestroy().
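A sketch of creating one structured (sparse) and one dense descriptor, continuing the snippet above; the dimensions, leading dimensions, alignment, data type, and ordering are illustrative choices, not requirements:

```cpp
// Describe a 50%-sparse A (m x k) and a dense B (k x n), both row-major
// __half with 16-byte alignment. The handle comes from the previous snippet.
cusparseLtMatDescriptor_t matA, matB;
int64_t  m = 1024, n = 1024, k = 1024;
uint32_t alignment = 16;

cusparseLtStructuredDescriptorInit(&handle, &matA, m, k, /*ld=*/k, alignment,
                                   CUDA_R_16F, CUSPARSE_ORDER_ROW,
                                   CUSPARSELT_SPARSITY_50_PERCENT);
cusparseLtDenseDescriptorInit(&handle, &matB, k, n, /*ld=*/n, alignment,
                              CUDA_R_16F, CUSPARSE_ORDER_ROW);

// ... use the descriptors, then release them ...
cusparseLtMatDescriptorDestroy(&matA);
cusparseLtMatDescriptorDestroy(&matB);
```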
cusparseLtMatmulDescriptor_t
The structure holds the description of the matrix multiplication operation. It is initialized with the cusparseLtMatmulDescriptorInit() function.
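Continuing the example, a hedged sketch of initializing the matmul descriptor; matA and matB are the descriptors from the previous snippet, and the compute type shown is only one valid choice:

```cpp
// matC describes the dense output/accumulator matrix and is reused for D.
cusparseLtMatDescriptor_t matC;
cusparseLtDenseDescriptorInit(&handle, &matC, m, n, /*ld=*/n, alignment,
                              CUDA_R_16F, CUSPARSE_ORDER_ROW);

cusparseLtMatmulDescriptor_t matmul;
cusparseLtMatmulDescriptorInit(&handle, &matmul,
                               CUSPARSE_OPERATION_NON_TRANSPOSE,  // op(A)
                               CUSPARSE_OPERATION_NON_TRANSPOSE,  // op(B)
                               &matA, &matB, &matC, &matC,
                               CUSPARSE_COMPUTE_32F);             // illustrative compute type
```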
cusparseLtMatmulAlgSelection_t
The structure holds the description of the matrix multiplication algorithm. It is initialized with the cusparseLtMatmulAlgSelectionInit() function.
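Continuing the example, selecting the default algorithm for that matmul descriptor:

```cpp
// Pick the default algorithm; attributes can be tuned later through
// cusparseLtMatmulAlgSetAttribute() or cusparseLtMatmulSearch().
cusparseLtMatmulAlgSelection_t algSel;
cusparseLtMatmulAlgSelectionInit(&handle, &algSel, &matmul,
                                 CUSPARSELT_MATMUL_ALG_DEFAULT);
```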
cusparseLtMatmulPlan_t
The structure holds the matrix multiplication execution plan, namely all the information necessary to execute the cusparseLtMatmul() operation. It is initialized and destroyed with the cusparseLtMatmulPlanInit() and cusparseLtMatmulPlanDestroy() functions, respectively.
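A sketch of the plan lifecycle, assuming a recent cuSPARSELt release in which cusparseLtMatmulPlanInit() takes no explicit workspace-size argument; dA_compressed, dB, dC, and dD are assumed device buffers (the A operand already pruned and compressed):

```cpp
cusparseLtMatmulPlan_t plan;
cusparseLtMatmulPlanInit(&handle, &plan, &matmul, &algSel);

// Query and allocate the workspace the plan requires.
size_t workspaceSize = 0;
void*  dWorkspace    = nullptr;
cusparseLtMatmulGetWorkspace(&handle, &plan, &workspaceSize);
cudaMalloc(&dWorkspace, workspaceSize);

// D = alpha * op(A) * op(B) + beta * C
float alpha = 1.0f, beta = 0.0f;
cusparseLtMatmul(&handle, &plan, &alpha, dA_compressed, dB,
                 &beta, dC, dD, dWorkspace,
                 /*streams=*/nullptr, /*numStreams=*/0);

cusparseLtMatmulPlanDestroy(&plan);
cudaFree(dWorkspace);
```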
Enumerators
cusparseLtSparsity_t
The enumerator specifies the sparsity ratio of the structured matrix:

| Value | Description |
|---|---|
| CUSPARSELT_SPARSITY_50_PERCENT | 50% sparsity ratio (paired 4:8 or 1:2, depending on the input data type) |
The sparsity property is used in the cusparseLtStructuredDescriptorInit() function.
cusparseComputeType
The enumerator specifies the compute precision modes of the matrix multiplication:

| Value | Description |
|---|---|
| CUSPARSE_COMPUTE_32I | Element-wise multiplication of matrices A and B, and accumulation of the intermediate values, are performed with 32-bit integer precision. Alpha and beta coefficients and the epilogue are applied with single-precision floating point. Tensor Cores will be used whenever possible. |
| CUSPARSE_COMPUTE_32F | Element-wise multiplication of matrices A and B, and accumulation of the intermediate values, are performed with single-precision floating point. Alpha and beta coefficients and the epilogue are applied with single-precision floating point. Tensor Cores will be used whenever possible. |
| CUSPARSE_COMPUTE_16F | Element-wise multiplication of matrices A and B, and accumulation of the intermediate values, are performed with half-precision floating point. Alpha and beta coefficients and the epilogue are applied with single-precision floating point. Tensor Cores will be used whenever possible. |
The compute precision is used in the cusparseLtMatmulDescriptorInit() function.
cusparseLtMatDescAttribute_t
The enumerator specifies the additional attributes of a matrix descriptor:

| Value | Description |
|---|---|
| CUSPARSELT_MAT_NUM_BATCHES | Number of matrices in a batch |
| CUSPARSELT_MAT_BATCH_STRIDE | Stride between consecutive matrices in a batch, expressed in terms of matrix elements |
The matrix descriptor attribute enumerator is used in the cusparseLtMatDescSetAttribute() and cusparseLtMatDescGetAttribute() functions.
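For example, a hedged sketch of describing matrix A as a batch of 8 matrices; the attribute value types used here (int32_t count, int64_t stride in elements) are assumptions:

```cpp
// Mark matrix A as a batch of 8 matrices laid out back to back in memory.
int32_t numBatches  = 8;
int64_t batchStride = m * k;   // elements between consecutive A matrices (assumption)

cusparseLtMatDescSetAttribute(&handle, &matA, CUSPARSELT_MAT_NUM_BATCHES,
                              &numBatches, sizeof(numBatches));
cusparseLtMatDescSetAttribute(&handle, &matA, CUSPARSELT_MAT_BATCH_STRIDE,
                              &batchStride, sizeof(batchStride));
```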
cusparseLtMatmulDescAttribute_t
The enumerator specifies the additional attributes of a matrix multiplication descriptor:

| Value | Type | Default Value | Description |
|---|---|---|---|
| CUSPARSELT_MATMUL_ACTIVATION_RELU | | | ReLU activation function |
| CUSPARSELT_MATMUL_ACTIVATION_RELU_UPPERBOUND | | | Upper bound of the ReLU activation function |
| CUSPARSELT_MATMUL_ACTIVATION_RELU_THRESHOLD | | | Lower threshold of the ReLU activation function |
| CUSPARSELT_MATMUL_ACTIVATION_GELU | | | GeLU activation function |
| CUSPARSELT_MATMUL_ACTIVATION_GELU_SCALING | | | Scaling coefficient for the GeLU activation function. It implies CUSPARSELT_MATMUL_ACTIVATION_GELU |
| CUSPARSELT_MATMUL_ALPHA_VECTOR_SCALING | | | Enable/Disable alpha vector (per-channel) scaling |
| CUSPARSELT_MATMUL_BETA_VECTOR_SCALING | | | Enable/Disable beta vector (per-channel) scaling |
| CUSPARSELT_MATMUL_BIAS_POINTER | | | Bias pointer. The bias vector size must equal the number of rows of the output matrix (D). The data type of the bias vector is the same as that of matrix C, except for one input/output configuration in which a different bias data type is used |
| CUSPARSELT_MATMUL_BIAS_STRIDE | | | Bias stride between consecutive bias vectors |
| CUSPARSELT_MATMUL_SPARSE_MAT_POINTER | | | Pointer to the pruned sparse matrix |
| | | | Scaling mode that defines how the matrix scaling factor for matrix A is interpreted |
| | | | Scaling mode that defines how the matrix scaling factor for matrix B is interpreted |
| | | | Scaling mode that defines how the matrix scaling factor for matrix C is interpreted |
| | | | Scaling mode that defines how the matrix scaling factor for matrix D is interpreted |
| | | | Scaling mode that defines how the output matrix scaling factor for matrix D is interpreted |
| | | | Pointer to the scale factor value that converts data in matrix A to the compute data type range. The scaling factor must have the same type as the compute type. If not specified, the scaling factor is assumed to be 1 |
| | | | Equivalent to the matrix A scale factor pointer, but for matrix B |
| | | | Equivalent to the matrix A scale factor pointer, but for matrix C |
| | | | Equivalent to the matrix A scale factor pointer, but for matrix D |
| | | | Device pointer to the scale factors that are used to convert data in matrix D to the compute data type range. The scaling factor value type is defined by the scaling mode (see the scaling-mode attributes above) |
where the ReLU activation function is defined as:

ReLU(x) = 0 if x < threshold; x if threshold <= x <= upperbound; upperbound if x > upperbound

Setting CUSPARSELT_MATMUL_SPARSE_MAT_POINTER provides more flexibility for cusparseLtMatmulSearch() to select the best algorithm. The referenced memory cannot be modified until cusparseLtMatmulSearch() is called.
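A hedged sketch of setting a few of these attributes on the matmul descriptor via cusparseLtMatmulDescSetAttribute(); the value types used for each attribute (int for the on/off switch, float for the bound, a device pointer for the bias) are assumptions based on the attribute meanings:

```cpp
// Enable a bounded ReLU epilogue and attach a bias vector (one value per
// row of D). dBias stands in for a real device allocation.
int   reluOn    = 1;
float reluUpper = 6.0f;                 // clip activations at 6 (illustrative)
void* dBias     = /* device pointer */ nullptr;

cusparseLtMatmulDescSetAttribute(&handle, &matmul,
                                 CUSPARSELT_MATMUL_ACTIVATION_RELU,
                                 &reluOn, sizeof(reluOn));
cusparseLtMatmulDescSetAttribute(&handle, &matmul,
                                 CUSPARSELT_MATMUL_ACTIVATION_RELU_UPPERBOUND,
                                 &reluUpper, sizeof(reluUpper));
cusparseLtMatmulDescSetAttribute(&handle, &matmul,
                                 CUSPARSELT_MATMUL_BIAS_POINTER,
                                 &dBias, sizeof(dBias));
```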
cusparseLtMatmulAlg_t
The enumerator specifies the algorithm for matrix-matrix multiplication:

| Value | Description |
|---|---|
| CUSPARSELT_MATMUL_ALG_DEFAULT | Default algorithm |
The algorithm enumerator is used in the cusparseLtMatmulAlgSelectionInit() function.
cusparseLtMatmulAlgAttribute_t
The enumerator specifies the matrix multiplication algorithm attributes:

| Value | Description | Possible Values |
|---|---|---|
| CUSPARSELT_MATMUL_ALG_CONFIG_ID | Algorithm ID | [0, MAX) (see CUSPARSELT_MATMUL_ALG_CONFIG_MAX_ID) |
| CUSPARSELT_MATMUL_ALG_CONFIG_MAX_ID | Algorithm ID limit (query only) | |
| CUSPARSELT_MATMUL_SEARCH_ITERATIONS | Number of iterations (kernel launches per algorithm) for cusparseLtMatmulSearch() | > 0 (default = 5) |
| CUSPARSELT_MATMUL_SPLIT_K | Split-K factor (number of slices) | Architecture-dependent; see the note below |
| CUSPARSELT_MATMUL_SPLIT_K_MODE | Number of kernels for the Split-K algorithm | See cusparseLtSplitKMode_t |
| CUSPARSELT_MATMUL_SPLIT_K_BUFFERS | Device memory buffers to store partial results for the reduction | Architecture-dependent; see the note below |
The algorithm attribute enumerator is used in the cusparseLtMatmulAlgGetAttribute() and cusparseLtMatmulAlgSetAttribute() functions.

Split-K parameters allow users to split the GEMM computation along the K dimension so that more CTAs are created, improving SM utilization when the M or N dimensions are small. However, this comes at the cost of reducing the partial results of the K slices into the final result. The cusparseLtMatmulSearch() function can be used to find the optimal combination of Split-K parameters. Segment-K is a split-K method on SM 9.0 that uses warp-specialized persistent CTAs for better efficiency and replaces the traditional split-K method.

Because the validity of the split-K attributes CUSPARSELT_MATMUL_SPLIT_K, CUSPARSELT_MATMUL_SPLIT_K_MODE, and CUSPARSELT_MATMUL_SPLIT_K_BUFFERS varies across platforms, it is recommended to keep their default values in the absence of a priori knowledge. For optimal performance, users should invoke the auto-tuning API cusparseLtMatmulSearch() to determine the best algorithm and attributes.
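Continuing the running example, a sketch of auto-tuning the plan and then querying which algorithm configuration was selected; the search launches real kernels, so valid device buffers (dA_compressed, dB, dC, dD, dWorkspace) are assumed:

```cpp
// Auto-tune the plan over the available algorithm configurations.
cusparseLtMatmulSearch(&handle, &plan, &alpha, dA_compressed, dB,
                       &beta, dC, dD, dWorkspace,
                       /*streams=*/nullptr, /*numStreams=*/0);

// Query the algorithm ID that the search settled on.
int algId = -1;
cusparseLtMatmulAlgGetAttribute(&handle, &algSel,
                                CUSPARSELT_MATMUL_ALG_CONFIG_ID,
                                &algId, sizeof(algId));
```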
cusparseLtSplitKMode_t
The enumerator specifies the Split-K mode values corresponding to the CUSPARSELT_MATMUL_SPLIT_K_MODE attribute in cusparseLtMatmulAlgAttribute_t:
| Value | Description |
|---|---|
| CUSPARSELT_SPLIT_K_MODE_ONE_KERNEL | Use a single kernel for Split-K. It is the default value on pre-SM 9.0 architectures |
| CUSPARSELT_SPLIT_K_MODE_TWO_KERNELS | Use two kernels for Split-K: one GPU kernel to perform the GEMM and another to perform the final reduction. Valid on pre-SM 9.0 architectures |
| | Use split-K decomposition. Valid on SM 9.0 |
| | No splitting along the K dimension. Valid on SM 9.0 |
| | Use stream-K decomposition. Valid on SM 9.0 |
| | Use a heuristic to determine the decomposition mode. It is the default value on SM 9.0 |
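A hedged sketch of setting the Split-K attributes explicitly on a pre-SM 9.0 GPU (as noted above, relying on the defaults plus cusparseLtMatmulSearch() is usually preferable):

```cpp
// Request a split-K factor of 4 with the single-kernel mode. The int type
// for the split-K factor is an assumption; values are illustrative.
int                    splitK     = 4;
cusparseLtSplitKMode_t splitKMode = CUSPARSELT_SPLIT_K_MODE_ONE_KERNEL;

cusparseLtMatmulAlgSetAttribute(&handle, &algSel, CUSPARSELT_MATMUL_SPLIT_K,
                                &splitK, sizeof(splitK));
cusparseLtMatmulAlgSetAttribute(&handle, &algSel, CUSPARSELT_MATMUL_SPLIT_K_MODE,
                                &splitKMode, sizeof(splitKMode));
```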
cusparseLtPruneAlg_t
The enumerator specifies the pruning algorithm to apply to the structured matrix before the compression:

| Value | Description |
|---|---|
| CUSPARSELT_PRUNE_SPMMA_TILE | Tile-based pruning algorithm |
| CUSPARSELT_PRUNE_SPMMA_STRIP | Strip-based pruning algorithm. The strip direction is chosen according to the operation |
The pruning algorithm is used in the cusparseLtSpMMAPrune() function.
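Continuing the running example, a sketch of pruning the dense A operand in place and verifying the resulting pattern; dA is an assumed device buffer holding the dense A values:

```cpp
// Prune A in place so it satisfies the 50% structured-sparsity pattern
// described by the structured matrix descriptor, then verify the result.
cudaStream_t stream = nullptr;
cusparseLtSpMMAPrune(&handle, &matmul, dA, dA,
                     CUSPARSELT_PRUNE_SPMMA_TILE, stream);

int* dValid = nullptr;   // device flag written by the check (0 = valid pattern)
cudaMalloc(&dValid, sizeof(int));
cusparseLtSpMMAPruneCheck(&handle, &matmul, dA, dValid, stream);
```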
cusparseLtMatmulMatrixScale_t
The enumerator specifies the scaling mode that defines how the scaling factor pointers are interpreted:

| Value | Description |
|---|---|
| | Scaling is disabled. This is the default and the only valid value for matrices that are not using narrow data types |
| | Scaling factors are single-precision scalars applied to the whole matrix. This is the only valid value for E4M3 data |
| CUSPARSELT_MATMUL_MATRIX_SCALE_VEC32_UE4M3 | Scaling factors are tensors that contain a dedicated scaling factor stored as an 8-bit UE4M3 value |
| CUSPARSELT_MATMUL_MATRIX_SCALE_VEC64_UE8M0 | Scaling factors are tensors that contain a dedicated scaling factor stored as an 8-bit UE8M0 value |
cusparseLtMatmulMatrixScale_t is introduced for the narrow precisions (E4M3 and E2M1), whose data is scaled or dequantized before, and potentially quantized after, the computation. See 1D Block Scaling for FP8 and FP4 Data Types for more details. The translation from row and column indices to a linear offset, as well as the way multiple blocks are arranged, is the same as in cuBLASLt. The only difference is the block size: in cuSPARSELt a single tile of scaling factors is applied to a 128x128 block when the scaling mode is CUSPARSELT_MATMUL_MATRIX_SCALE_VEC32_UE4M3 and to a 128x256 block when it is CUSPARSELT_MATMUL_MATRIX_SCALE_VEC64_UE8M0.
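As a small worked example of the block sizes stated above, the number of scale-factor tiles needed to cover an M x N matrix can be computed with plain ceiling division (this is arithmetic derived from the text, assuming the matrix is padded up to whole blocks, not a cuSPARSELt API call):

```cpp
#include <cstdint>

static int64_t ceil_div(int64_t a, int64_t b) { return (a + b - 1) / b; }

// Number of 128x128 scale-factor tiles for CUSPARSELT_MATMUL_MATRIX_SCALE_VEC32_UE4M3.
int64_t tiles_vec32_ue4m3(int64_t rows, int64_t cols) {
    return ceil_div(rows, 128) * ceil_div(cols, 128);
}

// Number of 128x256 scale-factor tiles for CUSPARSELT_MATMUL_MATRIX_SCALE_VEC64_UE8M0.
int64_t tiles_vec64_ue8m0(int64_t rows, int64_t cols) {
    return ceil_div(rows, 128) * ceil_div(cols, 256);
}
```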