cuSPARSELt Functions

Library Management Functions

cusparseLtInit

cusparseStatus_t
cusparseLtInit(cusparseLtHandle_t* handle)
The function initializes the cuSPARSELt library handle (cusparseLtHandle_t), which holds the cuSPARSELt library context. It allocates light hardware resources on the host and must be called prior to making any other cuSPARSELt library calls. Calling any cusparseLt function that uses cusparseLtHandle_t without a previous call to cusparseLtInit() will return an error.
The cuSPARSELt library context is tied to the current CUDA device. To use the library on multiple devices, one cuSPARSELt handle should be created for each device.

Parameter         Memory   In/Out   Description
handle            Host     OUT      cuSPARSELt library handle

See cusparseStatus_t for the description of the return status.


cusparseLtDestroy

cusparseStatus_t
cusparseLtDestroy(const cusparseLtHandle_t* handle)
The function releases hardware resources used by the cuSPARSELt library. This function is the last call with a particular handle to the cuSPARSELt library.
Calling any cusparseLt function which uses cusparseLtHandle_t after cusparseLtDestroy() will return an error.

Parameter         Memory   In/Out   Description
handle            Host     IN       cuSPARSELt library handle

See cusparseStatus_t for the description of the return status.
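
A minimal handle lifecycle sketch (assumptions: a single device and no other cuSPARSELt calls shown; error handling reduced to a status check):

#include <cusparseLt.h>
#include <cstdio>

int main() {
    cusparseLtHandle_t handle;
    cusparseStatus_t   status = cusparseLtInit(&handle);
    if (status != CUSPARSE_STATUS_SUCCESS) {
        std::printf("cusparseLtInit failed with status %d\n", (int)status);
        return 1;
    }
    // ... create matrix descriptors, plans, and call cusparseLtMatmul() here ...
    cusparseLtDestroy(&handle);   // last call with this handle
    return 0;
}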


Matrix Descriptor Functions

cusparseLtDenseDescriptorInit

cusparseStatus_t
cusparseLtDenseDescriptorInit(const cusparseLtHandle_t*  handle,
                              cusparseLtMatDescriptor_t* matDescr,
                              int64_t                    rows,
                              int64_t                    cols,
                              int64_t                    ld,
                              uint32_t                   alignment,
                              cudaDataType               valueType,
                              cusparseOrder_t            order)

The function initializes the descriptor of a dense matrix.

Parameter         Memory   In/Out   Description
handle            Host     IN       cuSPARSELt library handle
matDescr          Host     OUT      Dense matrix descriptor
rows              Host     IN       Number of rows
cols              Host     IN       Number of columns
ld                Host     IN       Leading dimension
alignment         Host     IN       Memory alignment in bytes
valueType         Host     IN       Data type of the matrix
order             Host     IN       Memory layout

Constraints:

  • valueType can be CUDA_R_16F, CUDA_R_16BF, CUDA_R_8I, or CUDA_R_32F.

  • rows, cols, and ld must be a multiple of

    • 16 if valueType is CUDA_R_8I

    • 8 if valueType is CUDA_R_16F or CUDA_R_16BF

    • 4 if valueType is CUDA_R_32F

  • The total size of the matrix cannot exceed:

    • 2^{32}-1 elements for CUDA_R_8I

    • 2^{31}-1 elements for CUDA_R_16F or CUDA_R_16BF

    • 2^{30}-1 elements for CUDA_R_32F

See cusparseStatus_t for the description of the return status.
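
As an illustration, a minimal sketch that builds a descriptor for a 1024 x 1024 row-major FP16 matrix (the dimensions, the 16-byte alignment, and the wrapper name make_dense_descr are assumptions chosen to satisfy the constraints above):

#include <cusparseLt.h>

// Assumes `handle` has already been initialized with cusparseLtInit().
cusparseStatus_t make_dense_descr(const cusparseLtHandle_t*  handle,
                                  cusparseLtMatDescriptor_t* matB) {
    constexpr int64_t  rows      = 1024;
    constexpr int64_t  cols      = 1024;
    constexpr int64_t  ld        = cols;   // row-major leading dimension
    constexpr uint32_t alignment = 16;     // 16-byte aligned device buffer
    return cusparseLtDenseDescriptorInit(handle, matB,
                                         rows, cols, ld, alignment,
                                         CUDA_R_16F, CUSPARSE_ORDER_ROW);
}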


cusparseLtStructuredDescriptorInit

cusparseStatus_t
cusparseLtStructuredDescriptorInit(const cusparseLtHandle_t*  handle,
                                   cusparseLtMatDescriptor_t* matDescr,
                                   int64_t                    rows,
                                   int64_t                    cols,
                                   int64_t                    ld,
                                   uint32_t                   alignment,
                                   cudaDataType               valueType,
                                   cusparseOrder_t            order,
                                   cusparseLtSparsity_t       sparsity)

The function initializes the descriptor of a structured matrix.

Parameter         Memory   In/Out   Description
handle            Host     IN       cuSPARSELt library handle
matDescr          Host     OUT      Structured matrix descriptor
rows              Host     IN       Number of rows
cols              Host     IN       Number of columns
ld                Host     IN       Leading dimension
alignment         Host     IN       Memory alignment in bytes
valueType         Host     IN       Data type of the matrix
order             Host     IN       Memory layout
sparsity          Host     IN       Matrix sparsity ratio

Constraints:

  • valueType can be CUDA_R_16F, CUDA_R_16BF, CUDA_R_8I, or CUDA_R_32F.

  • rows, cols, and ld must be a multiple of

    • 16 if valueType is CUDA_R_8I

    • 8 if valueType is CUDA_R_16F or CUDA_R_16BF

    • 4 if valueType is CUDA_R_32F

  • The total size of the matrix cannot exceed:

    • 2^{32}-1 elements for CUDA_R_8I

    • 2^{31}-1 elements for CUDA_R_16F or CUDA_R_16BF

    • 2^{30}-1 elements for CUDA_R_32F

Sparsity ratio:

Value                            Description
CUSPARSELT_SPARSITY_50_PERCENT   50% Sparsity Ratio

See cusparseStatus_t for the description of the return status.
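
A matching sketch for the structured (sparse) operand, identical to the dense case except for the trailing sparsity argument (names and sizes are assumptions):

#include <cusparseLt.h>

// Assumes `handle` has already been initialized with cusparseLtInit().
cusparseStatus_t make_structured_descr(const cusparseLtHandle_t*  handle,
                                       cusparseLtMatDescriptor_t* matA) {
    constexpr int64_t  rows      = 1024;
    constexpr int64_t  cols      = 1024;
    constexpr int64_t  ld        = cols;   // row-major leading dimension
    constexpr uint32_t alignment = 16;
    return cusparseLtStructuredDescriptorInit(handle, matA,
                                              rows, cols, ld, alignment,
                                              CUDA_R_16F, CUSPARSE_ORDER_ROW,
                                              CUSPARSELT_SPARSITY_50_PERCENT);
}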


cusparseLtMatDescriptorDestroy

cusparseStatus_t
cusparseLtMatDescriptorDestroy(const cusparseLtMatDescriptor_t* matDescr)
The function releases the resources used by an instance of a matrix descriptor. After this call, the matrix descriptor, the matmul descriptor, and the plan can no longer be used.

Parameter         Memory   In/Out   Description
matDescr          Host     IN       Matrix descriptor

See cusparseStatus_t for the description of the return status.


Matmul Functions

cusparseLtMatmulDescriptorInit

cusparseStatus_t
cusparseLtMatmulDescriptorInit(const cusparseLtHandle_t*        handle,
                               cusparseLtMatmulDescriptor_t*    matMulDescr,
                               cusparseOperation_t              opA,
                               cusparseOperation_t              opB,
                               const cusparseLtMatDescriptor_t* matA,
                               const cusparseLtMatDescriptor_t* matB,
                               const cusparseLtMatDescriptor_t* matC,
                               const cusparseLtMatDescriptor_t* matD,
                               cusparseComputeType              computeType)

The function initializes the matrix multiplication descriptor.

Parameter         Memory   In/Out   Description
handle            Host     IN       cuSPARSELt library handle
matMulDescr       Host     OUT      Matrix multiplication descriptor
opA               Host     IN       Operation applied to the matrix A
opB               Host     IN       Operation applied to the matrix B
matA              Host     IN       Structured or dense matrix descriptor A
matB              Host     IN       Structured or dense matrix descriptor B
matC              Host     IN       Dense matrix descriptor C
matD              Host     IN       Dense matrix descriptor D
computeType       Host     IN       Compute precision

The structured matrix descriptor can be used for matA or matB, but not both.

Data types supported:

Input         Output        Compute
CUDA_R_16F    CUDA_R_16F    CUSPARSE_COMPUTE_16F
CUDA_R_16BF   CUDA_R_16BF   CUSPARSE_COMPUTE_16F
CUDA_R_8I     CUDA_R_8I     CUSPARSE_COMPUTE_32I
CUDA_R_32F    CUDA_R_32F    CUSPARSE_COMPUTE_TF32_FAST
CUDA_R_32F    CUDA_R_32F    CUSPARSE_COMPUTE_TF32

Constraints:

  • Given A of size m \times k, B of size k \times n, and C of size m \times n (regardless of opA and opB), k must be a multiple of 32.

  • The CUDA_R_8I data type only supports the following combinations (the opposite operations if B is the structured matrix):

    • opA/opB = TN if the matrix orders are orderA/orderB = Col/Col

    • opA/opB = NT if the matrix orders are orderA/orderB = Row/Row

    • opA/opB = NN if the matrix orders are orderA/orderB = Row/Col

    • opA/opB = TT if the matrix orders are orderA/orderB = Col/Row

See cusparseStatus_t for the description of the return status.
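
A minimal sketch that ties the descriptors together for an FP16 problem with CUSPARSE_COMPUTE_16F (the wrapper name and the choice of non-transposed operands are assumptions; matA is the structured operand here):

#include <cusparseLt.h>

// Assumes matA (structured) and matB, matC, matD (dense) were initialized with
// the descriptor functions above, with k a multiple of 32.
cusparseStatus_t make_matmul_descr(const cusparseLtHandle_t*        handle,
                                   cusparseLtMatmulDescriptor_t*    matmulDescr,
                                   const cusparseLtMatDescriptor_t* matA,
                                   const cusparseLtMatDescriptor_t* matB,
                                   const cusparseLtMatDescriptor_t* matC,
                                   const cusparseLtMatDescriptor_t* matD) {
    return cusparseLtMatmulDescriptorInit(handle, matmulDescr,
                                          CUSPARSE_OPERATION_NON_TRANSPOSE,  // opA
                                          CUSPARSE_OPERATION_NON_TRANSPOSE,  // opB
                                          matA, matB, matC, matD,
                                          CUSPARSE_COMPUTE_16F);
}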


cusparseLtMatmulAlgSelectionInit

cusparseStatus_t
cusparseLtMatmulAlgSelectionInit(const cusparseLtHandle_t*           handle,
                                 cusparseLtMatmulAlgSelection_t*     algSelection,
                                 const cusparseLtMatmulDescriptor_t* matmulDescr,
                                 cusparseLtMatmulAlg_t               alg)

The function initializes the algorithm selection descriptor.

Parameter         Memory   In/Out   Description
handle            Host     IN       cuSPARSELt library handle
algSelection      Host     OUT      Algorithm selection descriptor
matmulDescr       Host     IN       Matrix multiplication descriptor
alg               Host     IN       Algorithm mode

See cusparseStatus_t for the description of the return status.
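
A minimal sketch that selects the default algorithm mode, CUSPARSELT_MATMUL_ALG_DEFAULT (the wrapper name is an assumption):

#include <cusparseLt.h>

// Assumes `handle` and `matmulDescr` were initialized as shown above.
cusparseStatus_t select_default_alg(const cusparseLtHandle_t*           handle,
                                    cusparseLtMatmulAlgSelection_t*     algSel,
                                    const cusparseLtMatmulDescriptor_t* matmulDescr) {
    return cusparseLtMatmulAlgSelectionInit(handle, algSel, matmulDescr,
                                            CUSPARSELT_MATMUL_ALG_DEFAULT);
}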


cusparseLtMatmulAlgSetAttribute

cusparseStatus_t
cusparseLtMatmulAlgSetAttribute(const cusparseLtHandle_t*       handle,
                                cusparseLtMatmulAlgSelection_t* algSelection,
                                cusparseLtMatmulAlgAttribute_t  attribute,
                                const void*                     data,
                                size_t                          dataSize)

The function sets the value of the specified attribute belonging to the algorithm selection descriptor.

Parameter         Memory   In/Out   Description
handle            Host     IN       cuSPARSELt library handle
algSelection      Host     OUT      Algorithm selection descriptor
attribute         Host     IN       The attribute that will be set by this function
data              Host     IN       Pointer to the value to which the specified attribute will be set
dataSize          Host     IN       Size in bytes of the attribute value used for verification

See cusparseStatus_t for the description of the return status.
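
A minimal sketch, assuming the CUSPARSELT_MATMUL_ALG_CONFIG_ID attribute and a 32-bit integer id, that pins the selection to one algorithm configuration:

#include <cusparseLt.h>
#include <cstdint>

// Assumes `handle` and `algSel` were initialized as shown above.
cusparseStatus_t force_alg_config(const cusparseLtHandle_t*       handle,
                                  cusparseLtMatmulAlgSelection_t* algSel) {
    int32_t alg_id = 0;   // hypothetical configuration id
    return cusparseLtMatmulAlgSetAttribute(handle, algSel,
                                           CUSPARSELT_MATMUL_ALG_CONFIG_ID,
                                           &alg_id, sizeof(alg_id));
}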


cusparseLtMatmulAlgGetAttribute

cusparseStatus_t
cusparseLtMatmulAlgGetAttribute(const cusparseLtHandle_t*             handle,
                                const cusparseLtMatmulAlgSelection_t* algSelection,
                                cusparseLtMatmulAlgAttribute_t        attribute,
                                void*                                 data,
                                size_t                                dataSize)

The function returns the value of the queried attribute belonging to the algorithm selection descriptor.

Parameter         Memory   In/Out   Description
handle            Host     IN       cuSPARSELt library handle
algSelection      Host     IN       Algorithm selection descriptor
attribute         Host     IN       The attribute that will be retrieved by this function
data              Host     OUT      Memory address containing the attribute value retrieved by this function
dataSize          Host     IN       Size in bytes of the attribute value used for verification

See cusparseStatus_t for the description of the return status.
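
A minimal sketch, assuming the CUSPARSELT_MATMUL_ALG_CONFIG_MAX_ID attribute, that queries the largest valid algorithm configuration id (the wrapper name is an assumption):

#include <cusparseLt.h>
#include <cstdint>

// Assumes `handle` and `algSel` were initialized as shown above.
cusparseStatus_t query_max_alg_id(const cusparseLtHandle_t*             handle,
                                  const cusparseLtMatmulAlgSelection_t* algSel,
                                  int32_t*                              maxId) {
    return cusparseLtMatmulAlgGetAttribute(handle, algSel,
                                           CUSPARSELT_MATMUL_ALG_CONFIG_MAX_ID,
                                           maxId, sizeof(*maxId));
}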


cusparseLtMatmulGetWorkspace

cusparseStatus_t
cusparseLtMatmulGetWorkspace(const cusparseLtHandle_t*             handle,
                             const cusparseLtMatmulAlgSelection_t* algSelection,
                             size_t*                               workspaceSize)

The function determines the required workspace size associated with the selected algorithm.

Parameter         Memory   In/Out   Description
handle            Host     IN       cuSPARSELt library handle
algSelection      Host     IN       Algorithm selection descriptor
workspaceSize     Host     OUT      Workspace size in bytes

See cusparseStatus_t for the description of the return status.
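
A minimal sketch that queries the workspace size and allocates the corresponding device buffer (the wrapper name is an assumption; error handling reduced to a status check):

#include <cusparseLt.h>
#include <cuda_runtime.h>

// Assumes `handle` and `algSel` were initialized as shown above.
cusparseStatus_t alloc_workspace(const cusparseLtHandle_t*             handle,
                                 const cusparseLtMatmulAlgSelection_t* algSel,
                                 void**                                d_workspace,
                                 size_t*                               workspaceSize) {
    cusparseStatus_t status =
        cusparseLtMatmulGetWorkspace(handle, algSel, workspaceSize);
    if (status != CUSPARSE_STATUS_SUCCESS)
        return status;
    *d_workspace = nullptr;
    if (*workspaceSize > 0)
        cudaMalloc(d_workspace, *workspaceSize);   // passed later to cusparseLtMatmul()
    return status;
}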


cusparseLtMatmulPlanInit

cusparseStatus_t
cusparseLtMatmulPlanInit(const cusparseLtHandle_t*             handle,
                         cusparseLtMatmulPlan_t*               plan,
                         const cusparseLtMatmulDescriptor_t*   matmulDescr,
                         const cusparseLtMatmulAlgSelection_t* algSelection,
                         size_t                                workspaceSize)

The function initializes the matrix multiplication plan.

Parameter         Memory   In/Out   Description
handle            Host     IN       cuSPARSELt library handle
plan              Host     OUT      Matrix multiplication plan
matmulDescr       Host     IN       Matrix multiplication descriptor
algSelection      Host     IN       Algorithm selection descriptor
workspaceSize     Host     IN       Workspace size in bytes

See cusparseStatus_t for the description of the return status.
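
A minimal sketch, assuming workspaceSize was obtained with cusparseLtMatmulGetWorkspace() as shown above (the wrapper name is an assumption):

#include <cusparseLt.h>

cusparseStatus_t make_plan(const cusparseLtHandle_t*             handle,
                           cusparseLtMatmulPlan_t*               plan,
                           const cusparseLtMatmulDescriptor_t*   matmulDescr,
                           const cusparseLtMatmulAlgSelection_t* algSel,
                           size_t                                workspaceSize) {
    // The plan bundles the matmul descriptor, the algorithm selection, and the
    // workspace size; release it with cusparseLtMatmulPlanDestroy() when done.
    return cusparseLtMatmulPlanInit(handle, plan, matmulDescr, algSel, workspaceSize);
}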


cusparseLtMatmulPlanDestroy

cusparseStatus_t
cusparseLtMatmulPlanDestroy(const cusparseLtMatmulPlan_t* plan)
The function releases the resources used by an instance of the matrix multiplication plan. This function is the last call with a specific plan instance.
Calling any cusparseLt function which uses cusparseLtMatmulPlan_t after cusparseLtMatmulPlanDestroy() will return an error.

Parameter         Memory   In/Out   Description
plan              Host     IN       Matrix multiplication plan

See cusparseStatus_t for the description of the return status.


cusparseLtMatmul

cusparseStatus_t
cusparseLtMatmul(const cusparseLtHandle_t*     handle,
                 const cusparseLtMatmulPlan_t* plan,
                 const void*                   alpha,
                 const void*                   d_A,
                 const void*                   d_B,
                 const void*                   beta,
                 const void*                   d_C,
                 void*                         d_D,
                 void*                         workspace,
                 cudaStream_t*                 streams,
                 int32_t                       numStreams)

The function computes the matrix multiplication of matrices A and B to produce the output matrix D, according to the following operation:

D = \alpha op(A) * op(B) + \beta op(C)

where A, B, and C are input matrices, and \alpha and \beta are input scalars.
Note: The function currently only supports the case where D has the same shape as C.

Parameter         Memory   In/Out   Description
handle            Host     IN       cuSPARSELt library handle
plan              Host     IN       Matrix multiplication plan
alpha             Host     IN       \alpha scalar used for multiplication (float data type)
d_A               Device   IN       Pointer to the structured or dense matrix A
d_B               Device   IN       Pointer to the structured or dense matrix B
beta              Host     IN       \beta scalar used for multiplication (float data type)
d_C               Device   IN       Pointer to the dense matrix C
d_D               Device   OUT      Pointer to the dense matrix D
workspace         Device   IN       Pointer to workspace
streams           Host     IN       Pointer to CUDA stream array for the computation
numStreams        Host     IN       Number of CUDA streams in streams

Data types supported:

Input         Output        Compute
CUDA_R_16F    CUDA_R_16F    CUSPARSE_COMPUTE_16F
CUDA_R_16BF   CUDA_R_16BF   CUSPARSE_COMPUTE_16F
CUDA_R_8I     CUDA_R_8I     CUSPARSE_COMPUTE_32I
CUDA_R_32F    CUDA_R_32F    CUSPARSE_COMPUTE_TF32_FAST
CUDA_R_32F    CUDA_R_32F    CUSPARSE_COMPUTE_TF32

  • CUSPARSE_COMPUTE_TF32 kernels perform the conversion from 32-bit IEEE754 floating-point to TensorFloat-32 by applying round toward plus infinity rounding mode before the computation.

  • CUSPARSE_COMPUTE_TF32_FAST kernels assume that the data are already represented in TensorFloat-32 (32 bits per value). If 32-bit IEEE754 floating-point values are used as input, they are truncated to TensorFloat-32 before the computation.

  • CUSPARSE_COMPUTE_TF32_FAST kernels provide better performance than CUSPARSE_COMPUTE_TF32 but could produce less accurate results.

The structured matrix A or B (compressed) must respect the following constraints, depending on the operation applied to it:

  • For op = CUSPARSE_OPERATION_NON_TRANSPOSE

    • CUDA_R_16F, CUDA_R_16BF, CUDA_R_8I: each row must have at least two zero values in every group of four consecutive elements

    • CUDA_R_32F: each row must have at least one zero value in every group of two consecutive elements

  • For op = CUSPARSE_OPERATION_TRANSPOSE

    • CUDA_R_16F, CUDA_R_16BF, CUDA_R_8I: each column must have at least two zero values in every group of four consecutive elements

    • CUDA_R_32F: each column must have at least one zero value in every group of two consecutive elements

The correctness of the pruning result (matrix A/B) can be checked with the function cusparseLtSpMMAPruneCheck().

Constraints:

  • All pointers must be aligned to 16 bytes

Properties

  • The routine requires no extra storage

  • The routine supports asynchronous execution with respect to streams[0]

  • Provides deterministic (bit-wise) results for each run

cusparseLtMatmul supports the following optimizations:

  • CUDA graph capture

  • Hardware Memory Compression

See cusparseStatus_t for the description of the return status.
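
A minimal call sketch, assuming the plan, device buffers, and workspace were prepared as described above (all device pointers 16-byte aligned; the wrapper name and scalar values are assumptions):

#include <cusparseLt.h>
#include <cuda_runtime.h>

// Assumes d_A is the compressed structured operand produced by
// cusparseLtSpMMACompress(), and d_B, d_C, d_D, d_workspace are device buffers.
cusparseStatus_t run_matmul(const cusparseLtHandle_t*     handle,
                            const cusparseLtMatmulPlan_t* plan,
                            const void* d_A, const void* d_B,
                            const void* d_C, void* d_D,
                            void* d_workspace, cudaStream_t stream) {
    float alpha = 1.0f;
    float beta  = 0.0f;
    // Single stream: the call is asynchronous with respect to streams[0].
    return cusparseLtMatmul(handle, plan, &alpha, d_A, d_B, &beta, d_C, d_D,
                            d_workspace, &stream, 1);
}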


cusparseLtMatmulSearch

cusparseStatus_t
cusparseLtMatmulSearch(const cusparseLtHandle_t* handle,
                       cusparseLtMatmulPlan_t*   plan,
                       const void*               alpha,
                       const void*               d_A,
                       const void*               d_B,
                       const void*               beta,
                       const void*               d_C,
                       void*                     d_D,
                       void*                     workspace,
                       cudaStream_t*             streams,
                       int32_t                   numStreams)
The function evaluates all available algorithms for the matrix multiplication and automatically updates the plan by selecting the fastest one. The functionality is intended to be used for auto-tuning purposes when the same operation is repeated multiple times over different inputs.
The function behavior is the same as cusparseLtMatmul().
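
A sketch of the intended auto-tuning pattern: run the search once on representative inputs, then reuse the updated plan for the repeated cusparseLtMatmul() calls (buffer names and the wrapper are assumptions):

#include <cusparseLt.h>
#include <cuda_runtime.h>

cusparseStatus_t tune_and_run(const cusparseLtHandle_t* handle,
                              cusparseLtMatmulPlan_t*   plan,
                              const void* d_A, const void* d_B,
                              const void* d_C, void* d_D,
                              void* d_workspace, cudaStream_t stream) {
    float alpha = 1.0f, beta = 0.0f;
    // One-time search: evaluates all algorithms and updates the plan in place.
    cusparseStatus_t status =
        cusparseLtMatmulSearch(handle, plan, &alpha, d_A, d_B, &beta, d_C, d_D,
                               d_workspace, &stream, 1);
    if (status != CUSPARSE_STATUS_SUCCESS)
        return status;
    // Subsequent calls reuse the tuned plan.
    return cusparseLtMatmul(handle, plan, &alpha, d_A, d_B, &beta, d_C, d_D,
                            d_workspace, &stream, 1);
}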

Helper Functions

cusparseLtSpMMAPrune

cusparseStatus_t
cusparseLtSpMMAPrune(const cusparseLtHandle_t*           handle,
                     const cusparseLtMatmulDescriptor_t* matmulDescr,
                     const void*                         d_in,
                     void*                               d_out,
                     cusparseLtPruneAlg_t                pruneAlg,
                     cudaStream_t                        stream)

The function prunes a dense matrix d_in according to the specified algorithm pruneAlg.

Parameter         Memory   In/Out   Description
handle            Host     IN       cuSPARSELt library handle
matmulDescr       Host     IN       Matrix multiplication descriptor
d_in              Device   IN       Pointer to the dense matrix
d_out             Device   OUT      Pointer to the pruned matrix
pruneAlg          Host     IN       Pruning algorithm
stream            Host     IN       CUDA stream for the computation

Properties

  • The routine requires no extra storage

  • The routine supports asynchronous execution with respect to stream

  • Provides deterministic (bit-wise) results for each run

cusparseLtSpMMAPrune supports the following optimizations:

  • CUDA graph capture

  • Hardware Memory Compression

See cusparseStatus_t for the description of the return status.
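
A minimal pruning sketch using the tile pruning algorithm (CUSPARSELT_PRUNE_SPMMA_TILE); a separate output buffer is used, and the wrapper name is an assumption:

#include <cusparseLt.h>
#include <cuda_runtime.h>

// Assumes `matmulDescr` is the matmul descriptor and d_dense / d_pruned are
// device buffers sized for the structured operand.
cusparseStatus_t prune_operand(const cusparseLtHandle_t*           handle,
                               const cusparseLtMatmulDescriptor_t* matmulDescr,
                               const void*                         d_dense,
                               void*                               d_pruned,
                               cudaStream_t                        stream) {
    return cusparseLtSpMMAPrune(handle, matmulDescr, d_dense, d_pruned,
                                CUSPARSELT_PRUNE_SPMMA_TILE, stream);
}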


cusparseLtSpMMAPrune2

cusparseStatus_t
cusparseLtSpMMAPrune2(const cusparseLtHandle_t*        handle,
                      const cusparseLtMatDescriptor_t* sparseMatDescr,
                      int                              isSparseA,
                      cusparseOperation_t              op,
                      const void*                      d_in,
                      void*                            d_out,
                      cusparseLtPruneAlg_t             pruneAlg,
                      cudaStream_t                     stream);

The function prunes a dense matrix d_in according to the specified algorithm pruneAlg.

Parameter         Memory   In/Out   Description
handle            Host     IN       cuSPARSELt library handle
sparseMatDescr    Host     IN       Structured (sparse) matrix descriptor
isSparseA         Host     IN       Specifies whether the structured (sparse) matrix is in the first position (matA) or in the second position (matB)
op                Host     IN       Operation that will be applied to the structured (sparse) matrix in the multiplication
d_in              Device   IN       Pointer to the dense matrix
d_out             Device   OUT      Pointer to the pruned matrix
pruneAlg          Host     IN       Pruning algorithm
stream            Host     IN       CUDA stream for the computation

The function has the same properties as cusparseLtSpMMAPrune().


cusparseLtSpMMAPruneCheck

cusparseStatus_t
cusparseLtSpMMAPruneCheck(const cusparseLtHandle_t*           handle,
                          const cusparseLtMatmulDescriptor_t* matmulDescr,
                          const void*                         d_in,
                          int*                                d_valid,
                          cudaStream_t                        stream)

The function checks the correctness of the pruning structure for a given matrix.

Parameter         Memory   In/Out   Description
handle            Host     IN       cuSPARSELt library handle
matmulDescr       Host     IN       Matrix multiplication descriptor
d_in              Device   IN       Pointer to the matrix to check
d_valid           Device   OUT      Validation results (0 correct, 1 wrong)
stream            Host     IN       CUDA stream for the computation

See cusparseStatus_t for the description of the return status.
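
A minimal check sketch: d_valid is a single device int that is copied back to the host after the (asynchronous) check completes (the wrapper name is an assumption; CUDA error checking omitted):

#include <cusparseLt.h>
#include <cuda_runtime.h>

// Returns true when the pruned matrix d_pruned satisfies the sparsity pattern.
bool check_pruning(const cusparseLtHandle_t*           handle,
                   const cusparseLtMatmulDescriptor_t* matmulDescr,
                   const void*                         d_pruned,
                   cudaStream_t                        stream) {
    int* d_valid = nullptr;
    cudaMalloc(&d_valid, sizeof(int));
    cusparseLtSpMMAPruneCheck(handle, matmulDescr, d_pruned, d_valid, stream);
    int is_valid = 1;
    cudaMemcpyAsync(&is_valid, d_valid, sizeof(int),
                    cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);   // wait for the check and the copy to finish
    cudaFree(d_valid);
    return is_valid == 0;            // 0 = correct, 1 = wrong
}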


cusparseLtSpMMAPruneCheck2

cusparseStatus_t
cusparseLtSpMMAPruneCheck2(const cusparseLtHandle_t*        handle,
                           const cusparseLtMatDescriptor_t* sparseMatDescr,
                           int                              isSparseA,
                           cusparseOperation_t              op,
                           const void*                      d_in,
                           int*                             d_valid,
                           cudaStream_t                     stream)

The function checks the correctness of the pruning structure for a given matrix.

Parameter         Memory   In/Out   Description
handle            Host     IN       cuSPARSELt library handle
sparseMatDescr    Host     IN       Structured (sparse) matrix descriptor
isSparseA         Host     IN       Specifies whether the structured (sparse) matrix is in the first position (matA) or in the second position (matB)
op                Host     IN       Operation that will be applied to the structured (sparse) matrix in the multiplication
d_in              Device   IN       Pointer to the matrix to check
d_valid           Device   OUT      Validation results (0 correct, 1 wrong)
stream            Host     IN       CUDA stream for the computation

The function has the same properties as cusparseLtSpMMAPruneCheck().


cusparseLtSpMMACompressedSize

cusparseStatus_t
cusparseLtSpMMACompressedSize(const cusparseLtHandle_t*     handle,
                              const cusparseLtMatmulPlan_t* plan,
                              size_t*                       compressedSize)

The function provides the size of the compressed matrix to be allocated before calling cusparseLtSpMMACompress().

Parameter         Memory   In/Out   Description
handle            Host     IN       cuSPARSELt library handle
plan              Host     IN       Matrix multiplication plan
compressedSize    Host     OUT      Size in bytes of the compressed matrix

See cusparseStatus_t for the description of the return status.
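
A minimal sketch that queries the compressed size and allocates the device buffer that cusparseLtSpMMACompress() will write into (the wrapper name is an assumption; error handling reduced to a status check):

#include <cusparseLt.h>
#include <cuda_runtime.h>

cusparseStatus_t alloc_compressed(const cusparseLtHandle_t*     handle,
                                  const cusparseLtMatmulPlan_t* plan,
                                  void**                        d_compressed) {
    size_t compressedSize = 0;
    cusparseStatus_t status =
        cusparseLtSpMMACompressedSize(handle, plan, &compressedSize);
    if (status != CUSPARSE_STATUS_SUCCESS)
        return status;
    cudaMalloc(d_compressed, compressedSize);   // buffer for the compressed operand
    return status;
}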


cusparseLtSpMMACompressedSize2

cusparseStatus_t
cusparseLtSpMMACompressedSize2(const cusparseLtHandle_t*        handle,
                               const cusparseLtMatDescriptor_t* sparseMatDescr,
                               size_t*                          compressedSize)

The function provides the size of the compressed matrix to be allocated before calling cusparseLtSpMMACompress2().

Parameter         Memory   In/Out   Description
handle            Host     IN       cuSPARSELt library handle
sparseMatDescr    Host     IN       Structured (sparse) matrix descriptor
compressedSize    Host     OUT      Size in bytes of the compressed matrix

The function has the same properties as cusparseLtSpMMACompressedSize().


cusparseLtSpMMACompress

cusparseStatus_t
cusparseLtSpMMACompress(const cusparseLtHandle_t*     handle,
                        const cusparseLtMatmulPlan_t* plan,
                        const void*                   d_dense,
                        void*                         d_compressed,
                        cudaStream_t                  stream)

The function compresses a dense matrix d_dense. The compressed matrix is intended to be used as the first/second operand A/B in the cusparseLtMatmul() function.

Parameter         Memory   In/Out   Description
handle            Host     IN       cuSPARSELt library handle
plan              Host     IN       Matrix multiplication plan
d_dense           Device   IN       Pointer to the dense matrix
d_compressed      Device   OUT      Pointer to the compressed matrix
stream            Host     IN       CUDA stream for the computation

Properties

  • The routine requires no extra storage

  • The routine supports asynchronous execution with respect to stream

  • Provides deterministic (bit-wise) results for each run

cusparseLtSpMMACompress supports the following optimizations:

  • CUDA graph capture

  • Hardware Memory Compression

See cusparseStatus_t for the description of the return status.
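
A minimal compression sketch: the pruned matrix is compressed into the buffer sized with cusparseLtSpMMACompressedSize(), and the result is what gets passed as d_A (or d_B) to cusparseLtMatmul() (the wrapper name is an assumption):

#include <cusparseLt.h>
#include <cuda_runtime.h>

// Assumes d_pruned passed cusparseLtSpMMAPruneCheck() and d_compressed was
// allocated with the size returned by cusparseLtSpMMACompressedSize().
cusparseStatus_t compress_operand(const cusparseLtHandle_t*     handle,
                                  const cusparseLtMatmulPlan_t* plan,
                                  const void*                   d_pruned,
                                  void*                         d_compressed,
                                  cudaStream_t                  stream) {
    return cusparseLtSpMMACompress(handle, plan, d_pruned, d_compressed, stream);
}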


cusparseLtSpMMACompress2

cusparseStatus_t
cusparseLtSpMMACompress2(const cusparseLtHandle_t*        handle,
                         const cusparseLtMatDescriptor_t* sparseMatDescr,
                         int                              isSparseA,
                         cusparseOperation_t              op,
                         const void*                      d_dense,
                         void*                            d_compressed,
                         cudaStream_t                     stream)

The function compresses a dense matrix d_dense. The compressed matrix is intended to be used as the first/second operand A/B in the cusparseLtMatmul() function.

Parameter         Memory   In/Out   Description
handle            Host     IN       cuSPARSELt library handle
sparseMatDescr    Host     IN       Structured (sparse) matrix descriptor
isSparseA         Host     IN       Specifies whether the structured (sparse) matrix is in the first position (matA) or in the second position (matB)
op                Host     IN       Operation that will be applied to the structured (sparse) matrix in the multiplication
d_dense           Device   IN       Pointer to the dense matrix
d_compressed      Device   OUT      Pointer to the compressed matrix
stream            Host     IN       CUDA stream for the computation

The function has the same properties as cusparseLtSpMMACompress().