Getting Started

Introduction

cuSOLVERMp aims to provide GPU-accelerated, ScaLAPACK-like tools for solving systems of linear equations, eigenvalue problems, and singular value problems.
cuSOLVERMp leverages the 2D block-cyclic data layout for load balancing and to maximize compatibility with ScaLAPACK routines.
The library assumes that data is available in device memory. It is the responsibility of the developer to allocate memory and to copy data between GPU memory and CPU memory using standard CUDA runtime API routines, such as cudaMalloc(), cudaFree(), cudaMemcpy(), and cudaMemcpyAsync().
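
As an illustration of this host-managed memory model, the following sketch (a hypothetical helper, not part of the library) allocates one process's local portion of a distributed matrix on the device and uploads it from host memory using the CUDA runtime calls named above:

    #include <cuda_runtime.h>
    #include <stdint.h>

    /* Allocate this process's local tile of the distributed matrix on the
     * device and upload it from host memory. localRows x localCols is the
     * size of the local matrix owned by this rank. */
    double *upload_local_matrix(const double *h_A,
                                int64_t localRows,
                                int64_t localCols,
                                cudaStream_t stream)
    {
        double *d_A = NULL;
        const size_t bytes = (size_t)localRows * (size_t)localCols * sizeof(double);

        cudaMalloc((void **)&d_A, bytes);
        cudaMemcpyAsync(d_A, h_A, bytes, cudaMemcpyHostToDevice, stream);
        cudaStreamSynchronize(stream); /* make sure the upload has completed */

        return d_A; /* released later with cudaFree(d_A) */
    }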

Synchronous Execution

Currently, cuSOLVERMp computational routines are blocking with respect to the host: once a routine finishes, it returns control to the caller and the result is available on the device without any further synchronization. This constraint will be relaxed in future releases.
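
For example, in the minimal sketch below (assuming d_out already holds the output of a cuSOLVERMp call issued on the handle's stream), copying the result back to the host needs no solver-specific synchronization, only the usual wait on the copy itself:

    #include <cuda_runtime.h>

    /* Because the computational routine blocked the host, d_out already
     * contains the result when this function is called; only the
     * device-to-host copy itself needs to be waited on. */
    void fetch_result(double *h_out, const double *d_out, size_t bytes,
                      cudaStream_t stream)
    {
        cudaMemcpyAsync(h_out, d_out, bytes, cudaMemcpyDeviceToHost, stream);
        cudaStreamSynchronize(stream);
    }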

Data Layout of Local Matrices

cuSOLVERMp assumes that local matrices are stored in column-major format.
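
Concretely, element (i, j) of a local matrix with local leading dimension lld (at least the number of local rows) is stored at offset i + j * lld, as in this small illustrative accessor:

    #include <stdint.h>

    /* Column-major addressing: consecutive elements of a column are
     * contiguous in memory, and columns are separated by lld elements. */
    static inline double local_entry(const double *A, int64_t lld,
                                     int64_t i, int64_t j)
    {
        return A[i + j * lld];
    }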

Workflow

cuSOLVERMp’s workflow can be broken down as follows:
1. Bootstrap the CAL communicator: cal_comm_create().
2. Initialize the library handle: cusolverMpCreate().
3. Initialize grid descriptors: cusolverMpCreateDeviceGrid().
4. Initialize matrix descriptors: cusolverMpCreateMatrixDesc().
5. Query the host and device workspace buffer sizes for a given routine.
6. Allocate the host and device workspace buffers for the routine.
7. Execute the routine to perform the desired computation.
8. Synchronize the local stream to make sure the result is available, if required: cal_stream_sync().
9. Deallocate the host and device workspaces.
10. Destroy matrix descriptors: cusolverMpDestroyMatrixDesc().
11. Destroy grid descriptors: cusolverMpDestroyGrid().
12. Destroy the cuSOLVERMp library handle: cusolverMpDestroy().
13. Destroy the CAL communicator: cal_comm_destroy().
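
The skeleton below is a minimal sketch of how these steps string together for one process. It assumes the grid and descriptor types and argument orders used in the public samples (cudaLibMpGrid_t, cudaLibMpMatrixDesc_t, CUDALIBMP_GRID_MAPPING_COL_MAJOR), which may differ between releases, so check them against the cusolverMp.h header. Error checking, the MPI/CAL bootstrapping of step 1, and the routine-specific workspace and execution calls of steps 5 through 7 are omitted.

    #include <cuda_runtime.h>
    #include <cal.h>
    #include <cusolverMp.h>

    /* comm:        CAL communicator created in step 1 (cal_comm_create())
     * localDevice: CUDA device owned by this rank
     * nprow/npcol: dimensions of the 2D process grid
     * M, N:        global matrix dimensions
     * MB, NB:      block-cyclic tile dimensions
     * lld:         local leading dimension (>= number of local rows)      */
    void workflow_sketch(cal_comm_t comm, int localDevice,
                         int nprow, int npcol,
                         int64_t M, int64_t N, int64_t MB, int64_t NB,
                         int64_t lld)
    {
        /* 2. Library handle bound to this rank's device and stream. */
        cudaStream_t stream = NULL;
        cudaSetDevice(localDevice);
        cudaStreamCreate(&stream);

        cusolverMpHandle_t handle = NULL;
        cusolverMpCreate(&handle, localDevice, stream);

        /* 3. 2D process grid that hosts the block-cyclic layout. */
        cudaLibMpGrid_t grid = NULL;
        cusolverMpCreateDeviceGrid(handle, &grid, comm, nprow, npcol,
                                   CUDALIBMP_GRID_MAPPING_COL_MAJOR);

        /* 4. Descriptor of the distributed matrix
         *    (first block owned by process (0, 0)). */
        cudaLibMpMatrixDesc_t descA = NULL;
        cusolverMpCreateMatrixDesc(&descA, grid, CUDA_R_64F,
                                   M, N, MB, NB, 0, 0, lld);

        /* 5-7. Query workspace sizes with the routine's _bufferSize variant,
         *      allocate the device and host workspaces, then execute the
         *      routine itself (arguments are routine specific).            */

        /* 8. Make sure the result on the device is visible, if required. */
        cal_stream_sync(comm, stream);

        /* 9-13. Deallocate workspaces, then tear down in reverse order. */
        cusolverMpDestroyMatrixDesc(descA);
        cusolverMpDestroyGrid(grid);
        cusolverMpDestroy(handle);
        cudaStreamDestroy(stream);
        cal_comm_destroy(comm);
    }

A complete, buildable version of this flow, including the CAL bootstrapping over MPI, is available in the samples referenced below.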

Code Samples

Code samples can be found in the CUDALibrarySamples repository.