cusolverMp aims to provide GPU-accelerated, ScaLAPACK-like tools for solving systems of linear equations and eigenvalue and singular value problems.
cusolverMp uses the 2D block-cyclic data layout for load balancing and to maximize compatibility with ScaLAPACK routines.
The library assumes that data resides in device memory. It is the developer's responsibility to allocate memory and to copy data between GPU memory and CPU memory using standard CUDA runtime API routines, such as cudaMalloc() and cudaMemcpy().
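A minimal sketch of this staging pattern is shown below. The function name `stage_local_block` and the dimensions `llda` (local leading dimension) and `loc_cols` (local column count) are illustrative placeholders, not cusolverMp API names; only the CUDA runtime calls are real. Error checking is omitted for brevity, and running this requires a CUDA-capable device.

```c
#include <cuda_runtime.h>
#include <stdint.h>

/* Hypothetical helper: copy this process's local block of a distributed
   matrix to the device, and back again after computation. */
void stage_local_block(double *h_A, int64_t llda, int64_t loc_cols) {
    size_t bytes = (size_t)llda * (size_t)loc_cols * sizeof(double);
    double *d_A = NULL;

    cudaMalloc((void **)&d_A, bytes);                     /* device allocation */
    cudaMemcpy(d_A, h_A, bytes, cudaMemcpyHostToDevice);  /* host -> device   */

    /* ... pass d_A to a cusolverMp routine here ... */

    cudaMemcpy(h_A, d_A, bytes, cudaMemcpyDeviceToHost);  /* device -> host   */
    cudaFree(d_A);
}
```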
cusolverMp is designed to bootstrap from the user's MPI communicator. The bootstrapped communicator is then used to initialize matrix descriptors and to provide basic synchronization mechanisms.
Currently, cusolverMp computational routines are blocking with respect to the host: once a routine returns control to the user, the result is available on the device with no further synchronization required. This constraint will be relaxed in future releases.
Data Layout of Local Matrices
cusolverMp assumes that local matrices are stored in column-major format.
cusolverMp’s workflow can be broken down as follows:
1. Bootstrap MPI communicator: cal_comm_create_distr().
2. Initialize the library handle: cusolverMpCreate().
3. Initialize grid descriptors: cusolverMpCreateDeviceGrid().
4. Initialize matrix descriptors: cusolverMpCreateMatrixDesc().
5. Query the host and device buffer sizes for a given routine.
6. Allocate host and device workspace buffers for a given routine.
7. Execute the routine to perform the desired computation.
8. Synchronize the local stream to make sure the result is available, if required: cal_stream_sync().
9. Deallocate host and device workspace.
10. Destroy matrix descriptors: cusolverMpDestroyMatrixDesc().
11. Destroy grid descriptors: cusolverMpDestroyGrid().
12. Destroy the cusolverMp library handle: cusolverMpDestroy().
13. Destroy the CAL library handle: cal_comm_destroy().
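The steps above can be sketched as a skeleton. This is not compilable code: argument lists marked `...` are elided because they depend on the routine, and the variable names, the choice of LU factorization (cusolverMpGetrf) as the example routine, and the exact parameter orders are assumptions; consult the API reference for the precise signatures.

```c
/* Skeleton of the cusolverMp workflow; "..." marks elided arguments. */
cal_comm_t         comm   = NULL;
cusolverMpHandle_t handle = NULL;

cal_comm_create_distr(/* MPI communicator, ranks, local device, ... */ ..., &comm); /*  1 */
cusolverMpCreate(&handle, ...);                                                     /*  2 */
cusolverMpCreateDeviceGrid(...);                                                    /*  3 */
cusolverMpCreateMatrixDesc(...);                                                    /*  4 */

size_t workDevSize = 0, workHostSize = 0;
cusolverMpGetrf_bufferSize(handle, ..., &workDevSize, &workHostSize);               /*  5 */

void *workDev = NULL, *workHost = NULL;
cudaMalloc(&workDev, workDevSize);                                                  /*  6 */
workHost = malloc(workHostSize);

cusolverMpGetrf(handle, ..., workDev, workDevSize, workHost, workHostSize, ...);    /*  7 */
cal_stream_sync(comm, stream);                                                      /*  8 */

cudaFree(workDev);                                                                  /*  9 */
free(workHost);
cusolverMpDestroyMatrixDesc(descA);                                                 /* 10 */
cusolverMpDestroyGrid(grid);                                                        /* 11 */
cusolverMpDestroy(handle);                                                          /* 12 */
cal_comm_destroy(comm);                                                             /* 13 */
```

The numbered comments map each call back to the workflow step above; note that setup and teardown are symmetric, with resources destroyed in reverse order of creation.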