GLU3.0: Fast GPU-based Parallel Sparse LU Factorization for Circuit Simulation

1 Aug 2019 · Shaoyi Peng, Sheldon X.-D. Tan

LU factorization for sparse matrices is the most important computing step in many engineering and scientific computing problems such as circuit simulation. However, parallelizing LU factorization on Graphics Processing Units (GPUs) remains a challenging problem due to high data dependency and irregular memory accesses. Recently, a GPU-based hybrid right-looking sparse LU solver, called GLU (versions 1.0 and 2.0), was proposed to exploit the fine-grained parallelism of GPUs. However, a new type of data dependency introduced by GLU (called double-U dependency) slows down the preprocessing step. Furthermore, GLU uses a fixed GPU thread allocation strategy, which limits the parallelism. In this article, we propose a new GPU-based sparse LU factorization method, called GLU3.0, which solves the aforementioned problems. First, it introduces a much more efficient data dependency detection algorithm. Second, we observe that the potential parallelism changes as the matrix factorization progresses. We therefore develop three different GPU kernel modes that adapt to the different stages of the factorization to accommodate the changing computing tasks. Experimental results on circuit matrices from the University of Florida Sparse Matrix Collection (UFL) show that GLU3.0 delivers 2-3 orders of magnitude speedup over GLU2.0 for data dependency detection. Furthermore, GLU3.0 achieves 13.0$\times$ (arithmetic mean) or 6.7$\times$ (geometric mean) speedup over GLU2.0, and 7.1$\times$ (arithmetic mean) or 4.8$\times$ (geometric mean) over the recently proposed enhanced GLU2.0 sparse LU solver, on the same set of circuit matrices.
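The abstract only sketches the preprocessing step at a high level. The following is a minimal illustrative sketch of the generic level-set ("levelization") dependency detection that level-scheduled sparse LU solvers rely on; it is not the authors' GLU3.0 algorithm. It assumes the symbolic pattern of L+U is already available in compressed sparse column (CSC) form (the names `col_ptr` and `row_idx` are hypothetical), and that column j depends on column k < j whenever U(k, j) is structurally nonzero.

```cpp
// Generic level-set dependency detection for column-based sparse LU
// (illustrative sketch only; not the GLU3.0 algorithm).
#include <vector>
#include <algorithm>
#include <cstdio>

// Returns level[j] for every column. Columns that share a level have no
// dependency on each other and can be factorized in parallel, e.g. by one
// GPU kernel launch per level.
std::vector<int> levelize(int n,
                          const std::vector<int>& col_ptr,   // CSC column pointers of L+U pattern
                          const std::vector<int>& row_idx) { // CSC row indices of L+U pattern
    std::vector<int> level(n, 0);
    for (int j = 0; j < n; ++j) {
        int lvl = 0;
        // Scan the upper-triangular part of column j: a structural nonzero
        // U(k, j) with k < j means column k must be finished before column j.
        for (int p = col_ptr[j]; p < col_ptr[j + 1]; ++p) {
            int k = row_idx[p];
            if (k < j) lvl = std::max(lvl, level[k] + 1);
        }
        level[j] = lvl;
    }
    return level;
}

int main() {
    // Toy 4x4 pattern: column 2 depends on columns 0 and 1,
    // column 3 depends on column 2.
    std::vector<int> col_ptr = {0, 1, 2, 5, 7};
    std::vector<int> row_idx = {0, 1, 0, 1, 2, 2, 3};
    std::vector<int> level = levelize(4, col_ptr, row_idx);
    for (int j = 0; j < 4; ++j)
        std::printf("column %d -> level %d\n", j, level[j]);
    return 0;
}
```

In such a level-scheduled scheme, the number of columns per level typically shrinks as factorization proceeds, which is one plausible reason a solver would switch between different GPU kernel modes at different stages, as the abstract describes.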


Categories


Distributed, Parallel, and Cluster Computing
Data Structures and Algorithms
