Compressing rank-structured matrices via randomized sampling (1503.07152v1)
Abstract: Randomized sampling has recently proven to be a highly efficient technique for computing approximate factorizations of matrices that have low numerical rank. This paper describes an extension of such techniques to a wider class of matrices that are not themselves rank-deficient, but have off-diagonal blocks that are; specifically, the classes of so-called \textit{Hierarchically Off-Diagonal Low Rank (HODLR)} matrices and \textit{Hierarchically Block Separable (HBS)} matrices. Such matrices arise frequently in numerical analysis and signal processing, in particular in the construction of fast methods for solving differential and integral equations numerically. These structures allow algebraic operations (matrix-vector multiplications, matrix factorizations, matrix inversion, etc.) to be performed very rapidly, but only once a data-sparse representation of the matrix has been constructed. How to rapidly compute this representation in the first place is much less well understood. The present paper demonstrates that if an $N\times N$ matrix can be applied to a vector in $O(N)$ time, and if the ranks of the off-diagonal blocks are bounded by an integer $k$, then the cost for constructing a HODLR representation is $O(k^{2}\,N\,(\log N)^{2})$, and the cost for constructing an HBS representation is $O(k^{2}\,N\,\log N)$. The point is that when legacy codes (based on, e.g., the Fast Multipole Method) can be used for the fast matrix-vector multiply, the proposed algorithm can be used to obtain the data-sparse representation of the matrix, and then well-established techniques for HODLR/HBS matrices can be used to invert or factor the matrix. The proposed scheme is also useful in simplifying the implementation of certain operations on rank-structured matrices such as matrix-matrix multiplication, low-rank update, addition, etc.
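The core primitive the abstract alludes to — recovering a low-rank factorization of a matrix accessed only through black-box matrix-vector products — can be sketched as follows. This is a generic randomized range-finder in the Halko–Martinsson–Tropp style applied to a whole low-rank matrix, not the paper's actual HODLR/HBS compression algorithm (which applies such sampling to the off-diagonal blocks of a full-rank matrix); the function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def randomized_lowrank(apply_A, apply_At, n, k, p=10):
    """Approximate A ≈ Q @ B with Q having k+p orthonormal columns,
    using only matvec access to A and its transpose."""
    rng = np.random.default_rng(0)
    G = rng.standard_normal((n, k + p))  # Gaussian test matrix
    Y = apply_A(G)                       # sample the range: Y = A @ G
    Q, _ = np.linalg.qr(Y)               # orthonormal basis for range(Y)
    B = apply_At(Q).T                    # B = Q^T A via transpose matvecs
    return Q, B

# Example: compress a matrix of exact rank 15.
rng = np.random.default_rng(1)
n, k = 200, 15
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))
Q, B = randomized_lowrank(lambda X: A @ X, lambda X: A.T @ X, n, k)
rel_err = np.linalg.norm(A - Q @ B) / np.linalg.norm(A)
```

With `k + p` samples exceeding the exact rank, the reconstruction error is at the level of floating-point roundoff; for matrices with slowly decaying singular values, the oversampling parameter `p` controls the (probabilistic) accuracy.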