SparseZipper: Enhancing Matrix Extensions to Accelerate SpGEMM on CPUs
Abstract: The importance of general matrix multiplication (GEMM) is motivating new instruction set extensions for multiplying dense matrices in almost all contemporary ISAs, and these extensions are often implemented using high-performance systolic arrays. However, matrices in emerging workloads are not always dense, and sparse matrices, in which the vast majority of values are zeros, are becoming more common. Existing matrix extensions and micro-architectures cannot efficiently process highly sparse matrices for two reasons: (1) work is wasted whenever one or both input values are zero; and (2) they are incompatible with sparse matrix formats. This work proposes SparseZipper, which minimally modifies existing matrix extensions and systolic-array-based micro-architectures specialized for dense-dense GEMM to accelerate sparse-sparse matrix multiplication (SpGEMM) on highly sparse matrices with unstructured sparsity. Our performance evaluation shows SparseZipper achieves 5.98x and 2.61x speedup over a scalar hash-based implementation of SpGEMM and a state-of-the-art vectorized SpGEMM version, respectively. Our component-level area evaluation shows SparseZipper increases the area of a baseline 16x16 systolic array by only 12.7%, resulting in an area overhead for an entire system-on-chip of just a few percent.
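The abstract's scalar baseline is a hash-based SpGEMM. As a rough illustration of what such a baseline looks like (this sketch is not from the paper; the function name and CSR tuple layout are assumptions), the classic row-by-row formulation accumulates each output row of C = A * B in a hash table keyed by column index, so only nonzero products are ever computed:

```python
# Illustrative sketch of a scalar hash-based SpGEMM (Gustavson-style),
# NOT the paper's implementation. Matrices are in CSR format, given as
# three parallel lists: row pointers, column indices, and values.

def spgemm_hash(a_ptr, a_col, a_val, b_ptr, b_col, b_val):
    """Compute C = A * B in CSR, one output row at a time.

    For row i of A, each nonzero A[i,k] scales row k of B; partial
    sums are merged in a dict keyed by output column index j.
    """
    c_ptr, c_col, c_val = [0], [], []
    n_rows = len(a_ptr) - 1
    for i in range(n_rows):
        acc = {}  # column index j -> partial sum for C[i, j]
        for jj in range(a_ptr[i], a_ptr[i + 1]):
            k, a_ik = a_col[jj], a_val[jj]
            for kk in range(b_ptr[k], b_ptr[k + 1]):
                j = b_col[kk]
                acc[j] = acc.get(j, 0.0) + a_ik * b_val[kk]
        for j in sorted(acc):  # emit row i in column order
            c_col.append(j)
            c_val.append(acc[j])
        c_ptr.append(len(c_col))
    return c_ptr, c_col, c_val


# Example: A = [[1, 0], [0, 2]], B = [[0, 3], [4, 0]] in CSR.
c_ptr, c_col, c_val = spgemm_hash(
    [0, 1, 2], [0, 1], [1.0, 2.0],   # A
    [0, 1, 2], [1, 0], [3.0, 4.0],   # B
)
# C = [[0, 3], [8, 0]] -> c_ptr=[0, 1, 2], c_col=[1, 0], c_val=[3.0, 8.0]
```

The irregular, data-dependent hash probes and short inner loops in this pattern are exactly what makes SpGEMM a poor fit for dense systolic arrays, which is the gap the paper targets.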