Sparse Matrix to Matrix Multiplication: A Representation and Architecture for Acceleration (long version) (1906.00327v1)

Published 2 Jun 2019 in cs.AR

Abstract: Accelerators for sparse matrix multiplication are important components in emerging systems. In this paper, we study the main challenges of accelerating Sparse Matrix Multiplication (SpMM). For situations where the data is not stored in the desired (row/column) order, we propose a compact, high-performance sparse format that allows random access to a dataset with low memory-access overhead. We show that using this format results in a 14-49 times speedup for SpMM. Next, we propose a high-performance systolic architecture for SpMM, which uses a mesh of comparators to locate the useful (non-zero) computations. This design maximizes data reuse by sharing the input data across each row/column of the mesh. We also show that, under similar memory-access assumptions, the proposed architecture achieves a 9-30 times speedup over the state of the art.
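
The abstract does not spell out the proposed format or the mesh's dataflow, so the sketch below is only a generic point of reference: a minimal SpMM kernel in C over the standard CSR (compressed sparse row) format, the kind of software baseline such accelerators are measured against. The function name spmm_csr and the data layout are illustrative assumptions, not the paper's design.

#include <stdio.h>

/* Minimal CSR SpMM baseline: C = A * B, where A (m x k) is sparse in CSR
 * and B (k x n), C (m x n) are dense, row-major.
 * Generic reference kernel; NOT the format proposed in the paper. */
void spmm_csr(int m, int n,
              const int *row_ptr, const int *col_idx, const double *val,
              const double *B, double *C)
{
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < n; j++)
            C[i * n + j] = 0.0;
        /* Only the stored non-zeros of row i contribute to row i of C. */
        for (int p = row_ptr[i]; p < row_ptr[i + 1]; p++) {
            int k = col_idx[p];      /* column index of non-zero A[i][k] */
            double a = val[p];
            for (int j = 0; j < n; j++)
                C[i * n + j] += a * B[k * n + j];
        }
    }
}

int main(void)
{
    /* A = [[1 0 2], [0 3 0]] in CSR; B is a 3x2 dense matrix. */
    int row_ptr[] = {0, 2, 3};
    int col_idx[] = {0, 2, 1};
    double val[]  = {1.0, 2.0, 3.0};
    double B[] = {1, 2,
                  3, 4,
                  5, 6};
    double C[4];
    spmm_csr(2, 2, row_ptr, col_idx, val, B, C);
    for (int i = 0; i < 2; i++)
        printf("%g %g\n", C[i * 2], C[i * 2 + 1]);  /* expects: 11 14 / 9 12 */
    return 0;
}

Note how the inner loops skip all zero entries of A but still stream B rows in whatever order col_idx dictates; the paper's format and comparator mesh target exactly this irregular-access bottleneck.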

Citations (3)
