Improved Sparse Recovery for Approximate Matrix Multiplication
Abstract: We present a simple randomized algorithm for approximate matrix multiplication (AMM) whose error scales with the output norm $\|AB\|_F$. Given any $n\times n$ matrices $A,B$ and a runtime parameter $r\leq n$, the algorithm produces, in $O(n^2(r+\log n))$ time, a matrix $C$ with total squared error $\mathbb{E}[\|C-AB\|_F^2]\le (1-\frac{r}{n})\|AB\|_F^2$, per-entry variance $\|AB\|_F^2/n^2$, and bias $\mathbb{E}[C]=\frac{r}{n}AB$. Alternatively, the algorithm can compute an unbiased estimate with expected total squared error $\frac{n}{r}\|AB\|_F^2$, recovering the state-of-the-art AMM error obtained by Pagh's TensorSketch algorithm (Pagh, 2013), while running a logarithmic factor faster. The key insight is a new variation of pseudo-random rotation of the input matrices (a Fast Hadamard Transform with asymmetric diagonal scaling), which redistributes the Frobenius norm of the output $AB$ uniformly across its entries.
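The rotation idea can be illustrated with a minimal sketch. The snippet below is not the paper's exact construction (its diagonal scaling is asymmetric in a specific way not detailed in the abstract); it shows the generic ingredient: sandwiching the product $AB$ between randomized Hadamard rotations $H D_L \cdot D_R H$, applied to $A$ and $B$ separately, which preserves $\|AB\|_F$ while spreading its mass across entries. The `fwht` helper and the sign-diagonal choice are illustrative assumptions.

```python
import numpy as np

def fwht(x):
    """Normalized Fast Walsh-Hadamard Transform along axis 0.

    Runs in O(n log n) per column; requires n to be a power of two.
    """
    x = x.astype(float).copy()
    n = x.shape[0]
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x / np.sqrt(n)  # normalized so the transform is orthogonal

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Independent random sign diagonals for the two sides (a symmetric stand-in
# for the paper's asymmetric scaling). Rotating A on the left and B on the
# right gives (H D_L A)(B D_R H) = H D_L (AB) D_R H.
d_left = rng.choice([-1.0, 1.0], size=n)
d_right = rng.choice([-1.0, 1.0], size=n)
A_rot = fwht(d_left[:, None] * A)            # H D_L A
B_rot = fwht(d_right[:, None] * B.T).T       # B D_R H (H is symmetric)

P = A_rot @ B_rot
# Both rotations are orthogonal, so the Frobenius norm of the product
# is preserved exactly; only its distribution over entries changes.
assert np.isclose(np.linalg.norm(P), np.linalg.norm(A @ B))
```

With the norm preserved and, in expectation, each entry of the rotated product carrying roughly $\|AB\|_F^2/n^2$ of the squared mass, a sparse-recovery step can then estimate the largest entries and rotate back.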