Improved Sparse Recovery for Approximate Matrix Multiplication

Published 4 Feb 2026 in cs.DS | (2602.04386v1)

Abstract: We present a simple randomized algorithm for approximate matrix multiplication (AMM) whose error scales with the output norm $\|AB\|_F$. Given any $n\times n$ matrices $A,B$ and a runtime parameter $r\leq n$, the algorithm produces, in $O(n^2(r+\log n))$ time, a matrix $C$ with total squared error $\mathbb{E}[\|C-AB\|_F^2]\le (1-\frac{r}{n})\|AB\|_F^2$, per-entry variance $\|AB\|_F^2/n^2$, and bias $\mathbb{E}[C]=\frac{r}{n}AB$. Alternatively, the algorithm can compute an unbiased estimate with expected total squared error $\frac{n}{r}\|AB\|_F^2$, matching the state-of-the-art AMM error obtained by Pagh's TensorSketch algorithm (Pagh, 2013) while running a logarithmic factor faster. The key insight is a new variation of a pseudo-random rotation of the input matrices (a Fast Hadamard Transform with asymmetric diagonal scaling), which redistributes the Frobenius norm of the output $AB$ uniformly across its entries.
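The norm-redistribution effect the abstract attributes to the pseudo-random rotation can be illustrated in isolation. The sketch below applies a standard randomized Hadamard rotation (random $\pm 1$ sign diagonal followed by a normalized Fast Walsh-Hadamard Transform) to both sides of a matrix whose Frobenius mass sits entirely in one entry; after rotation, every entry has magnitude $\|M\|_F/n$. This is a generic illustration of the ingredient, not the paper's algorithm: the asymmetric diagonal scaling and the sampling step that yield the stated error bounds are not reproduced here, and the helper names (`fwht`, `randomized_rotation`) are ours.

```python
import numpy as np

def fwht(x):
    """Unnormalized fast Walsh-Hadamard transform along axis 0,
    O(n log n) per column; n must be a power of two."""
    x = x.copy().astype(float)
    n = x.shape[0]
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b          # butterfly: sum
            x[i + h:i + 2 * h] = a - b  # butterfly: difference
        h *= 2
    return x

def randomized_rotation(M, rng):
    """Rotate rows and columns of M by H D (H: Hadamard matrix,
    D: random +-1 diagonal), normalized to preserve the Frobenius norm."""
    n = M.shape[0]
    d_left = rng.choice([-1.0, 1.0], size=n)
    d_right = rng.choice([-1.0, 1.0], size=n)
    R = fwht(d_left[:, None] * M) / np.sqrt(n)       # rotate rows
    R = fwht(d_right[:, None] * R.T).T / np.sqrt(n)  # rotate columns
    return R

rng = np.random.default_rng(0)
n = 64
M = np.zeros((n, n))
M[0, 0] = 1.0                      # all Frobenius mass in a single entry
R = randomized_rotation(M, rng)
# Frobenius norm is preserved, while each entry now has magnitude 1/n,
# i.e. the mass is spread uniformly across all n^2 entries.
```

Because the rotation is orthogonal, applying it to the factors of a product rotates the product itself, so sampling entries of the rotated product sees a flat, worst-case-free distribution of the output norm.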
