Fast Matrix Multiplication: Limitations of the Laser Method (1411.5414v1)

Published 20 Nov 2014 in cs.CC and cs.DS

Abstract: Until a few years ago, the fastest known matrix multiplication algorithm, due to Coppersmith and Winograd (1990), ran in time $O(n^{2.3755})$. Recently, a surge of activity by Stothers, Vassilevska-Williams, and Le Gall has led to an improved algorithm running in time $O(n^{2.3729})$. These algorithms are obtained by analyzing higher and higher tensor powers of a certain identity of Coppersmith and Winograd. We show that this exact approach cannot result in an algorithm with running time $O(n^{2.3725})$, and identify a wide class of variants of this approach which cannot result in an algorithm with running time $O(n^{2.3078})$; in particular, this approach cannot prove the conjecture that for every $\epsilon > 0$, two $n\times n$ matrices can be multiplied in time $O(n^{2+\epsilon})$. We describe a new framework extending the original laser method, which is the method underlying the previously mentioned algorithms. Our framework accommodates the algorithms by Coppersmith and Winograd, Stothers, Vassilevska-Williams and Le Gall. We obtain our main result by analyzing this framework. The framework is also the first to explain why taking tensor powers of the Coppersmith-Winograd identity results in faster algorithms.
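As background (standard facts, not stated in the abstract itself), the exponents above come from converting a bilinear identity into a recursive algorithm: if $k\times k$ matrices can be multiplied using $r$ scalar multiplications, then recursing on $k\times k$ blocks gives

$T(n) = r\,T(n/k) + O(n^2)$, hence $T(n) = O(n^{\log_k r})$ and $\omega \le \log_k r$.

For example, Strassen's identity ($k = 2$, $r = 7$) yields $\omega \le \log_2 7 \approx 2.807$. The laser method plays the analogous role for the Coppersmith-Winograd identity, extracting large matrix multiplication tensors from its $N$-th tensor power; analyzing larger and larger $N$ is what produced the successive exponents $2.3755$ and $2.3729$ quoted above.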

Citations (77)
