Sparse Approximation is Provably Hard under Coherent Dictionaries (1702.02885v1)

Published 9 Feb 2017 in cs.CC, cs.IT, and math.IT

Abstract: It is well known that the sparse approximation problem is \textsf{NP}-hard under general dictionaries. Several algorithms have been devised and analyzed in the past decade under various assumptions on the \emph{coherence} $\mu$ of the dictionary, represented by an $M \times N$ matrix from which a subset of $k$ column vectors is selected. All these results assume $\mu = O(k^{-1})$. This article is an attempt to bridge the large gap between the negative result of \textsf{NP}-hardness under general dictionaries and the positive results under this restrictive assumption. In particular, it suggests that the aforementioned assumption might be asymptotically the best one can make to arrive at any efficient algorithmic result under well-known conjectures of complexity theory. In establishing the results, we make use of a new, simple multilayered PCP that is tailored to give a matrix with small coherence, combined with our reduction.
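As background, the following standard definitions (well-known in the sparse approximation literature, not spelled out in the abstract itself) make the statement precise. Given a dictionary $A \in \mathbb{R}^{M \times N}$ whose columns $a_1, \dots, a_N$ are normalized to unit $\ell_2$ norm, the sparse approximation problem asks for the best $k$-term approximation of a target vector $b$:
\[
\min_{x \in \mathbb{R}^N} \lVert Ax - b \rVert_2 \quad \text{subject to} \quad \lVert x \rVert_0 \le k,
\]
and the coherence of the dictionary is the largest inner product between distinct columns:
\[
\mu(A) = \max_{1 \le i < j \le N} \lvert \langle a_i, a_j \rangle \rvert .
\]
The positive results alluded to in the abstract all require $\mu = O(k^{-1})$; for example, a classical sufficient condition for exact recovery of a $k$-sparse signal by orthogonal matching pursuit is $\mu < 1/(2k-1)$.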
