Sparse Approximation is Hard (1108.4664v3)

Published 23 Aug 2011 in cs.CC, cs.IT, and math.IT

Abstract: Given a redundant dictionary $\Phi$, represented by an $M \times N$ matrix ($\Phi \in \mathbb{R}^{M \times N}$), and a target signal $y \in \mathbb{R}^M$, the \emph{sparse approximation problem} asks for an approximate representation of $y$ as a linear combination of at most $k$ atoms. In this paper, a new complexity-theoretic hardness result for the sparse approximation problem is presented by considering a different measure of quality for the solution. It is argued that, from an algorithmic standpoint, the problem is more meaningful if it asks to maximize the norm of the target signal's projection onto the span of the selected atoms, which are represented by column vectors. A multiplicative inapproximability result is then established for this new measure, under a reasonable complexity-theoretic assumption. This result in turn implies additive inapproximability for the problem under the standard measure. Specifically, if $\mathsf{ZPP} \neq \mathsf{NP}$, every polynomial-time algorithm that outputs a $k$-sparse vector $x$ must satisfy $$ \|y - \Phi x\|_2^2 \geq (1-c)\,\|y - \Phi x^*\|_2^2 + c\,\|y\|_2^2 $$ for $\frac{1}{4}(1 - 1/e) > c \geq 0$, where $x^*$ is the optimal $k$-sparse solution. This result quantifies the hardness even in the case $y - \Phi x^* = 0$, revealing more details about the inherent structure of the problem.
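To make the two quality measures concrete, the sketch below evaluates a candidate support $S$ of at most $k$ atoms under both: the projection measure the paper argues for (the squared norm of $y$'s projection onto the span of the selected columns) and the standard residual measure $\|y - \Phi x\|_2^2$. This is a minimal illustration, not code from the paper; the function names and the toy data at the end are assumptions.

```python
import numpy as np

def projection_quality(Phi, y, S):
    """Squared norm of y's orthogonal projection onto the span of the
    selected atoms (the measure the paper proposes to maximize).

    Phi : (M, N) dictionary matrix
    y   : (M,) target signal
    S   : indices of at most k selected columns (atoms)
    """
    A = Phi[:, S]  # sub-dictionary of the chosen atoms
    # Least-squares coefficients; lstsq handles rank-deficient A.
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    proj = A @ coeffs  # orthogonal projection of y onto span(A)
    return float(proj @ proj)

def residual_quality(Phi, y, S):
    """Standard measure: squared l2 norm of the residual y - Phi x,
    with x the best coefficients supported on S."""
    A = Phi[:, S]
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ coeffs
    return float(r @ r)

# Toy example (illustrative data, not from the paper):
rng = np.random.default_rng(0)
Phi = rng.standard_normal((8, 20))
y = rng.standard_normal(8)
S = [3, 7, 11]  # a candidate support of size k = 3
assert np.isclose(projection_quality(Phi, y, S)
                  + residual_quality(Phi, y, S), y @ y)
```

For the optimal coefficients on a fixed support, the projection and the residual decompose $y$ orthogonally, so the two quantities always sum to $\|y\|_2^2$ (the assertion above checks exactly this); this Pythagorean relationship is the sense in which a multiplicative guarantee for the projection measure carries additive information about the residual measure.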
