
For-all Sparse Recovery in Near-Optimal Time (1402.1726v2)

Published 7 Feb 2014 in cs.DS, cs.IT, and math.IT

Abstract: An approximate sparse recovery system in the $\ell_1$ norm consists of parameters $k$, $\epsilon$, $N$, an $m$-by-$N$ measurement matrix $\Phi$, and a recovery algorithm $\mathcal{R}$. Given a vector $\mathbf{x}$, the system approximates $\mathbf{x}$ by $\widehat{\mathbf{x}} = \mathcal{R}(\Phi\mathbf{x})$, which must satisfy $\|\widehat{\mathbf{x}}-\mathbf{x}\|_1 \leq (1+\epsilon)\|\mathbf{x}-\mathbf{x}_k\|_1$. We consider the 'for all' model, in which a single matrix $\Phi$, possibly 'constructed' non-explicitly using the probabilistic method, is used for all signals $\mathbf{x}$. The best existing sublinear algorithm by Porat and Strauss (SODA'12) uses $O(\epsilon^{-3} k\log(N/k))$ measurements and runs in time $O(k^{1-\alpha}N^{\alpha})$ for any constant $\alpha > 0$. In this paper, we improve the number of measurements to $O(\epsilon^{-2} k \log(N/k))$, matching the best existing upper bound (attained by super-linear algorithms), and the runtime to $O(k^{1+\beta}\,\textrm{poly}(\log N,1/\epsilon))$, with a modest restriction that $\epsilon \leq (\log k/\log N)^{\gamma}$, for any constants $\beta,\gamma > 0$. When $k \leq \log^c N$ for some $c>0$, the runtime is reduced to $O(k\,\textrm{poly}(\log N,1/\epsilon))$. With no restrictions on $\epsilon$, we have an approximation recovery system with $m = O((k/\epsilon)\log(N/k)((\log N/\log k)^{\gamma} + 1/\epsilon))$ measurements.
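The ℓ1/ℓ1 guarantee in the abstract can be made concrete with a short sketch. The code below (an illustration of the *definition*, not of the paper's recovery algorithm) computes the best $k$-term approximation $\mathbf{x}_k$ of a signal and checks whether a candidate output $\widehat{\mathbf{x}}$ satisfies $\|\widehat{\mathbf{x}}-\mathbf{x}\|_1 \leq (1+\epsilon)\|\mathbf{x}-\mathbf{x}_k\|_1$; the function names and the toy signal are hypothetical choices for this example.

```python
import numpy as np

def best_k_term(x, k):
    # Best k-term approximation x_k: keep the k largest-magnitude entries,
    # zero out the rest (the "tail").
    xk = np.zeros_like(x)
    head = np.argsort(np.abs(x))[-k:]
    xk[head] = x[head]
    return xk

def satisfies_l1_guarantee(x, x_hat, k, eps):
    # The l1/l1 guarantee from the abstract:
    #   ||x_hat - x||_1 <= (1 + eps) * ||x - x_k||_1
    tail_err = np.linalg.norm(x - best_k_term(x, k), 1)
    return np.linalg.norm(x_hat - x, 1) <= (1 + eps) * tail_err

# Toy example: a nearly 2-sparse signal with a small tail.
x = np.array([10.0, -7.0, 0.1, -0.05, 0.02, 0.0])
x_hat = best_k_term(x, 2)  # an ideal recovery that returns the head exactly
print(satisfies_l1_guarantee(x, x_hat, k=2, eps=0.1))  # -> True
```

Note that an exact head-only recovery always meets the bound, since its error equals the tail norm $\|\mathbf{x}-\mathbf{x}_k\|_1$ itself; the 'for all' model requires one fixed $\Phi$ to achieve this bound (up to the $(1+\epsilon)$ slack) for every input signal simultaneously.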

Citations (36)
