Analysis vs. synthesis sparsity for $\alpha$-shearlets (1702.03559v1)

Published 12 Feb 2017 in math.FA

Abstract: There are two notions of sparsity associated to a frame $\Psi=(\psi_i)_{i\in I}$: Analysis sparsity of $f$ means that the analysis coefficients $(\langle f,\psi_i\rangle)_i$ are sparse, while synthesis sparsity means that $f=\sum_i c_i\psi_i$ with sparse coefficients $(c_i)_i$. Here, sparsity of $c=(c_i)_i$ means $c\in\ell^p(I)$ for a given $p<2$. We show that both notions of sparsity coincide if $\Psi={\rm SH}(\varphi,\psi;\delta)$ is a discrete (cone-adapted) shearlet frame with 'nice' generators $\varphi,\psi$ and fine enough sampling density $\delta>0$. The required 'niceness' is explicitly quantified in terms of Fourier-decay and vanishing moment conditions. Precisely, we show that suitable shearlet systems simultaneously provide Banach frames and atomic decompositions for the shearlet smoothness spaces $\mathscr{S}_s^{p,q}$ introduced by Labate et al. Hence, membership in $\mathscr{S}_s^{p,q}$ is simultaneously equivalent to analysis sparsity and to synthesis sparsity w.r.t. the shearlet frame. As an application, we prove that shearlets yield (almost) optimal approximation rates for cartoon-like functions $f$: If $\epsilon>0$, then $\Vert f-f_N\Vert_{L^2}\lesssim N^{-(1-\epsilon)}$, where $f_N$ is a linear combination of $N$ shearlets. This might appear to be well-known, but the existing proofs only establish this approximation rate w.r.t. the dual $\tilde{\Psi}$ of $\Psi$, not w.r.t. $\Psi$ itself. This is not completely satisfying, since the properties of $\tilde{\Psi}$ (decay, smoothness, etc.) are largely unknown. We also consider $\alpha$-shearlet systems. For these, the shearlet smoothness spaces have to be replaced by $\alpha$-shearlet smoothness spaces. We completely characterize the embeddings between these spaces, allowing us to decide whether sparsity w.r.t. $\alpha_1$-shearlets implies sparsity w.r.t. $\alpha_2$-shearlets.
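
The $N$-term approximant $f_N$ in the abstract is built from the $N$ largest synthesis coefficients of $f$ with respect to the frame itself. The following is a minimal finite-dimensional sketch of that idea, not the paper's shearlet construction: it uses a generic random matrix as a toy frame and the pseudoinverse to obtain synthesis coefficients, purely to illustrate how keeping the $N$ largest coefficients yields $f_N$ and the error $\Vert f-f_N\Vert_{2}$.

```python
import numpy as np

# Toy finite-dimensional analogue (illustrative only, not the paper's
# cone-adapted shearlet frame): the frame Psi is the set of columns of a
# random matrix, and synthesis coefficients come from the pseudoinverse,
# i.e. the canonical dual frame.

rng = np.random.default_rng(0)

d, m = 64, 256                      # ambient dimension, number of frame vectors
Psi = rng.standard_normal((d, m))
Psi /= np.linalg.norm(Psi, axis=0)  # normalize frame vectors (columns)

f = rng.standard_normal(d)          # a generic test signal

# Synthesis coefficients c with f = Psi @ c, via the canonical dual frame.
c = np.linalg.pinv(Psi) @ f

def n_term_approximation(Psi, c, N):
    """Keep the N largest-magnitude coefficients and resynthesize f_N."""
    keep = np.argsort(np.abs(c))[-N:]
    c_N = np.zeros_like(c)
    c_N[keep] = c[keep]
    return Psi @ c_N

for N in (8, 16, 32, 64):
    f_N = n_term_approximation(Psi, c, N)
    print(f"N = {N:3d}   ||f - f_N||_2 = {np.linalg.norm(f - f_N):.4f}")
```

The point of the paper is that, for nice shearlet frames, the decay of this error as $N$ grows can be read off from sparsity of the coefficients with respect to $\Psi$ itself, not only with respect to its dual frame $\tilde{\Psi}$.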
