
Stable recovery guarantees for blind deconvolution under random mask assumption (2503.03765v1)

Published 27 Feb 2025 in cs.IT, math.FA, and math.IT

Abstract: This study addresses the blind deconvolution problem with modulated inputs, focusing on a measurement model where an unknown blurring kernel $\boldsymbol{h}$ is convolved with multiple random modulations $\{\boldsymbol{d}_l\}_{l=1}^{L}$ (coded masks) of a signal $\boldsymbol{x}$, subject to $\ell_2$-bounded noise. We introduce a more generalized framework for coded masks, enhancing the versatility of our approach. Our work begins within a constrained least squares framework, where we establish a robust recovery bound for both $\boldsymbol{h}$ and $\boldsymbol{x}$, demonstrating its near-optimality up to a logarithmic factor. Additionally, we present a new recovery scheme that leverages sparsity constraints on $\boldsymbol{x}$. This approach significantly reduces the sampling complexity to the order of $L=O(\log n)$ when the non-zero elements of $\boldsymbol{x}$ are sufficiently separated. Furthermore, we demonstrate that incorporating sparsity constraints yields a refined error bound compared to the traditional constrained least squares model. The proposed method results in more robust and precise signal recovery, as evidenced by both theoretical analysis and numerical simulations. These findings contribute to advancing the field of blind deconvolution and offer potential improvements in various applications requiring signal reconstruction from modulated inputs.
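
To make the measurement model concrete, the following is a minimal sketch of the forward map described in the abstract: each observation is the convolution of the unknown kernel $\boldsymbol{h}$ with a randomly modulated copy of $\boldsymbol{x}$, corrupted by $\ell_2$-bounded noise. The use of Gaussian masks, circular convolution, and all parameter values here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, L = 128, 8      # signal length and number of coded masks (illustrative values)
sigma = 0.01       # noise level

h = rng.standard_normal(n)                    # unknown blurring kernel
x = np.zeros(n)                               # sparse signal with separated non-zero entries
x[rng.choice(n, size=5, replace=False)] = rng.standard_normal(5)

measurements = []
for _ in range(L):
    d = rng.standard_normal(n)                # random modulation (coded mask), assumed Gaussian
    # y_l = h circularly convolved with (d_l * x), plus additive noise
    y = np.fft.ifft(np.fft.fft(h) * np.fft.fft(d * x)).real
    y += sigma * rng.standard_normal(n)       # additive noise with bounded l2 norm (in expectation)
    measurements.append(y)
```

Recovery would then be posed as a constrained least squares problem over $(\boldsymbol{h}, \boldsymbol{x})$ given the $L$ observations, optionally with a sparsity constraint on $\boldsymbol{x}$, as the abstract describes.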
