
Regularized Gradient Descent: A Nonconvex Recipe for Fast Joint Blind Deconvolution and Demixing (1703.08642v2)

Published 25 Mar 2017 in cs.IT and math.IT

Abstract: We study the question of extracting a sequence of functions $\{\boldsymbol{f}_i, \boldsymbol{g}_i\}_{i=1}^{s}$ from observing only the sum of their convolutions, i.e., from $\boldsymbol{y} = \sum_{i=1}^{s} \boldsymbol{f}_i \ast \boldsymbol{g}_i$. While convex optimization techniques are able to solve this joint blind deconvolution-demixing problem provably and robustly under certain conditions, for medium-size or large-size problems we need computationally faster methods without sacrificing the benefits of mathematical rigor that come with convex methods. In this paper, we present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. Our two-step algorithm converges to the global minimum linearly and is also robust in the presence of additive noise. While the derived performance bounds are suboptimal in terms of the information-theoretic limit, numerical simulations show remarkable performance even if the number of measurements is close to the number of degrees of freedom. We discuss an application of the proposed framework in wireless communications in connection with the Internet-of-Things.
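
To make the measurement model concrete, here is a minimal NumPy sketch (not the authors' code; the function names, dimensions, and noise level are illustrative) that forms the single observation $\boldsymbol{y} = \sum_{i=1}^{s} \boldsymbol{f}_i \ast \boldsymbol{g}_i$ as a sum of circular convolutions computed via the FFT, with optional additive noise as in the robust setting of the abstract.

```python
# Minimal sketch of the forward (measurement) model, assuming circular
# convolutions of length-L real signals. Illustrative only, not the
# authors' implementation.
import numpy as np

def circ_conv(f, g):
    """Circular convolution of two length-L vectors, computed via the DFT."""
    return np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

def forward_model(fs, gs, noise_std=0.0, rng=None):
    """Return y = sum_i f_i * g_i, optionally with additive Gaussian noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    y = sum(circ_conv(f, g) for f, g in zip(fs, gs))
    if noise_std > 0:
        y = y + noise_std * rng.standard_normal(y.shape)
    return y

# Toy instance: s = 3 source pairs, each of length L = 64.
L, s = 64, 3
rng = np.random.default_rng(1)
fs = [rng.standard_normal(L) for _ in range(s)]
gs = [rng.standard_normal(L) for _ in range(s)]
y = forward_model(fs, gs, noise_std=0.01, rng=rng)
print(y.shape)  # (64,) -- one length-L observation mixing all s convolution pairs
```

The recovery problem is to invert this map, i.e., to extract all pairs $(\boldsymbol{f}_i, \boldsymbol{g}_i)$ from $\boldsymbol{y}$ alone; the paper's two-step algorithm does so by an initialization step followed by regularized gradient descent, whereas the sketch above only generates the observations such a solver would take as input.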

Citations (50)
