
A Non-Convex Optimization Technique for Sparse Blind Deconvolution -- Initialization Aspects and Error Reduction Properties (1708.07370v2)

Published 24 Aug 2017 in cs.IT and math.IT

Abstract: Sparse blind deconvolution is the problem of estimating the blur kernel and sparse excitation, both of which are unknown. Considering a linear convolution model, as opposed to the standard circular convolution model, we derive a sufficient condition for stable deconvolution. The columns of the linear convolution matrix form a Riesz basis, with the tightness of the Riesz bounds determined by the autocorrelation of the blur kernel. Employing a Bayesian framework results in a non-convex, non-smooth cost function consisting of an $\ell_2$ data-fidelity term and a sparsity-promoting $\ell_p$-norm ($0 \le p \le 1$) regularizer. Since the $\ell_p$-norm is not differentiable at the origin, we employ an $\epsilon$-regularized $\ell_p$-norm as a surrogate. The data term is also non-convex in both the blur kernel and excitation. An iterative scheme, termed the alternating minimization (Alt. Min.) $\ell_p$-$\ell_2$ projections algorithm (ALPA), is developed for optimization of the $\epsilon$-regularized cost function. Further, we demonstrate that, in every iteration, the $\epsilon$-regularized cost function is non-increasing and, more importantly, bounds the original $\ell_p$-norm-based cost. Due to the non-convexity of the cost, the accuracy of estimation is largely influenced by the initialization. Considering a regularized least-squares estimate as the initialization, we analyze how the initialization errors are concentrated, first in Gaussian noise, and then in bounded noise, the latter case resulting in tighter bounds. Comparisons with state-of-the-art blind deconvolution algorithms show that the deconvolution accuracy is higher for ALPA. In the context of natural speech signals, ALPA results in accurate deconvolution of a voiced speech segment into a sparse excitation and smooth vocal tract response.
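The alternating-minimization scheme the abstract describes can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the `alpa_sketch` and `conv_matrix` names, the unit-impulse kernel start, the ridge parameter, and the single majorize-minimize (IRLS) solve per x-step are assumptions made for the sketch; the paper itself analyzes a regularized least-squares initialization and the $\epsilon$-regularized $\ell_p$ cost.

```python
import numpy as np

def conv_matrix(v, n):
    """Linear-convolution matrix T such that T @ u == np.convolve(v, u) for len(u) == n."""
    T = np.zeros((len(v) + n - 1, n))
    for j in range(n):
        T[j:j + len(v), j] = v
    return T

def alpa_sketch(y, n_h, n_x, lam=0.05, p=0.5, eps=1e-6, iters=30):
    # Initialize with a unit-impulse kernel and a ridge (regularized LS)
    # excitation estimate (illustrative choice; the paper uses a regularized
    # least-squares initialization whose error concentration it analyzes).
    h = np.zeros(n_h)
    h[0] = 1.0
    T = conv_matrix(h, n_x)
    x = np.linalg.solve(T.T @ T + 1e-3 * np.eye(n_x), T.T @ y)
    for _ in range(iters):
        # x-step: one majorize-minimize (IRLS) solve of the eps-regularized
        # l_p problem  ||y - T x||^2 + lam * sum_i (x_i^2 + eps)^(p/2).
        T = conv_matrix(h, n_x)
        w = (x**2 + eps) ** (p / 2 - 1)          # IRLS weights from the current x
        A = T.T @ T + (lam * p / 2) * np.diag(w)
        x = np.linalg.solve(A, T.T @ y)
        # h-step: plain least squares (the data term is quadratic in h for
        # fixed x), then renormalize to remove the scale ambiguity of (h, x).
        X = conv_matrix(x, n_h)
        h = np.linalg.lstsq(X, y, rcond=None)[0]
        s = np.linalg.norm(h)
        if s > 0:
            h, x = h / s, x * s
    return h, x
```

On synthetic data (a smooth kernel convolved with a sparse spike train, mirroring the voiced-speech use case), alternating these two steps drives the data-fidelity residual down while the weighted $\ell_2$ surrogate shrinks the small entries of the excitation toward zero.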
