Unrolled Compressed Blind-Deconvolution (2209.14165v2)

Published 28 Sep 2022 in eess.SP and cs.LG

Abstract: The problem of sparse multichannel blind deconvolution (S-MBD) arises frequently in engineering applications such as radar, sonar, and ultrasound imaging. To reduce its computational and implementation cost, we propose a compression method that enables blind recovery from far fewer measurements than the full received time-domain signal. The proposed compression measures the signal through a filter followed by subsampling, allowing for a significant reduction in implementation cost. We derive theoretical guarantees for the identifiability and recovery of a sparse filter from compressed measurements; our results allow for the design of a wide class of compression filters. We then propose a data-driven unrolled learning framework that jointly learns the compression filter and solves the S-MBD problem. The encoder is a recurrent inference network that maps compressed measurements to estimates of the sparse filters. We demonstrate that our unrolled learning method is more robust to the choice of source shape and achieves better recovery performance than optimization-based methods. Finally, in data-limited applications (few-shot learning), we highlight the superior generalization capability of unrolled learning compared to conventional deep learning.
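The abstract's measurement model (a filter followed by subsampling) and its unrolled decoder can be illustrated with a minimal NumPy sketch. This is not the authors' architecture: the paper trains the compression filter and the recurrent encoder end to end, whereas here the compression filter is random and the "layers" are fixed ISTA iterations. All names (`compress`, `unrolled_ista`, the toy source `s` and filter lengths) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def compress(y, h, stride):
    """Compression operator from the abstract: filter the received
    signal y with h, then subsample by `stride`."""
    return np.convolve(y, h, mode="valid")[::stride]

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def unrolled_ista(z, A, lam=1e-3, n_layers=200):
    """Unrolled sparse decoder: each 'layer' performs one ISTA step
    x <- soft(x + step * A^T (z - A x)).  In the paper these layers
    are learned; here they are fixed iterations for illustration."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(x + step * A.T @ (z - A @ x), lam * step)
    return x

# Hypothetical toy setup: known source pulse s, unknown sparse
# channel filter x_true, random compression filter h.
N = 64                               # sparse filter length
s = rng.standard_normal(8)           # source shape (assumed known here)
h = rng.standard_normal(5)           # compression filter
stride = 3

x_true = np.zeros(N)
x_true[[7, 30, 51]] = [1.0, -0.8, 0.6]

# Linearize the pipeline x -> compress(conv(s, x)) into a matrix A
# by passing unit vectors through it.
pipeline = lambda x: compress(np.convolve(s, x), h, stride)
A = np.column_stack([pipeline(np.eye(N)[:, i]) for i in range(N)])

z = A @ x_true                       # compressed measurements (23 samples)
x_hat = unrolled_ista(z, A)          # sparse estimate from 23 << 64 samples
```

The point of the sketch is the dimension count: 64 unknown filter taps are estimated from 23 compressed samples, which is the kind of measurement reduction the compression scheme targets.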
