
On A Class of Greedy Sparse Recovery Algorithms -- A High Dimensional Approach (2402.15944v1)

Published 25 Feb 2024 in cs.IT, eess.SP, and math.IT

Abstract: Sparse signal recovery deals with finding the sparsest solution of an under-determined linear system $x = Qs$. In this paper, we propose a novel greedy approach to addressing the challenges of such a problem. The approach is based on a characterization of solutions to the system, which allows us to perform sparse recovery directly in the $s$-space with a given measure. With an $l_2$-based measure, two OMP-type algorithms are proposed that significantly outperform the classical OMP algorithm in recovery accuracy while maintaining comparable computational complexity. An $l_1$-based algorithm, denoted $\text{Alg}_{GBP}$ (greedy basis pursuit), is also derived and significantly outperforms the classical BP algorithm. A CoSaMP-type algorithm is further proposed to enhance the performance of the two OMP-type algorithms. The superior performance of the proposed algorithms is demonstrated through extensive numerical simulations on synthetic data as well as video signals, highlighting their potential for applications in compressed sensing and signal processing.
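
The abstract measures its proposed methods against the classical OMP baseline for the system $x = Qs$. As a point of reference only, below is a minimal sketch of that classical baseline; it is not the paper's $s$-space variant, and the function and variable names (`omp`, `Q`, `x`, `k`) are illustrative assumptions.

```python
import numpy as np

def omp(Q, x, k):
    """Recover an approximately k-sparse s from x = Q s via classical OMP."""
    n = Q.shape[1]
    residual = x.copy()
    support = []
    s_hat = np.zeros(n)
    for _ in range(k):
        # Select the column of Q most correlated with the current residual.
        correlations = np.abs(Q.T @ residual)
        correlations[support] = 0.0  # do not reselect already-chosen atoms
        support.append(int(np.argmax(correlations)))
        # Least-squares refit of the coefficients on the current support.
        coeffs, *_ = np.linalg.lstsq(Q[:, support], x, rcond=None)
        residual = x - Q[:, support] @ coeffs
    s_hat[support] = coeffs
    return s_hat

# Toy example: 3-sparse ground truth, random Gaussian dictionary.
rng = np.random.default_rng(0)
Q = rng.standard_normal((30, 100))
s_true = np.zeros(100)
s_true[[5, 40, 77]] = [1.5, -2.0, 0.7]
x = Q @ s_true
print(np.allclose(omp(Q, x, 3), s_true, atol=1e-6))
```

Per the abstract, the proposed OMP-type algorithms differ from this baseline by exploiting a characterization of the solution set to work directly in the $s$-space with a chosen measure, rather than greedily correlating columns of $Q$ against a measurement-space residual.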

