Bisparse Blind Deconvolution through Hierarchical Sparse Recovery (2210.11993v3)

Published 20 Oct 2022 in cs.IT, cs.NA, math.IT, and math.NA

Abstract: The hierarchical sparsity framework, and in particular the HiHTP algorithm, has recently been successfully applied to many relevant communication engineering problems, particularly when the signal space is hierarchically structured. In this paper, the applicability of the HiHTP algorithm for solving the bi-sparse blind deconvolution problem is studied. The bi-sparse blind deconvolution setting here consists of recovering $h$ and $b$ from the knowledge of $h*(Qb)$, where $Q$ is some linear operator and both $h$ and $b$ are assumed to be sparse. The approach rests upon lifting the problem to a linear one and then applying HiHTP through the \emph{hierarchical sparsity framework}. Then, for a Gaussian draw of the random matrix $Q$, it is theoretically shown that an $s$-sparse $h \in \mathbb{K}^\mu$ and a $\sigma$-sparse $b \in \mathbb{K}^n$ can, with high probability, be recovered when $\mu \succcurlyeq s\log(s)^2\log(\mu)\log(\mu n) + s\sigma \log(n)$.
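The hierarchical sparsity exploited by HiHTP can be illustrated on the lifted matrix $hb^T$, which is $(s,\sigma)$-hierarchically sparse: at most $s$ nonzero rows, each with at most $\sigma$ nonzero entries. A minimal sketch of the corresponding hierarchical thresholding projection (the core step of HiHTP; function name and interface are illustrative, not from the paper):

```python
import numpy as np

def hierarchical_threshold(X, s, sigma):
    """Project X (mu x n) onto (s, sigma)-hierarchically sparse matrices:
    keep the sigma largest-magnitude entries within each row, then keep
    the s rows with the largest remaining l2-norm. Illustrative sketch."""
    mu, n = X.shape
    Y = np.zeros_like(X)
    # Level 2: within each row, keep the sigma largest-magnitude entries.
    for i in range(mu):
        idx = np.argsort(np.abs(X[i]))[-sigma:]
        Y[i, idx] = X[i, idx]
    # Level 1: keep the s rows with the largest energy after thresholding.
    norms = np.linalg.norm(Y, axis=1)
    keep = np.argsort(norms)[-s:]
    Z = np.zeros_like(X)
    Z[keep] = Y[keep]
    return Z
```

Inside HiHTP this projection replaces the plain hard-thresholding step of HTP, which is what makes the sample complexity scale with $s\sigma$ rather than with the ambient dimension $\mu n$.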
