
Deep Regularized Compound Gaussian Network for Solving Linear Inverse Problems (2311.17248v3)

Published 28 Nov 2023 in eess.SP, cs.AI, cs.NA, and math.NA

Abstract: Incorporating prior information into inverse problems, e.g., via maximum-a-posteriori estimation, is an important technique for facilitating robust inverse problem solutions. In this paper, we devise two novel approaches for linear inverse problems that permit problem-specific statistical prior selections within the compound Gaussian (CG) class of distributions. The CG class subsumes many priors commonly used in signal and image reconstruction, including those of sparsity-based approaches. The first method, an iterative algorithm called generalized compound Gaussian least squares (G-CG-LS), minimizes a regularized least squares objective function in which the regularization enforces a CG prior. G-CG-LS is then unrolled, or unfolded, to furnish our second method: a novel deep regularized (DR) neural network, called DR-CG-Net, that learns the prior information. A detailed computational theory on the convergence properties of G-CG-LS and thorough numerical experiments for DR-CG-Net are provided. Owing to the comprehensive nature of the CG prior, these experiments show that DR-CG-Net outperforms competitive prior-art methods in tomographic imaging and compressive sensing, especially in challenging low-training scenarios.
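The template underlying G-CG-LS is a regularized least-squares minimization, min_x ½‖Ax − y‖² + λR(x), where the regularizer R enforces a prior from the CG class. The following is an illustrative sketch only, not the paper's G-CG-LS updates: a generic proximal-gradient loop, which with a soft-thresholding prox reduces to the classic ISTA iteration for a sparsity prior (one member of the CG class). All function and variable names here (`prox_grad_ls`, `soft_threshold`, etc.) are hypothetical, chosen for this example.

```python
import numpy as np

def prox_grad_ls(A, y, prox, lam=0.1, step=None, iters=100):
    """Proximal-gradient sketch for min_x 0.5*||Ax - y||^2 + lam*R(x).

    `prox` is the proximal operator of R. With soft-thresholding it
    recovers ISTA for a sparsity prior; the paper's G-CG-LS instead
    handles general compound Gaussian priors.
    """
    if step is None:
        # 1/L with L = ||A||_2^2, the Lipschitz constant of the gradient
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)          # gradient of the data-fidelity term
        x = prox(x - step * grad, lam * step)
    return x

def soft_threshold(v, t):
    """Prox of t*||.||_1 (sparsity prior)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Toy compressive-sensing demo: recover a sparse x from m < n measurements.
rng = np.random.default_rng(0)
m, n = 64, 128
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0
y = A @ x_true

x_hat = prox_grad_ls(A, y, soft_threshold, lam=0.05, iters=500)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Algorithm unrolling, as used to obtain DR-CG-Net from G-CG-LS, replaces a fixed number of such iterations with network layers whose parameters (here, the prior's role played by `prox`) are learned from data.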
