Provable Preconditioned Plug-and-Play Approach for Compressed Sensing MRI Reconstruction (2405.03854v2)

Published 6 May 2024 in eess.IV and math.OC

Abstract: Model-based methods play a key role in the reconstruction of compressed sensing (CS) MRI. Finding an effective prior to describe the statistical distribution of the image family of interest is crucial for model-based methods. Plug-and-play (PnP) is a general framework that uses denoising algorithms as the prior or regularizer. Recent work showed that PnP methods with denoisers based on pretrained convolutional neural networks outperform other classical regularizers in CS MRI reconstruction. However, the numerical solvers for PnP can be slow for CS MRI reconstruction. This paper proposes a preconditioned PnP (P2nP) method to accelerate the convergence speed. Moreover, we provide proofs of the fixed-point convergence of the P2nP iterates. Numerical experiments on CS MRI reconstruction with non-Cartesian sampling trajectories illustrate the effectiveness and efficiency of the P2nP approach.
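
For illustration only, the following is a minimal numerical sketch of the general preconditioned plug-and-play pattern the abstract describes: a gradient step on the data-fidelity term, scaled by a preconditioner, followed by a denoising step that acts as the prior. Everything here (the random forward operator, the diagonal-loading preconditioner, the soft-thresholding "denoiser", and all parameter values) is an assumption made for a toy demonstration; it is not the paper's P2nP algorithm, its preconditioner design, or the pretrained CNN denoiser and non-Cartesian MRI sampling model used in the paper's experiments.

import numpy as np

def soft_threshold(z, tau):
    # Toy stand-in for the PnP prior; a pretrained CNN denoiser would be
    # plugged in here in an actual PnP/P2nP pipeline.
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def preconditioned_pnp(A, y, precond, denoise, step=1.0, iters=100):
    # Generic preconditioned PnP iteration (assumed form, not the paper's):
    #   x_{k+1} = D( x_k - step * P @ A^T (A x_k - y) )
    # A       : forward operator (m x n array)
    # y       : measurements (m,)
    # precond : preconditioner P, e.g. an approximation of (A^T A)^{-1}
    # denoise : callable acting as the plug-and-play prior
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)              # gradient of 0.5*||Ax - y||^2
        x = denoise(x - step * precond @ grad)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 64, 32
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, 5, replace=False)] = 1.0
    y = A @ x_true
    # Simple diagonal-loading preconditioner; the paper constructs a more
    # principled one with fixed-point convergence guarantees.
    P = np.linalg.inv(A.T @ A + 0.1 * np.eye(n))
    x_hat = preconditioned_pnp(A, y, P, lambda z: soft_threshold(z, 0.01))
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

In this sketch the preconditioner rescales the gradient so that poorly conditioned directions of A^T A are stepped through more quickly, which is the mechanism by which the paper's P2nP method accelerates convergence relative to unpreconditioned PnP solvers.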
