Generative Plug and Play: Posterior Sampling for Inverse Problems (2306.07233v1)
Abstract: Over the past decade, Plug-and-Play (PnP) has become a popular method for reconstructing images using a modular framework consisting of a forward and prior model. The great strength of PnP is that an image denoiser can be used as a prior model while the forward model can be implemented using more traditional physics-based approaches. However, a limitation of PnP is that it reconstructs only a single deterministic image. In this paper, we introduce Generative Plug-and-Play (GPnP), a generalization of PnP to sample from the posterior distribution. As with PnP, GPnP has a modular framework using a physics-based forward model and an image denoising prior model. However, in GPnP these models are extended to become proximal generators, which sample from associated distributions. GPnP applies these proximal generators in alternation to produce samples from the posterior. We present experimental simulations using the well-known BM3D denoiser. Our results demonstrate that the GPnP method is robust, easy to implement, and produces intuitively reasonable samples from the posterior for sparse interpolation and tomographic reconstruction. Code to accompany this paper is available at https://github.com/gbuzzard/generative-pnp-allerton .
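To make the alternation concrete, below is a minimal Python sketch of a GPnP-style sampler for a linear forward model y = A x + w with Gaussian noise, using BM3D as the prior denoiser. The helper names (`forward_proximal_generator`, `prior_proximal_generator`, `gpnp_sample`), the dense linear algebra, the denoise-then-reinject-noise surrogate for the prior proximal generator, and the decreasing noise schedule are illustrative assumptions, not the paper's exact construction; the authors' implementation is in the linked repository.

```python
import numpy as np
import bm3d  # pip install bm3d; the same denoiser family used as the prior model in the paper


def forward_proximal_generator(v, y, A, sigma_w, sigma, rng):
    """Exact proximal generator for a linear Gaussian forward model.

    Samples x with density proportional to
    exp(-||y - A x||^2 / (2 sigma_w^2) - ||x - v||^2 / (2 sigma^2)),
    a Gaussian with precision H = A^T A / sigma_w^2 + I / sigma^2.
    Dense solves are used for clarity; large problems need iterative solvers.
    """
    n = v.size
    H = A.T @ A / sigma_w**2 + np.eye(n) / sigma**2
    mu = np.linalg.solve(H, A.T @ y / sigma_w**2 + v / sigma**2)
    L = np.linalg.cholesky(np.linalg.inv(H))  # factor of the covariance H^{-1}
    return mu + L @ rng.standard_normal(n)


def prior_proximal_generator(v, sigma, image_shape, rng):
    """Surrogate prior proximal generator: BM3D denoise, then re-inject noise.

    This denoise-plus-noise step is a heuristic stand-in for the paper's
    prior proximal generator, not its exact form.
    """
    x_hat = bm3d.bm3d(v.reshape(image_shape), sigma_psd=sigma).ravel()
    return x_hat + sigma * rng.standard_normal(v.size)


def gpnp_sample(y, A, image_shape, sigma_w, sigma_schedule, seed=0):
    """Alternate the two proximal generators while annealing sigma downward."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])  # arbitrary initialization
    for sigma in sigma_schedule:         # e.g., np.geomspace(1.0, 0.05, 30)
        x = forward_proximal_generator(x, y, A, sigma_w, sigma, rng)
        x = prior_proximal_generator(x, sigma, image_shape, rng)
    return x.reshape(image_shape)
```

Calling `gpnp_sample` with different seeds yields independent approximate posterior draws, e.g., for the sparse interpolation and tomographic reconstruction problems mentioned in the abstract.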
- S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, “Plug-and-play priors for model based reconstruction,” in IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2013, pp. 945–948.
- S. Sreehari, S. V. Venkatakrishnan, B. Wohlberg, G. T. Buzzard, L. F. Drummy, J. P. Simmons, and C. A. Bouman, “Plug-and-play priors for bright field electron tomography and sparse interpolation,” IEEE Transactions on Computational Imaging, vol. 2, no. 4, pp. 408–423, Dec 2016.
- K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Transactions on Image Processing, vol. 16, no. 8, pp. 2080–2095, 2007.
- C. J. Pellizzari, T. J. Bate, K. P. Donnelly, G. T. Buzzard, C. A. Bouman, and M. F. Spencer, “Coherent plug-and-play artifact removal: Physics-based deep learning for imaging through aberrations,” Optics and Lasers in Engineering, vol. 164, 2023.
- S. Majee, T. Balke, C. A. J. Kemp, G. T. Buzzard, and C. A. Bouman, “Multi-slice fusion for sparse-view and limited-angle 4D CT reconstruction,” IEEE Transactions on Computational Imaging, vol. 7, 2021.
- G. T. Buzzard, S. H. Chan, S. Sreehari, and C. A. Bouman, “Plug-and-play unplugged: Optimization-free reconstruction using consensus equilibrium,” SIAM Journal on Imaging Sciences, vol. 11, no. 3, pp. 2001–2020, 2018.
- I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems (NeurIPS), vol. 27, 2014.
- D. P. Kingma and M. Welling, “Auto-encoding variational Bayes,” in International Conference on Learning Representations (ICLR), 2014.
- M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv preprint, 2014. [Online]. Available: http://arxiv.org/abs/1411.1784
- M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein generative adversarial networks,” in International Conference on Machine Learning (ICML), 2017.
- Y. Song and S. Ermon, “Generative modeling by estimating gradients of the data distribution,” in Advances in Neural Information Processing Systems (NeurIPS), 2019, pp. 11895–11907.
- P. Vincent, “A connection between score matching and denoising autoencoders,” Neural Computation, vol. 23, no. 7, pp. 1661–1674, 2011.
- Y. Song and S. Ermon, “Improved techniques for training score-based generative models,” in Advances in Neural Information Processing Systems (NeurIPS), 2020.
- U. Grenander and M. Miller, “Representations of knowledge in complex systems,” Journal of the Royal Statistical Society: Series B, vol. 56, no. 4, pp. 549–581, 1994.
- Y. Song, J. Sohl-Dickstein, D. P. Kingma, A. Kumar, S. Ermon, and B. Poole, “Score-based generative modeling through stochastic differential equations,” in International Conference on Learning Representations (ICLR), 2021.
- B. T. Feng, J. Smith, M. Rubinstein, H. Chang, K. L. Bouman, and W. T. Freeman, “Score-based diffusion models as principled priors for inverse imaging,” 2023.
- A. Jalal, M. Arvinte, G. Daras, E. Price, A. G. Dimakis, and J. I. Tamir, “Robust compressed sensing MRI with deep generative priors,” in Advances in Neural Information Processing Systems (NeurIPS), 2021.
- Y. Song, L. Shen, L. Xing, and S. Ermon, “Solving inverse problems in medical imaging with score-based generative models,” in International Conference on Learning Representations (ICLR), 2022.
- H. Chung, J. Kim, M. T. Mccann, M. L. Klasky, and J. C. Ye, “Diffusion posterior sampling for general noisy inverse problems,” in International Conference on Learning Representations (ICLR), 2023.
- SVMBIR Development Team, “Super-Voxel Model Based Iterative Reconstruction (SVMBIR),” software library available from https://github.com/cabouman/svmbir, 2020.
- S. Geman and D. Geman, “Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-6, no. 6, pp. 721–741, 1984.