Regularization by denoising: Bayesian model and Langevin-within-split Gibbs sampling (2402.12292v1)

Published 19 Feb 2024 in stat.ML, cs.CV, and cs.LG

Abstract: This paper introduces a Bayesian framework for image inversion by deriving a probabilistic counterpart to the regularization-by-denoising (RED) paradigm. It additionally implements a Monte Carlo algorithm specifically tailored for sampling from the resulting posterior distribution, based on an asymptotically exact data augmentation (AXDA). The proposed algorithm is an approximate instance of split Gibbs sampling (SGS) which embeds one Langevin Monte Carlo step. The proposed method is applied to common imaging tasks such as deblurring, inpainting and super-resolution, demonstrating its efficacy through extensive numerical experiments. These contributions advance Bayesian inference in imaging by leveraging data-driven regularization strategies within a probabilistic framework.

Summary

  • The paper proposes a novel Bayesian framework that integrates RED via a Langevin-within-split Gibbs sampler for solving imaging inverse problems.
  • It employs an asymptotically exact data augmentation scheme to decouple fidelity and regularization, ensuring robust convergence in image restoration tasks.
  • Empirical results demonstrate competitive performance in deblurring, inpainting, and super-resolution while providing valuable uncertainty quantification.

Exploring the Intersection of Bayesian Inference and RED in Imaging Inverse Problems

Robust and versatile methods for image restoration and reconstruction remain a central concern in computational imaging. The paper "Regularization by denoising: Bayesian model and Langevin-within-split Gibbs sampling" brings together Bayesian inference and the regularization-by-denoising (RED) strategy, yielding a data-driven probabilistic approach to a range of imaging inverse problems.

The Evolution of RED within Bayesian Frameworks

The concept of RED has garnered significant attention because it lets advanced denoising engines serve as regularizers for inverse problems. Traditionally applied in a deterministic optimization context, RED encodes rich image priors implicitly, through the choice of denoiser rather than a hand-crafted regularizer, which has made it a popular choice for many imaging tasks. The move to a probabilistic setting consists in building a Bayesian model that captures the essence of RED by defining a prior distribution from the RED potential. This integration brings data-driven regularization into a probabilistic framework, benefiting both the theoretical and the practical sides of image restoration.
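Concretely, and as a sketch following the original RED formulation (the paper's exact parameterization may differ; the weight α and noise level σ below are illustrative symbols), the RED potential built from a denoiser and the prior it induces can be written as:

```latex
% RED potential built from a denoiser D_sigma (Romano et al., 2017)
\rho(x) \;=\; \tfrac{1}{2}\, x^{\top}\bigl(x - D_{\sigma}(x)\bigr)

% Probabilistic counterpart: a prior with regularization weight \alpha > 0
\pi(x) \;\propto\; \exp\bigl(-\alpha\,\rho(x)\bigr)

% Under RED's conditions (local homogeneity, Jacobian symmetry),
% the gradient of the potential reduces to the denoising residual:
\nabla \rho(x) \;=\; x - D_{\sigma}(x)
```

The last identity is what makes a Langevin step practical: the score of the prior is available from a single denoiser evaluation, without differentiating through the network.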

A Tailored Monte Carlo Algorithm: Langevin-within-Split Gibbs Sampling

Given the particular form of the RED posterior distribution, standard sampling methods struggle to explore it efficiently. The proposed Langevin-within-split Gibbs (LwSGS) algorithm, rooted in an asymptotically exact data augmentation (AXDA) scheme, addresses this difficulty. It decouples the data-fidelity and regularization terms through a split Gibbs sampler and handles the regularization conditional with a single Langevin Monte Carlo step, allowing the sampler to traverse the posterior distribution effectively.

Theoretical Insights and Empirical Validation

The theoretical analysis accompanying the method establishes convergence guarantees, ensuring the generated samples closely approximate the target distribution. Extensive numerical experiments on common imaging tasks (deblurring, inpainting, and super-resolution) show competitive performance against both variational and Monte Carlo benchmarks. The use of a deep network-based denoiser, specifically DRUNet, without any further fine-tuning highlights the method's adaptability and robustness on real-world data.

Towards Comprehensive Solution Characterization

One of the distinct advantages of the proposed Bayesian RED framework is its ability to provide a holistic view of the solution space, including uncertainty quantification. This added layer of insight is invaluable for various applications where understanding the confidence in the reconstructed images can guide subsequent decision-making processes. The method's ability to offer detailed uncertainty maps alongside high-quality reconstructions is a notable step forward in the computational imaging domain.

Concluding Remarks

The integration of RED into a Bayesian framework through the novel LwSGS algorithm opens up new pathways for leveraging advanced denoising engines in solving inverse problems. This work not only enhances the theoretical foundations bridging deterministic and probabilistic approaches but also sets a new benchmark in data-driven image restoration and reconstruction methodologies. As we move forward, the potential for further exploration and refinement of this approach promises exciting developments in the field of computational imaging.