Diffusion Posterior Proximal Sampling for Image Restoration (2402.16907v2)

Published 25 Feb 2024 in eess.IV, cs.CV, and cs.LG

Abstract: Diffusion models have demonstrated remarkable efficacy in generating high-quality samples. Existing diffusion-based image restoration algorithms exploit pre-trained diffusion models to leverage data priors, yet they still preserve elements inherited from the unconditional generation paradigm. These strategies initiate the denoising process with pure white noise and incorporate random noise at each generative step, leading to over-smoothed results. In this paper, we present a refined paradigm for diffusion-based image restoration. Specifically, we opt for a sample consistent with the measurement identity at each generative step, exploiting the sampling selection as an avenue for output stability and enhancement. The number of candidate samples used for selection is adaptively determined based on the signal-to-noise ratio of the timestep. Additionally, we start the restoration process with an initialization combined with the measurement signal, providing supplementary information to better align the generative process. Extensive experimental results and analyses validate that our proposed method significantly enhances image restoration performance while consuming negligible additional computational resources.
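The core idea in the abstract can be sketched in a few lines: at each generative step, draw several stochastic candidates for the next state and keep the one most consistent with the measurement, with the candidate count growing as the timestep's signal-to-noise ratio improves. The snippet below is a minimal illustration of that selection rule, not the paper's implementation; the linear operator `A`, the helper names, and the particular SNR-to-count schedule are all assumptions made for the demo.

```python
import numpy as np

def select_candidate(y, A, candidates):
    # Among several stochastic proposals, keep the one whose measurement
    # A @ x best matches the observation y (smallest residual norm).
    residuals = [np.linalg.norm(y - A @ x) for x in candidates]
    return candidates[int(np.argmin(residuals))]

def num_candidates(snr, n_max=8):
    # Hypothetical schedule: few candidates when the state is noise-dominated
    # (low SNR), up to n_max at high-SNR timesteps.
    return max(1, min(n_max, int(round(n_max * snr / (1.0 + snr)))))

# Toy demo with a random linear measurement operator.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
x_true = rng.standard_normal(6)
y = A @ x_true
candidates = [x_true + 0.1 * rng.standard_normal(6) for _ in range(4)]
best = select_candidate(y, A, candidates)
```

Under this toy schedule, early (low-SNR) steps fall back to a single candidate, so the overhead concentrates in the later steps where selection is most informative, which is consistent with the abstract's claim of negligible additional cost.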

Authors (9)
  1. Hongjie Wu
  2. Linchao He
  3. Mingqin Zhang
  4. Dongdong Chen
  5. Kunming Luo
  6. Mengting Luo
  7. Ji-Zhe Zhou
  8. Hu Chen
  9. Jiancheng Lv