
Evaluating the Posterior Sampling Ability of Plug&Play Diffusion Methods in Sparse-View CT (2410.21301v2)

Published 21 Oct 2024 in eess.IV, cs.AI, cs.CV, and cs.LG

Abstract: Plug&Play (PnP) diffusion models are state-of-the-art methods in computed tomography (CT) reconstruction. Such methods usually consider applications where the sinogram contains a sufficient amount of information for the posterior distribution to be concentrated around a single mode, and consequently are evaluated using image-to-image metrics such as PSNR/SSIM. Instead, we are interested in reconstructing compressible flow images from sinograms with a small number of projections, which results in a posterior distribution that is no longer concentrated, or is even multimodal. Thus, in this paper, we aim to evaluate the approximate posterior of PnP diffusion models and introduce two posterior evaluation properties. We quantitatively evaluate three PnP diffusion methods on three different datasets for several numbers of projections. Surprisingly, we find that, for each method, the approximate posterior deviates from the true posterior as the number of projections decreases.
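The abstract contrasts image-to-image metrics such as PSNR with evaluation of the full posterior. As a minimal illustrative sketch (not taken from the paper), PSNR measures the distance between a reconstruction and a single reference image, which is only meaningful when the posterior concentrates around one mode:

```python
import numpy as np

def psnr(reference, reconstruction, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference - reconstruction) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
x = rng.random((64, 64))                          # toy "ground-truth" image in [0, 1]
x_hat = x + 0.05 * rng.standard_normal(x.shape)   # toy reconstruction with Gaussian error
print(f"PSNR: {psnr(x, x_hat):.1f} dB")
```

When the posterior is multimodal, a sample can be a perfectly valid reconstruction yet score a low PSNR against the reference simply because it lies near a different mode, which is why the paper argues for posterior-level evaluation instead.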

