
Mitigating Data Consistency Induced Discrepancy in Cascaded Diffusion Models for Sparse-view CT Reconstruction (2403.09355v1)

Published 14 Mar 2024 in eess.IV and cs.CV

Abstract: Sparse-view Computed Tomography (CT) image reconstruction is a promising approach to reducing radiation exposure, but it inevitably leads to image degradation. Diffusion model-based approaches offer a potential solution to this problem, although they are computationally expensive and suffer from the training-sampling discrepancy. This study introduces a novel Cascaded Diffusion with Discrepancy Mitigation (CDDM) framework, comprising low-quality image generation in latent space and high-quality image generation in pixel space, where data consistency and discrepancy mitigation are combined in a one-step reconstruction process. The cascaded framework reduces computational cost by moving some inference steps from pixel space to latent space. The discrepancy mitigation technique addresses the training-sampling gap induced by data consistency, keeping the data distribution close to the original manifold. A specialized Alternating Direction Method of Multipliers (ADMM) is employed to process image gradients in separate directions, offering a more targeted approach to regularization. Experimental results across two datasets demonstrate CDDM's superior performance in high-quality image generation with clearer boundaries compared to existing methods, highlighting the framework's computational efficiency.
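The direction-separated ADMM regularization mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; it is a simplified stand-in showing the core idea: anisotropic total-variation denoising in which the horizontal and vertical image gradients are split into separate ADMM auxiliary variables, so each direction receives its own soft-thresholding update. The function name, parameters, and periodic-boundary assumption are choices made for this illustration.

```python
import numpy as np

def admm_tv_denoise(y, lam=0.2, rho=1.0, n_iter=50):
    """Anisotropic TV denoising via ADMM with separate splitting variables
    for the horizontal and vertical image gradients (illustrative sketch).

    Solves: min_x 0.5*||x - y||^2 + lam*(||D_h x||_1 + ||D_v x||_1)
    with constraints z_h = D_h x, z_v = D_v x, assuming periodic boundaries
    so the x-update is diagonalized by the 2-D FFT.
    """
    H, W = y.shape

    # Circular finite-difference kernels: (x * k)[j] = x[j+1] - x[j]
    dh = np.zeros((H, W)); dh[0, 0] = -1.0; dh[0, -1] = 1.0   # horizontal
    dv = np.zeros((H, W)); dv[0, 0] = -1.0; dv[-1, 0] = 1.0   # vertical
    Dh, Dv = np.fft.fft2(dh), np.fft.fft2(dv)
    denom = 1.0 + rho * (np.abs(Dh) ** 2 + np.abs(Dv) ** 2)

    def grad_h(x): return np.roll(x, -1, axis=1) - x
    def grad_v(x): return np.roll(x, -1, axis=0) - x
    def adj_h(g):  return np.roll(g, 1, axis=1) - g   # D_h^T
    def adj_v(g):  return np.roll(g, 1, axis=0) - g   # D_v^T

    def shrink(v, t):  # soft-thresholding: prox of t * ||.||_1
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    x = y.copy()
    zh = np.zeros_like(y); zv = np.zeros_like(y)   # directional auxiliaries
    uh = np.zeros_like(y); uv = np.zeros_like(y)   # scaled dual variables
    for _ in range(n_iter):
        # x-update: (I + rho*Dh^T Dh + rho*Dv^T Dv) x = rhs, solved via FFT
        rhs = y + rho * (adj_h(zh - uh) + adj_v(zv - uv))
        x = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
        # z-updates: each gradient direction gets its own shrinkage step
        zh = shrink(grad_h(x) + uh, lam / rho)
        zv = shrink(grad_v(x) + uv, lam / rho)
        # dual updates
        uh += grad_h(x) - zh
        uv += grad_v(x) - zv
    return x
```

Splitting the two gradient directions into independent auxiliary variables is what lets each direction be regularized (and, in the paper's more targeted variant, weighted) separately, at the cost of one extra dual variable per direction.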
