Ambient Diffusion Posterior Sampling: Solving Inverse Problems with Diffusion Models trained on Corrupted Data (2403.08728v1)

Published 13 Mar 2024 in cs.CV, cs.AI, and cs.LG

Abstract: We provide a framework for solving inverse problems with diffusion models learned from linearly corrupted data. Our method, Ambient Diffusion Posterior Sampling (A-DPS), leverages a generative model pre-trained on one type of corruption (e.g. image inpainting) to perform posterior sampling conditioned on measurements from a potentially different forward process (e.g. image blurring). We test the efficacy of our approach on standard natural image datasets (CelebA, FFHQ, and AFHQ) and we show that A-DPS can sometimes outperform models trained on clean data for several image restoration tasks in both speed and performance. We further extend the Ambient Diffusion framework to train MRI models with access only to Fourier subsampled multi-coil MRI measurements at various acceleration factors (R=2, 4, 6, 8). We again observe that models trained on highly subsampled data are better priors for solving inverse problems in the high acceleration regime than models trained on fully sampled data. We open-source our code and the trained Ambient Diffusion MRI models: https://github.com/utcsilab/ambient-diffusion-mri .


Summary

  • The paper introduces a novel framework that trains diffusion models on corrupted data to solve inverse problems.
  • It shows that models trained on corrupted images can outperform clean-data models on compressed sensing and super-resolution under heavy corruption.
  • It extends the approach to multi-coil MRI, outperforming traditional methods at high acceleration factors.

Exploring the Bounds of Diffusion Models in Solving Inverse Problems with Corrupted Training Data

Introduction to Ambient Diffusion Posterior Sampling

Generative modeling has advanced rapidly in recent years, notably through the adoption of diffusion models for solving complex inverse problems. These models are traditionally trained on clean, fully observed datasets, but in many settings acquiring uncorrupted or fully observed data is impractical, creating a need for models that can both learn from and solve inverse problems with corrupted data. Addressing this gap, Ambient Diffusion Posterior Sampling (A-DPS) proposes a framework that trains diffusion models on corrupted data and then uses them to perform posterior sampling conditioned on linear measurements from a potentially different forward operator.
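The posterior-sampling idea A-DPS builds on can be sketched in a few lines. The snippet below is a simplified illustration of a DPS-style update, not the paper's exact algorithm: the names (`dps_step`, `score_fn`) and the fixed `step` and `guidance` weights are assumptions made for illustration, whereas real samplers schedule these with the noise level.

```python
import numpy as np

def dps_step(x_t, t, score_fn, A, y, step=0.1, guidance=1.0):
    """One simplified posterior-sampling update: follow the learned score
    of the prior, then nudge the iterate toward consistency with the
    measurements y = A x. Illustrative sketch only."""
    s = score_fn(x_t, t)            # prior term: score estimate at noise level t
    grad = A.T @ (A @ x_t - y)      # data term: gradient of ||y - A x_t||^2 / 2
    return x_t + step * s - guidance * grad
```

The key property, which A-DPS inherits, is that the prior term and the measurement-consistency term are decoupled, so the score network can come from a model trained under one corruption while `A` and `y` describe a different forward process.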

Training Diffusion Models on Corrupted Data

A central contribution of this research is a method for training generative models, specifically diffusion models, on linearly corrupted datasets. The methodology, dubbed Ambient Diffusion, trains on data subjected to a known corruption process (e.g., random inpainting). Through a modified training objective, it shows that models trained on heavily corrupted samples can form better priors for inverse problem solving in highly corrupted regimes than counterparts trained on clean data.
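The further-corruption trick can be illustrated schematically, assuming a random-inpainting corruption. The helper name `ambient_training_pair` and the `extra_drop` parameter are hypothetical, and this is a simplified rendering of the recipe, not the paper's exact objective:

```python
import numpy as np

rng = np.random.default_rng(0)

def ambient_training_pair(x_corrupted, mask, extra_drop=0.5):
    """Build one Ambient-Diffusion-style training pair from an already
    inpainting-corrupted sample: hide additional pixels (a strictly more
    severe corruption) and keep the less-corrupted version as the target."""
    keep = rng.random(mask.shape) > extra_drop   # extra pixels to hide
    further = mask * keep                        # strictly more severe mask
    model_input = x_corrupted * further          # what the network sees
    target = x_corrupted                         # supervised only where mask == 1
    return model_input, target, further
```

Because the network never sees which of its inputs' missing pixels were hidden by the original corruption versus the extra one, it is pushed to restore content it was never shown clean.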

Evaluation on Standard Natural Image Datasets

A-DPS is evaluated on several natural image datasets: CelebA, FFHQ, and AFHQ. The experiments target two core tasks, compressed sensing and super-resolution, measuring both reconstruction quality and speed. Notably, models trained on corrupted data can outperform models trained on clean data when the measurements are heavily corrupted. This finding highlights the potential of training on corrupted datasets, extending the applicability of generative models when clean data is scarce.
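For concreteness, here is a minimal sketch of the classic compressed-sensing forward model y = A x with m random Gaussian measurements; the function name and the 1/sqrt(m) normalization are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

def compressed_sensing_measurements(x, m, seed=0):
    """Form m << n random linear measurements y = A x: the standard
    compressed-sensing forward model with a Gaussian sensing matrix."""
    rng = np.random.default_rng(seed)
    n = x.size
    A = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
    return A, A @ x.ravel()
```

Fewer measurements (smaller m/n) correspond to the "high corruption" regime where the corrupted-data priors are reported to shine.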

Extension to Multi-coil Fourier Subsampled MRI

A pivotal extension applies the Ambient Diffusion framework to MRI, a domain where obtaining fully sampled data can be impractical. By training models directly on Fourier subsampled multi-coil MRI measurements at acceleration factors R=2, 4, 6, and 8, the work demonstrates the efficacy of Ambient Diffusion models in medical imaging. Remarkably, models trained on highly subsampled data surpass those trained on fully sampled data for inverse problems at high acceleration factors, an intriguing result for medical image reconstruction, where data corruption is prevalent.
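The multi-coil Fourier-subsampled forward model these experiments rely on can be sketched as follows (coil sensitivities, 2-D FFT, k-space mask); the function and argument names are assumptions for illustration, and real pipelines estimate the coil maps (e.g., with ESPIRiT) rather than assuming them known.

```python
import numpy as np

def multicoil_subsampled_kspace(image, coil_maps, mask):
    """Simplified multi-coil MRI forward model: weight the image by each
    coil's sensitivity map, take a 2-D Fourier transform per coil, then
    keep only the k-space locations selected by the sampling mask."""
    coil_images = coil_maps * image[None, ...]       # (C, H, W)
    kspace = np.fft.fft2(coil_images, norm="ortho")  # FFT over last two axes
    return mask[None, ...] * kspace                  # subsampled measurements
```

The acceleration factor R corresponds to the fraction of k-space lines the mask retains (roughly 1/R), so R=8 training data is severely corrupted in exactly the sense the ambient framework targets.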

Theoretical Grounding and Practical Implications

The paper grounds its experimental findings theoretically, showing that further corrupting already-corrupted samples and training a model to predict the less corrupted original yields, in expectation, correct estimates of the underlying clean signal. This foundation lends credibility to the ambient training methodology and charts a path for future research on training generative models with corrupted datasets.
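Schematically, with notation assumed here rather than taken verbatim from the paper, let A denote the observed corruption, \tilde{A} a strictly more severe corruption of the same sample, and h_\theta the restoration network; the ambient objective then takes a form like:

```latex
% Schematic ambient objective (notation assumed): the network sees the
% further-corrupted sample \tilde{A} x_t, but its prediction is scored
% only through the original corruption A.
\min_\theta \;
\mathbb{E}_{x_0,\, t,\, A,\, \tilde{A}}
\Big\| \, A \big( h_\theta(\tilde{A} x_t,\, \tilde{A},\, t) - x_0 \big) \Big\|_2^2
```

The intuition is that because the loss is evaluated only where A observes the signal, yet the input hides strictly more, minimizing it forces the network to recover content it never sees clean.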

Conclusion and Future Directions

Ambient Diffusion Posterior Sampling marks a step toward leveraging corrupted or partially observed data to train powerful generative models for intricate inverse problems. Beyond natural image restoration, its successful application to MRI reconstruction points to broader implications for medical imaging and other domains where data corruption is the norm. As generative models continue to evolve, training and sampling paradigms that embrace imperfect data will be increasingly important, and A-DPS is an early guide in that direction.
