Diffusion Posterior Sampling for Inverse Problems
- Diffusion Posterior Sampling is a framework that uses diffusion models as generative priors for robust inference in noisy and nonlinear inverse problems.
- It integrates a learned score network with explicit measurement noise modeling to avoid hard projections and mitigate noise amplification.
- The method has been validated on tasks like phase retrieval and deblurring, achieving improved performance over traditional projection-based approaches.
Diffusion Posterior Sampling (DPS) is a principled algorithmic framework that leverages diffusion models as generative priors for solving general noisy (non)linear inverse problems. DPS differentiates itself from earlier generative approaches by efficiently combining a learned score-based prior with explicit measurement noise modeling, thereby enabling robust and statistically meaningful inference even under measurement corruption and nonlinearities.
1. Foundational Principles of Diffusion Posterior Sampling
DPS addresses the problem of sampling from the posterior distribution p(x | y), where x is an unknown signal (such as a clean image) and y = A(x) + n represents noisy, possibly nonlinear, measurements governed by a forward operator A and a noise model n (e.g., Gaussian, Poisson).
Diffusion models, trained as score-based generative models, define a forward SDE that incrementally adds noise to data. The reverse SDE, parameterized by a learned neural "score" network, enables progressive denoising and, ultimately, sample generation from the learned data prior.
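For concreteness, the variance-preserving formulation commonly used in this line of work (the specific schedule β(t) is an assumption here, not spelled out in the text above) pairs the noising SDE with its time reversal:

```latex
% Forward (noising) SDE with noise schedule \beta(t):
dx = -\tfrac{1}{2}\beta(t)\, x \, dt + \sqrt{\beta(t)}\, dw
% Reverse SDE, driven by the score \nabla_x \log p_t(x),
% which the network s_\theta(x, t) is trained to approximate:
dx = \left[ -\tfrac{1}{2}\beta(t)\, x - \beta(t)\, \nabla_x \log p_t(x) \right] dt + \sqrt{\beta(t)}\, d\bar{w}
```

Running the reverse SDE with the learned score in place of the true score turns pure noise into samples from the learned prior.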
Traditional diffusion inverse problem solvers often focus on noiseless and linear settings, performing hard-projection steps for measurement consistency. DPS transcends this limitation by incorporating measurement statistics directly into the sampling process and operating on general (non)linear measurement models, thus aptly reflecting the regime encountered in practical applications such as scientific imaging, medical data analysis, and phase retrieval.
2. Methodological Framework
DPS achieves conditional sampling via a modification of the diffusion model's reverse SDE to incorporate the data likelihood. By Bayes' rule, the conditional score decomposes as

∇_{x_t} log p_t(x_t | y) = ∇_{x_t} log p_t(x_t) + ∇_{x_t} log p(y | x_t),

so the learned prior score is augmented with a likelihood-gradient term at each step, yielding an approximate posterior sampling process.
The key methodological steps are:
- Score Modeling: The prior is learned as a time-conditioned score function s_θ(x_t, t) ≈ ∇_{x_t} log p_t(x_t).
- Approximate Likelihood Gradient: Direct computation of ∇_{x_t} log p(y | x_t) is generally intractable, since it requires marginalizing over all clean signals consistent with x_t. DPS approximates this term by inferring the posterior mean x̂_0 = E[x_0 | x_t] (via Tweedie's formula) and evaluating the likelihood at this mean: p(y | x_t) ≈ p(y | x̂_0(x_t)).
- Implementation for Noise Models: For measurements with Gaussian noise of variance σ², the likelihood gradient reduces (up to constants) to:

∇_{x_t} log p(y | x_t) ≈ −(1/(2σ²)) ∇_{x_t} ‖y − A(x̂_0(x_t))‖²,

where x̂_0(x_t) = (x_t + (1 − ᾱ_t) s_θ(x_t, t)) / √ᾱ_t is the Tweedie denoised estimate. For Poisson noise and more general structured noise models, analogous forms are used.
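The Gaussian likelihood gradient above can be sketched on a toy linear problem where everything is analytic. All names here are illustrative, and the exact score s(x_t) = −x_t holds only for the assumed N(0, I) prior (in the actual method a trained network supplies the score):

```python
import numpy as np

# Toy DPS likelihood gradient for a linear Gaussian model y = A x + n.
# Assumption: the clean signal has prior x0 ~ N(0, I), so the diffused
# marginal is N(0, I) at every t and its exact score is s(x_t) = -x_t.

def x0_hat(x_t, abar):
    """Tweedie posterior mean: (x_t + (1 - abar) * score(x_t)) / sqrt(abar)."""
    score = -x_t  # exact score for the N(0, I) toy prior
    return (x_t + (1.0 - abar) * score) / np.sqrt(abar)

def likelihood_grad(x_t, y, A, sigma, abar):
    """grad_{x_t} log p(y | x_t) under the DPS approximation p(y | x0_hat)."""
    residual = y - A @ x0_hat(x_t, abar)
    # Chain rule through Tweedie: d x0_hat / d x_t = sqrt(abar) * I here.
    return np.sqrt(abar) * (A.T @ residual) / sigma**2
```

With a neural score, the chain-rule factor is no longer a scalar, which is why the paper computes this gradient by automatic differentiation rather than by hand.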
- No Strict Projection: DPS fundamentally avoids hard projection after each sampling step. Instead, observation information is incorporated via a blended statistical gradient, preventing repeated noise amplification and drift away from the data manifold—problems which are pronounced in projection-based methods under noisy measurements.
- Algorithmic Implementation: Full discrete algorithms are provided as Algorithms 1 and 2 in the reference paper. Likelihood gradients are computed via automatic differentiation through the score network and forward operator.
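The steps above can be sketched as a minimal DPS-style sampling loop. This is a toy instance, not the paper's implementation: the prior is assumed to be N(0, I) so the score is analytic, the operator is linear so the likelihood gradient is closed-form, and `zeta` stands in for the paper's step-size heuristic (which in practice absorbs the 1/σ² factor and is normalized by the residual norm):

```python
import numpy as np

def dps_sample(y, A, betas, zeta=0.5, seed=0):
    """Toy DPS loop: ancestral DDPM update + likelihood-gradient correction.

    Assumptions: prior x0 ~ N(0, I) (exact score = -x), linear operator A,
    and a constant step size zeta in place of the paper's heuristic.
    """
    rng = np.random.default_rng(seed)
    alphas = 1.0 - betas
    abars = np.cumprod(alphas)
    d = A.shape[1]
    x = rng.normal(size=d)                                # x_T ~ N(0, I)
    for t in range(len(betas) - 1, -1, -1):
        abar = abars[t]
        score = -x                                        # exact toy score
        x0 = (x + (1.0 - abar) * score) / np.sqrt(abar)   # Tweedie estimate
        # Unconditional ancestral (DDPM) update from the reverse process.
        mean = (x + betas[t] * score) / np.sqrt(alphas[t])
        noise = rng.normal(size=d) if t > 0 else 0.0
        x_prev = mean + np.sqrt(betas[t]) * noise
        # DPS correction: step along grad_x log p(y | x0_hat), no projection.
        grad = np.sqrt(abar) * (A.T @ (y - A @ x0))
        x = x_prev + zeta * grad
    return x
```

Note that the measurement enters only through a gradient step, never through a hard projection, which is the core design choice of DPS.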
3. Handling Noisy and Nonlinear Inverse Problems
DPS generalizes to:
- Gaussian or Poisson noise: Explicit likelihood gradients support structured, signal-dependent, or physically motivated noise models.
- Nonlinear or complex linear operators: Since gradients pass through and can be computed via autodiff, DPS operates on nonlinear measurement problems (e.g., Fourier phase retrieval, nonlinear deblurring).
- Examples: The framework is validated on Fourier phase retrieval, non-uniform deblurring, and classical inpainting/super-resolution tasks, all under realistic noise.
This broad applicability demonstrates the versatility and robustness of DPS well beyond previous denoising diffusion approaches.
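Because the correction only needs ∇_x ‖y − A(x)‖², swapping in a nonlinear operator changes nothing structurally. A minimal sketch, using an illustrative saturating operator A(x) = tanh(Wx) (not one of the paper's operators) and central finite differences as a stand-in for automatic differentiation:

```python
import numpy as np

# Data-fidelity term for a nonlinear forward operator A(x) = tanh(W x).
# Both the operator and the finite-difference stand-in are illustrative;
# the reference implementation uses automatic differentiation instead.

def data_fidelity(x, y, W):
    r = y - np.tanh(W @ x)
    return float(r @ r)

def numerical_grad(f, x, eps=1e-6):
    """Central finite differences: a slow but operator-agnostic gradient."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return g
```

Stepping against this gradient inside the sampling loop gives the nonlinear analogue of the Gaussian correction above.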
4. Comparative Performance and Empirical Results
DPS achieves strong empirical performance advantages:
- Noise Robustness: DPS surpasses Score-SDE, ILVR/MCG, and SVD-based DDRM in FID and LPIPS across moderate to high noise scenarios, avoiding the perceptual and artifact failures of projection methods. Projection-based methods often amplify observed noise, deteriorating quality at each iteration.
- Generality: DPS is not restricted to linear problems or tractable forward decompositions (such as those requiring SVDs), handling both linear and nonlinear operators directly in the image (data) domain.
- Quantitative Results: Tables 1 and 2 show that DPS consistently achieves best or second-best scores on datasets including FFHQ and ImageNet, across noise and measurement conditions.
- Qualitative Improvements: DPS produces sharper, more realistic reconstructions, preserving details lost by aggressive measurement consistency enforcement.
These findings imply that DPS can serve as a reliable inference backbone in both research and application settings, especially where uncertainty, noise, and complex forward models dominate.
5. Implementation, Reproducibility, and Resource Considerations
DPS is made available as open source at https://github.com/DPS2022/diffusion-posterior-sampling.
- Reproducibility: Complete source code, pretrained diffusion score models (for major datasets), and all step size heuristics are provided. Experimental configurations as reported in the paper ensure that results are reproducible.
- Hardware and Computational Requirements: The reference includes hardware specification and runtime analysis for various problem classes, indicating feasibility for research and real-world deployment.
- Transparency: The methodology and algorithmic subtleties, including the avoidance of hard projection, step sizing, and likelihood gradient implementation, are described in detail in both the main text and the appendix.
6. Summary Comparison with Prior Approaches
| Property | Score-SDE / ILVR / MCG | DDRM (SVD-based) | DPS |
| --- | --- | --- | --- |
| Handles noise | Poorly | Only in restricted settings | Yes (general noise models) |
| Nonlinear forward operator | No | No | Yes |
| Data domain | Image | SVD/spectral | Image |
| Projection step | Hard (may amplify noise) | SVD-based inversion | No hard projection |
| Generality | Limited | Limited | General |
This table clarifies both the methodological innovations and expanded problem coverage of DPS in comparison to prior generative and model-based solvers.
7. Implications and Outlook
DPS establishes a general-purpose, noise-aware framework for inverse problems with diffusion models. Key implications include:
- Broader applicability: The ability to handle arbitrary measurement noise and complex operators opens avenues in fields like medical imaging (MRI, CT, phase contrast), computational photography, and scientific instrumentation.
- Foundational role for future work: As diffusion samplers advance in speed and efficiency, DPS provides a sound blueprint for statistical inference, uncertainty quantification, and Bayesian restoration in new domains.
- Research catalyst: The geometric and statistical insight underlying DPS's avoidance of hard projection informs new work in stable, plausible posterior sampling—particularly for the noisy and nonlinear regimes typical in real-world inference.
For technical details, stepwise procedures, and experimental code, refer to the open resource at https://github.com/DPS2022/diffusion-posterior-sampling.