
Efficient Diffusion Posterior Sampling for Noisy Inverse Problems (2503.10237v1)

Published 13 Mar 2025 in math.OC

Abstract: Pretrained diffusion models serve as strong priors for solving inverse problems in a zero-shot manner without task-specific retraining. Unlike unconditional generation, measurement-guided generation requires estimating the expectation of the clean image given the current image and the measurement. Given the theoretical expression for this expectation, the crucial task in solving inverse problems is to estimate the noisy likelihood function at the intermediate image sample. Using Tweedie's formula and the known noise model, existing diffusion posterior sampling methods perform a gradient descent step with backpropagation through the pretrained diffusion model. To alleviate the costly computation and intensive memory consumption of backpropagation, we propose an alternative maximum-a-posteriori (MAP)-based surrogate estimator for the expectation. With this approach and a further density approximation, the MAP estimator for a linear inverse problem is the solution to a traditional regularized optimization whose loss comprises a data-fidelity term and a diffusion-model-related prior term. Integrating the MAP estimator into a general denoising diffusion implicit model (DDIM)-like sampler yields a general framework for solving inverse problems. Our approach closely resembles the existing $\Pi$GDM, but without the manifold projection operation on the gradient descent direction. The method is also extended to nonlinear JPEG decompression. The performance of the proposed posterior sampling is validated across a series of inverse problems, using both VP and VE SDE-based pretrained diffusion models.
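The MAP surrogate described in the abstract can be illustrated for a linear inverse problem $y = Ax + n$. A minimal sketch, assuming a Gaussian measurement noise model with standard deviation $\sigma_y$ and a Gaussian prior of width $r_t$ centered on a denoiser output `x0_hat` (here a placeholder for the Tweedie estimate from a pretrained diffusion model; the exact regularization weighting in the paper may differ):

```python
import numpy as np

def map_estimate(y, A, x0_hat, sigma_y, r_t):
    """MAP surrogate estimate of the clean image for a linear inverse problem.

    Solves the regularized least-squares problem
        min_x ||y - A x||^2 / (2 sigma_y^2) + ||x - x0_hat||^2 / (2 r_t^2),
    whose normal equations are
        (A^T A / sigma_y^2 + I / r_t^2) x = A^T y / sigma_y^2 + x0_hat / r_t^2.
    """
    d = x0_hat.shape[0]
    H = A.T @ A / sigma_y**2 + np.eye(d) / r_t**2
    b = A.T @ y / sigma_y**2 + x0_hat / r_t**2
    return np.linalg.solve(H, b)

# Toy inpainting example: A observes only the first half of the coordinates.
rng = np.random.default_rng(0)
d = 8
x_true = rng.standard_normal(d)
A = np.eye(d)[: d // 2]
sigma_y = 0.05
y = A @ x_true + sigma_y * rng.standard_normal(d // 2)

x0_hat = np.zeros(d)  # stand-in for the diffusion denoiser's Tweedie output
x_map = map_estimate(y, A, x0_hat, sigma_y, r_t=1.0)
```

In a full sampler, `x_map` would replace the plain denoiser output inside a DDIM-like update at each step, avoiding backpropagation through the diffusion network; observed coordinates are pulled toward the measurement while unobserved ones fall back to the prior mean.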
