
Adaptive Posterior Diffusion Sampling (AdaPS)

Updated 30 November 2025
  • Adaptive Posterior Diffusion Sampling (AdaPS) is a framework that employs diffusion models to iteratively adapt measurement strategies and update posterior estimates based on uncertainty.
  • It strategically selects measurements by quantifying posterior uncertainty, ensuring efficient sampling in challenging inverse problems such as compressed sensing and image restoration.
  • Variants like adaptive likelihood scaling, proximal candidate sampling, and adaptive-metric MCMC provide empirical improvements in reconstruction quality, reducing error and enhancing robustness.

Adaptive Posterior Diffusion Sampling (AdaPS) encompasses a family of frameworks and algorithms that leverage adaptive or data-dependent procedures to sample from complex posterior distributions using diffusion models. AdaPS finds critical application in compressed sensing, Bayesian inference, and general inverse problems, systematically updating the sampling or measurement process based on posterior uncertainty or fidelity to observed data. While differing instantiations of the term appear across the literature—including adaptive measurement selection in compressed sensing, adaptive guidance scaling in inverse-problem diffusion sampling, and adaptive-metric Markov Chain Monte Carlo (MCMC) for Bayesian posterior sampling—the unifying theme is iterative adaptation driven by posterior information, typically without additional training or parameter tuning.

1. Foundational Formulation and Principles

Across its variants, AdaPS is predicated on the use of diffusion models as generative priors, often for ill-posed problems where direct inversion is intractable. In the classical compressed sensing (CS) setting (Elata et al., 11 Jul 2024), the goal is to reconstruct $x \in \mathbb{R}^D$ from $d \ll D$ measurements:

$$y = Hx + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2 I)$$

with $H \in \mathbb{R}^{d \times D}$ possibly determined adaptively. The Bayesian posterior is then

$$p(x \mid y) \propto p_\theta(x)\, p(y \mid x)$$

where $p_\theta(x)$ is the diffusion-based prior, typically realized by a pre-trained Denoising Diffusion Probabilistic Model (DDPM). In inverse problems, the general observation model similarly takes $y = Ax + \varepsilon$ for a known (possibly ill-conditioned) operator $A$ (Hen et al., 23 Nov 2025), and the diffusion prior guides the sampling trajectory toward plausible solutions.
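As a concrete reference point, the observation model can be simulated in a few lines of NumPy; the dimensions, noise level, and Gaussian operator below are illustrative placeholders, not values from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d, sigma = 1024, 64, 0.05                  # signal dim, measurement count, noise std (illustrative)

x = rng.standard_normal(D)                    # ground-truth signal (stand-in for an image)
H = rng.standard_normal((d, D)) / np.sqrt(D)  # measurement operator; rows may be chosen adaptively
y = H @ x + sigma * rng.standard_normal(d)    # noisy measurements y = Hx + eps
```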

Crucially, AdaPS modifies the standard (prior) reverse diffusion process by conditioning each sampling step on observed data, and—unlike traditional approaches—adapts key aspects of the sampling or measurement loop based on current posterior estimates or guidance statistics.

2. Posterior Sampling and Adaptive Measurement Selection

In compressed sensing, AdaPS enables adaptive acquisition by quantifying posterior uncertainty and greedily selecting measurements that maximize expected information gain. From a current set of partial observations $y_{0:nr}$, AdaPS generates $s$ samples from the posterior $p_\theta(x \mid y_{0:nr})$ using a zero-shot diffusion sampler (e.g., DDRM). The empirical covariance of these samples serves as an estimate of the posterior uncertainty:

$$\widehat{\mathrm{Cov}}[x \mid y_{0:nr}] = \frac{1}{s} \sum_{i=1}^{s} \bar{x}_i \bar{x}_i^\top, \qquad \bar{x}_i = x_i - \frac{1}{s} \sum_j x_j$$

Measurement selection in the unconstrained case reduces to choosing the top-$r$ eigenvectors of this covariance, while in constrained scenarios (e.g., selecting $k$-space lines in MRI), the optimal measurement is found by maximizing the expected posterior variance in feasible directions (Elata et al., 11 Jul 2024). AdaPS thus alternates:

  1. Sampling the current posterior,
  2. Estimating uncertainty,
  3. Selecting/allocating the next measurements for maximal expected error reduction.

This adapts measurement strategy to the actual posterior structure, outperforming fixed or random sensing in empirical evaluations.
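A minimal sketch of one such adaptation round in the unconstrained case is given below; `sample_posterior` stands in for a zero-shot diffusion sampler such as DDRM and is an assumed interface, not part of any published code:

```python
import numpy as np

def select_next_measurements(sample_posterior, y, H, s=32, r=4):
    """One AdaPS round: sample the posterior, estimate uncertainty, pick rows."""
    # Draw s posterior samples given the current measurements (y, H); shape (s, D).
    X = np.stack([sample_posterior(y, H) for _ in range(s)])
    Xc = X - X.mean(axis=0, keepdims=True)    # center the samples
    # The top-r right singular vectors of Xc equal the top-r eigenvectors of the
    # empirical covariance (1/s) Xc^T Xc, without forming the D x D matrix.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    H_next = Vt[:r]                           # next r measurement directions
    return np.vstack([H, H_next])             # augmented measurement operator
```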

3. Diffusion Guidance: Adaptive Likelihood Scaling in Inverse Problems

Recent advances deploy AdaPS in diffusion-based inverse problems where balancing the prior and likelihood is critical (Hen et al., 23 Nov 2025). The challenge arises because explicit likelihood gradients $\nabla_{x_t} \log p_t(y \mid x_t)$ are intractable; practitioners use surrogates such as the DPS (Dirac) or $\Pi$GDM (Gaussian) approximations:

  • DPS: $g_1 = -\sigma_y^{-2} J_t^\top A^\top (y - A\hat{x}_0)$
  • $\Pi$GDM: $g_2 = J_t^\top A^\top (r_t^2 A A^\top + \sigma_y^2 I)^{-1} (y - A\hat{x}_0)$

AdaPS derives an adaptive step size $\alpha_t$ per iteration by aligning the residual between predicted and MAP noise ($d_t$) with the chosen guidance direction $g_t$, yielding:

$$\alpha_t = 2\gamma_t \frac{\langle d_t, g_t \rangle}{\|g_t\|_2^2}$$

This data-dependent scaling eliminates the need for manual tuning, adapts naturally to the stochasticity parameter and number of diffusion steps, and empirically offers robust performance across varying observation noise and task settings (Hen et al., 23 Nov 2025).

The AdaPS algorithm in this context (summarized in the table below) iteratively performs:

  1. Take a reverse-diffusion (prior) step,
  2. Estimate $\hat{x}_0$,
  3. Compute the guidance surrogate $g_t$ and the residual $d_t$,
  4. Compute the adaptive guidance scale $\alpha_t$,
  5. Update the state with the adaptively scaled data term.

| Step | Description | Details / Key Formula |
|------|-------------|-----------------------|
| 1 | Prior prediction and residuals | $\hat{x}_0$, $d_t$ as in (Hen et al., 23 Nov 2025) |
| 2 | Compute surrogate guidance $g_t$ | DPS / $\Pi$GDM as above |
| 3 | Compute adaptive scale $\alpha_t$ | $\alpha_t = 2\gamma_t \langle d_t, g_t \rangle / \|g_t\|_2^2$ |
| 4 | Update sample | $x_{t-1} \leftarrow$ prior step $- \alpha_t g_t$ |
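A schematic single step of this loop, assuming DPS-style guidance, might look as follows; `predict_x0`, `prior_step`, `d_t`, and `gamma_t` are assumed interfaces and inputs for this sketch, with the exact definitions in (Hen et al., 23 Nov 2025):

```python
import numpy as np

def adaptive_guidance_step(x_t, y, A, predict_x0, prior_step, d_t, gamma_t, sigma_y):
    """One reverse step with DPS-style guidance and the adaptive scale alpha_t."""
    x0_hat = predict_x0(x_t)                  # denoiser's estimate of x_0 from x_t
    # DPS-style surrogate direction; the Jacobian factor J_t^T is folded into
    # the denoiser here, a simplification of this sketch.
    g_t = -(1.0 / sigma_y**2) * (A.T @ (y - A @ x0_hat))
    # Adaptive step size: align the residual d_t with the guidance direction g_t.
    alpha_t = 2.0 * gamma_t * np.dot(d_t, g_t) / (np.dot(g_t, g_t) + 1e-12)
    return prior_step(x_t) - alpha_t * g_t    # prior update minus scaled data term
```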

4. Proximal and Candidate-Based Sampling Schemes

AdaPS has also been instantiated as Diffusion Posterior Proximal Sampling (DPPS) for image restoration (Wu et al., 25 Feb 2024), where, instead of a single stochastic proposal per step, $n$ candidate latents are sampled at each reverse step, and the candidate most consistent with the measurement constraint is selected:

$$i^* = \arg\min_i \left\| A(x_{t-1}^i) - (C_1 x_t + C_2 y) \right\|_2^2, \qquad x_{t-1} \leftarrow x_{t-1}^{i^*}$$

Aligned initialization further improves convergence by combining the measurement signal with noise. Empirical analysis shows that this proximal selection mechanism reduces variance and accelerates convergence toward measurement consistency; beyond a moderate value of $n$, further gains become marginal, while the added computational overhead grows only sub-linearly (Wu et al., 25 Feb 2024).
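A sketch of the candidate-selection step follows. Here `reverse_step` (one stochastic DDPM reverse step) and the coefficients `c1`, `c2` are assumptions, and projecting the current state through $A$ to form the target is a dimensional convenience of this sketch rather than the paper's exact rule:

```python
import numpy as np

def proximal_candidate_step(x_t, y, A, reverse_step, c1, c2, n=20):
    """Draw n candidates for x_{t-1}; keep the most measurement-consistent one."""
    candidates = [reverse_step(x_t) for _ in range(n)]  # n stochastic proposals
    target = c1 * (A @ x_t) + c2 * y          # measurement-space target (cf. C1, C2)
    errors = [np.linalg.norm(A @ xc - target) for xc in candidates]
    return candidates[int(np.argmin(errors))] # proximal selection of x_{t-1}
```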

5. Adaptive Metric MCMC for Bayesian Posteriors

The AdaPS concept also appears in the context of Bayesian neural network posterior sampling as adaptive-metric Langevin or preconditioned samplers (Rensmeyer et al., 13 Mar 2024). These methods employ local, parameter-adaptive step-size matrices (e.g., as in RMSprop or Adam), updating the sample according to the local geometry encoded by running gradient statistics. However, unless the required Itô correction term $\Gamma(\theta)$ is included, such samplers generally do not converge to the true posterior. In dimensions where the full correction is omitted or downscaled, the resulting Markov process converges to a biased (distorted) invariant distribution:

$$\pi(\theta) = Z\, p(\theta \mid D)\, G(\theta)^{-\alpha}$$

where $G(\theta)$ is the (possibly running-averaged) preconditioner and $\alpha$ parameterizes the exponential averaging. This bias persists even as the step size $\epsilon \to 0$, invalidating guarantees of exactness without full second-derivative computations (Rensmeyer et al., 13 Mar 2024).
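The mechanism is easiest to see in a sketch of one RMSprop-style preconditioned Langevin step; all names are illustrative, and the place where the Itô correction $\Gamma(\theta)$ would enter is marked in the comments:

```python
import numpy as np

def preconditioned_langevin_step(theta, v, grad_log_post, eps=1e-4, beta=0.99, delta=1e-8):
    """Preconditioned Langevin step WITHOUT the Ito correction term."""
    g = grad_log_post(theta)                  # gradient of the log posterior
    v = beta * v + (1 - beta) * g**2          # running second-moment estimate
    G_inv = 1.0 / (np.sqrt(v) + delta)        # diagonal preconditioner G(theta)^{-1}
    noise = np.random.standard_normal(theta.shape)
    # An exact sampler would add eps * Gamma(theta) here; omitting it yields the
    # biased invariant distribution pi(theta) ∝ p(theta|D) G(theta)^{-alpha}.
    theta = theta + 0.5 * eps * G_inv * g + np.sqrt(eps * G_inv) * noise
    return theta, v
```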

6. Practical Considerations and Empirical Performance

Across instantiations:

  • No retraining: AdaPS typically requires only a pre-trained diffusion model, eschewing retraining or fine-tuning.
  • Tuning: Key hyperparameters, such as the number of samples per adaptation ($s$, $n$), block size $r$, and number of adaptation steps $N$, enable a trade-off between computational cost and fidelity (Elata et al., 11 Jul 2024, Wu et al., 25 Feb 2024). In practice, modest values ($s \approx 1.3r$, $n = 20$) suffice for near-optimal behavior.
  • Computational cost: Sampling-based approaches are more expensive than deterministic one-pass inversions; however, mechanisms such as proximal selection or accelerated DDPM variants (e.g., DDRM) keep runtimes tractable (Elata et al., 11 Jul 2024, Wu et al., 25 Feb 2024).
  • Robustness and performance: AdaPS methods outperform non-adaptive or heuristic sampling strategies across compressed sensing, MRI/CT, and standard image restoration, with statistically significant improvements in PSNR, SSIM, and LPIPS (Elata et al., 11 Jul 2024, Wu et al., 25 Feb 2024, Hen et al., 23 Nov 2025).

| Application Domain | Performance Gain (relative) | Additional Notes |
|--------------------|-----------------------------|------------------|
| Faces (CS) | +1.6 dB vs. PCA, +6 dB vs. random | Unconstrained; mean-of-posterior estimate improves further |
| MRI (FastMRI) | +2 dB over Poisson-disk | Outperforms hand-engineered downsampling |
| ImageNet SR/deblurring | Lower LPIPS, sharper details | Hyperparameter-free guidance in adaptive schemes |

7. Limitations, Open Questions, and Prospects

Prominent limitations across AdaPS frameworks include:

  • Dependence on the expressiveness and consistency of pre-trained diffusion samplers for accurate posterior characterization.
  • Sampling cost grows with posterior sample count.
  • Frameworks are currently best suited for linear inverse problems; extension to nonlinear or unknown operators remains an open research direction (Elata et al., 11 Jul 2024, Hen et al., 23 Nov 2025).
  • In adaptive-metric MCMC, computational feasibility of the required second-order correction hinders practical unbiasedness (Rensmeyer et al., 13 Mar 2024).

Promising directions involve leveraging distilled or parallelized samplers to amortize sampling cost, incorporating non-Gaussian likelihoods and non-linear operators, and deploying AdaPS in clinical and scientific scenarios for robust uncertainty quantification.

