
Denoising Diffusion Probabilistic Models (DDPM)

Updated 22 September 2025
  • Denoising Diffusion Probabilistic Models (DDPMs) are generative models that reverse noise addition through a learned iterative process to reconstruct high-quality data from corrupted inputs.
  • They employ a two-stage process with a forward diffusion (noise addition) and a reverse denoising mechanism using neural networks, outperforming traditional methods like nonlocal means and UNet-based frameworks.
  • When integrated with PET and MR imaging, DDPMs enhance anatomical detail, reduce bias, and provide uncertainty maps, essential for precise clinical decision-making.

Denoising Diffusion Probabilistic Models (DDPMs) are a class of deep generative models that synthesize data by learning to invert a noise-adding Markov process. Through iterative refinement, a DDPM transforms samples from a simple distribution (such as an isotropic Gaussian) into samples from the empirical data distribution. The framework is characterized by a two-stage process: a forward diffusion process that progressively corrupts clean data with noise, and a reverse diffusion process—parameterized by a neural network—that learns to reconstruct clean data from noisy inputs. This probabilistic approach has found applications in areas such as image, signal, and medical data denoising, generative synthesis, and uncertainty quantification.

1. Probabilistic Model Formulation

The foundational structure of a DDPM comprises two Markov chains: the forward ("noising") chain and the learned reverse ("denoising") chain.

  • Forward diffusion (noising): At each timestep $t$, the data is perturbed by Gaussian noise:

q(x_t | x_{t-1}) = \mathcal{N}(x_t; \sqrt{1-\beta_t}\, x_{t-1}, \beta_t I)

Defining $\alpha_t = 1-\beta_t$ and $\overline{\alpha}_t = \prod_{s=1}^t \alpha_s$, the process can be reparameterized as:

q(x_t | x_0) = \mathcal{N}(x_t; \sqrt{\overline{\alpha}_t}\, x_0, (1-\overline{\alpha}_t) I)

This allows direct sampling of $x_t$ from $x_0$ and random noise $\epsilon \sim \mathcal{N}(0, I)$.
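The closed-form expression above means $x_t$ can be drawn in a single step, without simulating the chain. A minimal NumPy sketch (the linear beta schedule and array shapes are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Linear beta schedule (illustrative values; real schedules vary by application)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)          # cumulative product of alpha_s = 1 - beta_s

def forward_sample(x0, t, alpha_bar, rng):
    """Draw x_t ~ q(x_t | x_0) in closed form, returning both x_t and the
    noise eps that produced it (the regression target for eps_theta)."""
    eps = rng.standard_normal(x0.shape)       # eps ~ N(0, I)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))              # toy stand-in for a clean image
xt, eps = forward_sample(x0, 500, alpha_bar, rng)
```

Because `alpha_bar` decays toward zero, late-timestep samples are dominated by the Gaussian noise term, as the formulation requires.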

  • Reverse diffusion (denoising): The model learns Gaussian transitions that sequentially invert the noising process:

p_\theta(x_{t-1} | x_t) = \mathcal{N}(x_{t-1}; \tilde{\mu}_\theta(x_t,t), \sigma_t^{2} I)

with

\tilde{\mu}_\theta(x_t, t) = \frac{1}{\sqrt{\alpha_t}} \left( x_t - \frac{\beta_t}{\sqrt{1-\overline{\alpha}_t}}\, \epsilon_\theta(x_t, t) \right)

Here, $\epsilon_\theta(x_t, t)$ is a neural network trained to predict the noise added to $x_0$ at timestep $t$.
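In the standard DDPM recipe, this network is trained with a simple mean-squared error between the true and predicted noise, evaluated at randomly drawn timesteps. A sketch of one training term, with `eps_theta` as a placeholder stub (the schedule and shapes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def eps_theta(xt, t):
    """Placeholder for the trained noise-prediction network."""
    return np.zeros_like(xt)

# One Monte Carlo training term: sample a timestep, corrupt x0, regress on eps
x0 = rng.standard_normal((8, 8))
t = int(rng.integers(0, T))
eps = rng.standard_normal(x0.shape)
xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
loss = float(np.mean((eps_theta(xt, t) - eps) ** 2))
```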

This framework yields a generative procedure in which a sample is iteratively refined from noise ($x_T \sim \mathcal{N}(0,I)$) toward a realistic data point by following the learned reverse transitions.
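The full generative procedure can be sketched as an ancestral sampling loop over the learned reverse transitions. This is a minimal illustration, assuming $\sigma_t^2 = \beta_t$ (a common choice) and using a trivial stub in place of a trained network:

```python
import numpy as np

def ddpm_sample(eps_theta, betas, shape, rng):
    """Ancestral sampling: start at x_T ~ N(0, I) and apply the learned
    reverse transitions; no noise is injected at the final step (t = 0)."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    x = rng.standard_normal(shape)                         # x_T ~ N(0, I)
    for t in range(len(betas) - 1, -1, -1):
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t])
                * eps_theta(x, t)) / np.sqrt(alphas[t])
        z = rng.standard_normal(shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * z                   # sigma_t^2 = beta_t
    return x

betas = np.linspace(1e-4, 0.02, 50)                        # short toy schedule
rng = np.random.default_rng(2)
sample = ddpm_sample(lambda x, t: np.zeros_like(x), betas, (8, 8), rng)
```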

2. Conditioning on Prior Information

For practical denoising applications, such as in positron emission tomography (PET) image enhancement, the DDPM framework is extended to incorporate external or prior information:

  • Direct conditioning: The neural network predicting $\epsilon_\theta$ is augmented to accept additional inputs—such as the noisy PET image and/or anatomical priors from MR images—yielding a conditional model of the form:

\epsilon_\theta(x_t, t, x_\text{noisy}, x_\text{prior})

Method variants include:
  • DDPM-PET: noisy PET image as input.
  • DDPM-PETMR: concatenated noisy PET and MR prior as input channels.

  • Data-consistency constraint: In cases where an anatomical prior (e.g., MR) is provided as the main input, but fidelity to the observed noisy PET must be enforced, the reverse step is modified by an additional correction term:

x_{t-1} = \frac{1}{\sqrt{\alpha_t}} \left( x_t - \frac{\beta_t}{\sqrt{1-\overline{\alpha}_t}}\, \epsilon_\theta(x_t, t, x_\text{prior}) \right) - \frac{\sigma_t^2}{\sigma_d^2} (x_\text{noisy} - x_t) + \sigma_t z

where $\sigma_d^2$ is the estimated noise variance of the PET data. This approach is especially effective under varying noise levels.
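A single data-consistent reverse step can be sketched as follows. The signs and scaling follow the update rule quoted above; the choice $\sigma_t^2 = \beta_t$, the stub network, and all array values are illustrative assumptions:

```python
import numpy as np

def reverse_step_dc(xt, t, eps_theta, x_prior, x_noisy,
                    betas, alpha_bar, sigma_d2, rng):
    """One reverse step with the PET data-consistency correction,
    assuming sigma_t^2 = beta_t."""
    sigma_t2 = betas[t]
    mean = (xt - betas[t] / np.sqrt(1.0 - alpha_bar[t])
            * eps_theta(xt, t, x_prior)) / np.sqrt(1.0 - betas[t])
    z = rng.standard_normal(xt.shape) if t > 0 else 0.0
    return mean - (sigma_t2 / sigma_d2) * (x_noisy - xt) + np.sqrt(sigma_t2) * z

betas = np.linspace(1e-4, 0.02, 50)
alpha_bar = np.cumprod(1.0 - betas)
rng = np.random.default_rng(3)
x_noisy = rng.standard_normal((8, 8))          # observed noisy PET (toy)
x_prior = rng.standard_normal((8, 8))          # MR anatomical prior (toy)
xt = rng.standard_normal((8, 8))
x_prev = reverse_step_dc(xt, 10, lambda x, t, p: np.zeros_like(x),
                         x_prior, x_noisy, betas, alpha_bar,
                         sigma_d2=1.0, rng=rng)
```

The correction term rescales with $\sigma_t^2 / \sigma_d^2$, so fidelity to the observed PET is enforced more strongly when the PET data is less noisy.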

3. Comparison with Conventional Denoising Methods

DDPM-based denoising with PET and/or MR prior information is systematically compared against:

| Method | Main Mechanism | Output & Limitations |
|---|---|---|
| Nonlocal means (NLM) | Similarity-based direct filtering | Typically over-smooth; modest anatomical detail |
| UNet-based | Deterministic deep convolutional network | Over-smoothing; single deterministic output |
| DDPM-based | Iterative probabilistic denoising | Multiple stochastic outputs; uncertainty quantification |

Unlike classic methods, DDPMs model the data distribution directly and can generate multiple plausible denoised outputs, enabling uncertainty quantification and improved anatomical fidelity.

4. Quantitative Results and Findings

The studied frameworks were rigorously evaluated on 120 18F-FDG PET datasets and 140 18F-MK-6240 datasets with co-registered MR images:

  • Performance metrics: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), local error maps, and uncertainty quantification via repeated sampling.
  • Key findings:
    • DDPM-PET outperforms NLM and UNet-based denoising in both PSNR and SSIM, indicating superior preservation of fine and structural image details.
    • The inclusion of MR prior (DDPM-PETMR) further improves global image metrics.
    • Relying solely on MR prior (DDPM-MR) can lead to artifacts or bias, especially in regions of high PET uptake.
    • The hybrid approach (DDPM-MR-PETCon), combining MR input and PET data-consistency constraints, yields the lowest regional errors and eliminates bias, particularly in areas with a high dynamic range.
    • The probabilistic nature of DDPMs allows estimation of uncertainty maps, with the uncertainty further reduced by integrating MR priors and PET data consistency.
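Uncertainty estimation via repeated sampling can be sketched as a pixel-wise statistic over multiple posterior draws. The `sample_fn` below is a toy stand-in for repeated DDPM sampling runs, not the paper's pipeline:

```python
import numpy as np

def uncertainty_map(sample_fn, n_samples=16):
    """Pixel-wise mean and standard deviation over repeated posterior
    samples; the std image serves as the uncertainty map."""
    samples = np.stack([sample_fn(i) for i in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

# Toy stand-in: each "posterior sample" is a fixed image plus fresh noise
base = np.random.default_rng(4).standard_normal((8, 8))
mean_img, unc_img = uncertainty_map(
    lambda i: base + 0.1 * np.random.default_rng(i).standard_normal((8, 8)))
```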

5. Theoretical Formulation and Model Variants

The central mathematical mechanisms underpinning the adapted DDPMs for PET image denoising are as follows:

  • Forward process:

q(x_t | x_{t-1}) = \mathcal{N}(x_t; \sqrt{1-\beta_t}\, x_{t-1}, \beta_t I)

q(x_t | x_0) = \mathcal{N}(x_t; \sqrt{\overline{\alpha}_t}\, x_0, (1-\overline{\alpha}_t) I)

  • Reverse process (vanilla):

x_{t-1} = \frac{1}{\sqrt{\alpha_t}} \left( x_t - \frac{\beta_t}{\sqrt{1-\overline{\alpha}_t}}\, \epsilon_\theta(x_t, t) \right) + \sigma_t z

  • Reverse process with data-consistency:

x_{t-1} = \frac{1}{\sqrt{\alpha_t}} \left( x_t - \frac{\beta_t}{\sqrt{1-\overline{\alpha}_t}}\, \epsilon_\theta(x_t, t, x_\text{prior}) \right) - \frac{\sigma_t^2}{\sigma_d^2}(x_\text{noisy} - x_t) + \sigma_t z

where $z \sim \mathcal{N}(0, I)$ is independent Gaussian noise.

6. Clinical and Practical Implications

  • Flexibility: The DDPM framework accommodates arbitrary combinations of prior information and is robust to varying levels of PET noise, obviating the need for specifically paired training sets.
  • Uncertainty quantification: By generating multiple posterior samples, practitioners can compute uncertainty maps, aiding clinical decision making and supporting longitudinal imaging analysis.
  • Superiority in denoising: Across global and local quantitative metrics, DDPM-based methods consistently outperform NLM and UNet-based methods, showing enhanced anatomical detail and lower uncertainty, especially when MR priors and data-consistency constraints are employed.

7. Summary

Denoising Diffusion Probabilistic Models, when adapted to leverage both intrinsic noisy PET data and MR anatomical priors, constitute a highly effective, flexible approach for low-dose PET image denoising. By formulating the PET denoising as a probabilistic iterative refinement task conditioned on all available information and, when appropriate, incorporating explicit data consistency constraints, these models substantially improve image quality and reduce bias and uncertainty relative to conventional techniques. The architectural adaptability and inherent ability for uncertainty estimation make DDPM-based frameworks particularly advantageous in clinical imaging scenarios where both fidelity and reliability are essential (Gong et al., 2022).
