Denoising Diffusion Probabilistic Models (DDPM)
- Denoising Diffusion Probabilistic Models (DDPMs) are generative models that reverse noise addition through a learned iterative process to reconstruct high-quality data from corrupted inputs.
- They employ a two-stage process: a forward diffusion that adds noise and a reverse denoising mechanism parameterized by neural networks; in PET denoising they outperform traditional methods such as nonlocal means and UNet-based frameworks.
- When integrated with PET and MR imaging, DDPMs enhance anatomical detail, reduce bias, and provide uncertainty maps, which is essential for precise clinical decision-making.
Denoising Diffusion Probabilistic Models (DDPMs) are a class of deep generative models that synthesize data by learning to invert a noise-adding Markov process. Through iterative refinement, a DDPM transforms samples from a simple distribution (such as an isotropic Gaussian) into samples from the empirical data distribution. The framework is characterized by a two-stage process: a forward diffusion process that progressively corrupts clean data with noise, and a reverse diffusion process—parameterized by a neural network—that learns to reconstruct clean data from noisy inputs. This probabilistic approach has found applications in areas such as image, signal, and medical data denoising, generative synthesis, and uncertainty quantification.
1. Probabilistic Model Formulation
The foundational structure of a DDPM comprises two Markov chains: the forward ("noising") chain and the learned reverse ("denoising") chain.
- Forward diffusion (noising): At each timestep $t$, the data is perturbed by Gaussian noise:
  $$q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\right)$$
  Defining $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{s=1}^{t}\alpha_s$, the process can be reparameterized as:
  $$x_t = \sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon, \qquad \epsilon \sim \mathcal{N}(0, I)$$
  This allows the direct sampling of $x_t$ from $x_0$ and random noise $\epsilon$.
- Reverse diffusion (denoising): The model learns Gaussian transitions that sequentially invert the noising process:
  $$p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\left(x_{t-1};\ \mu_\theta(x_t, t),\ \sigma_t^2 I\right)$$
  with
  $$\mu_\theta(x_t, t) = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_\theta(x_t, t)\right)$$
  Here, $\epsilon_\theta$ is a neural network trained to predict the noise $\epsilon$ added to $x_0$ at timestep $t$.
This framework yields a generative procedure in which a sample is iteratively refined from noise ($x_T \sim \mathcal{N}(0, I)$) toward a realistic data point by following the learned reverse transitions.
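The following minimal sketch (in PyTorch) illustrates these two mechanisms: closed-form forward sampling of $x_t$ from $x_0$, and a single reverse update driven by a noise-prediction network. The linear beta schedule, the choice $\sigma_t^2 = \beta_t$, and the placeholder `eps_model` are illustrative assumptions, not the settings of any particular study.

```python
import torch

# Minimal sketch of DDPM forward sampling and a single reverse update.
# The linear beta schedule, sigma_t^2 = beta_t, and the placeholder noise
# model are illustrative assumptions.

T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # beta_t for t = 0..T-1
alphas = 1.0 - betas                          # alpha_t = 1 - beta_t
alpha_bars = torch.cumprod(alphas, dim=0)     # alpha_bar_t = prod_{s<=t} alpha_s

def q_sample(x0, t, eps):
    """Forward process: x_t = sqrt(alpha_bar_t)*x_0 + sqrt(1-alpha_bar_t)*eps."""
    ab = alpha_bars[t]
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps

def p_sample_step(eps_model, x_t, t):
    """One reverse step: compute mu_theta from the predicted noise, then add
    sigma_t * z (no noise is added at the final step t = 0)."""
    eps_hat = eps_model(x_t, t)
    mean = (x_t - betas[t] / (1.0 - alpha_bars[t]).sqrt() * eps_hat) / alphas[t].sqrt()
    if t == 0:
        return mean
    return mean + betas[t].sqrt() * torch.randn_like(x_t)

# Usage: corrupt a "clean" image, then take one reverse step with a stand-in
# noise model that simply returns zeros (for illustration only).
x0 = torch.randn(1, 1, 64, 64)
eps = torch.randn_like(x0)
x_t = q_sample(x0, t=500, eps=eps)
x_prev = p_sample_step(lambda x, t: torch.zeros_like(x), x_t, t=500)
```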
2. Conditioning on Prior Information
For practical denoising applications, such as in positron emission tomography (PET) image enhancement, the DDPM framework is extended to incorporate external or prior information:
- Direct conditioning: The neural network $\epsilon_\theta$ that predicts the noise is augmented to accept additional inputs (such as the noisy PET image and/or anatomical priors from MR images), yielding a conditional model of the form $\epsilon_\theta(x_t, t, c)$, where the conditioning images $c$ are concatenated with $x_t$ as input channels.
  Method variants include:
  - DDPM-PET: noisy PET image as the conditioning input.
  - DDPM-PETMR: concatenated noisy PET and MR prior as input channels.
- Data-consistency constraint: In cases where an anatomical prior (e.g., MR) is provided as the main input but fidelity to the observed noisy PET must be enforced, the reverse step is modified by an additional correction term that pulls the intermediate estimate toward the measured noisy PET image, weighted inversely by $\sigma_{\mathrm{PET}}^2$, the estimated noise variance of the PET data. This approach is especially effective under varying noise levels; a rough sketch of both conditioning and this correction follows below.
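As an illustration of how such conditioning can be wired in, the sketch below concatenates the conditioning images (noisy PET and/or MR prior) with $x_t$ along the channel dimension and applies a simple, hedged data-consistency correction toward the measured PET image. The tiny stand-in network, the correction weight, and the function names are assumptions for illustration, not the architecture or exact update rule of Gong et al. (2022).

```python
import torch
import torch.nn as nn

# Sketch of channel-wise conditioning (DDPM-PET / DDPM-PETMR style) and a
# hedged data-consistency correction toward the noisy PET image. The tiny
# stand-in network and the correction weighting are illustrative assumptions.

class ConditionalEpsNet(nn.Module):
    """Noise-prediction network taking x_t plus conditioning image channels."""
    def __init__(self, cond_channels: int):
        super().__init__()
        # Stand-in for the usual UNet backbone: 1 channel for x_t plus the
        # prior channels (timestep embedding omitted for brevity).
        self.net = nn.Sequential(
            nn.Conv2d(1 + cond_channels, 32, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x_t, cond):
        # cond: noisy PET (DDPM-PET) or noisy PET + MR prior (DDPM-PETMR)
        return self.net(torch.cat([x_t, cond], dim=1))

def data_consistency(x_prev, pet_noisy, sigma_pet_sq, weight=0.1):
    """Pull the intermediate estimate toward the measured noisy PET image,
    weighted inversely by the estimated PET noise variance (hedged form)."""
    return x_prev + weight * (pet_noisy - x_prev) / sigma_pet_sq

# Usage: DDPM-PETMR-style conditioning with two prior channels.
x_t = torch.randn(1, 1, 64, 64)
cond = torch.randn(1, 2, 64, 64)   # [noisy PET, MR prior] as input channels
eps_hat = ConditionalEpsNet(cond_channels=2)(x_t, cond)
```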
3. Comparison with Conventional Denoising Methods
DDPM-based denoising with PET and/or MR prior information is systematically compared with conventional approaches, summarized in the table below:
| Method | Main Mechanism | Output & Limitations |
|---|---|---|
| Nonlocal means (NLM) | Similarity-based direct filtering | Typically over-smooth; modest anatomical detail |
| UNet-based | Deterministic deep convolutional network | Prone to over-smoothing; single deterministic output |
| DDPM-based | Iterative probabilistic denoising | Multiple stochastic outputs; uncertainty quantification |
Unlike classic methods, DDPMs model the data distribution directly and can generate multiple plausible denoised outputs, enabling uncertainty quantification and improved anatomical fidelity.
4. Quantitative Results and Findings
The studied frameworks were rigorously evaluated on 120 18F-FDG PET datasets and 140 18F-MK-6240 datasets with co-registered MR images:
- Performance metrics: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), local error maps, and uncertainty quantification via repeated sampling (see the sketch after this list).
- Key findings:
- DDPM-PET outperforms NLM and UNet-based denoising in both PSNR and SSIM, indicating superior preservation of fine structures and overall image detail.
- The inclusion of MR prior (DDPM-PETMR) further improves global image metrics.
- Relying solely on MR prior (DDPM-MR) can lead to artifacts or bias, especially in regions of high PET uptake.
- The hybrid approach (DDPM-MR-PETCon), combining MR input and PET data-consistency constraints, yields the lowest regional errors and eliminates bias, particularly in areas with a high dynamic range.
- The probabilistic nature of DDPMs allows estimation of uncertainty maps, with the uncertainty further reduced by integrating MR priors and PET data consistency.
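For concreteness, the sketch below shows how PSNR and a voxel-wise uncertainty map from repeated stochastic sampling can be computed; `sample_fn` is a hypothetical callable that runs one full reverse (denoising) pass, and SSIM would typically be obtained from an existing implementation such as `skimage.metrics.structural_similarity`. This is illustrative evaluation code, not that of the cited study.

```python
import numpy as np

# Illustrative evaluation sketch: PSNR and a voxel-wise uncertainty map from
# repeated stochastic DDPM samples. `sample_fn` is a hypothetical callable
# that runs one full reverse (denoising) pass and returns an image array.

def psnr(reference, estimate, data_range=None):
    """Peak Signal-to-Noise Ratio in dB."""
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def mean_and_uncertainty(sample_fn, n_samples=10):
    """Repeat the stochastic reverse process and return the voxel-wise mean
    (a denoised estimate) and standard deviation (an uncertainty map)."""
    samples = np.stack([sample_fn() for _ in range(n_samples)], axis=0)
    return samples.mean(axis=0), samples.std(axis=0)
```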
5. Theoretical Formulation and Model Variants
The central mathematical mechanisms underpinning the adapted DDPMs for PET image denoising are as follows:
- Forward process:
  $$q(x_t \mid x_0) = \mathcal{N}\!\left(x_t;\ \sqrt{\bar{\alpha}_t}\,x_0,\ (1-\bar{\alpha}_t)\,I\right)$$
- Reverse process (vanilla):
  $$x_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_\theta(x_t, t, c)\right) + \sigma_t z$$
  where $z \sim \mathcal{N}(0, I)$ is independent Gaussian noise and $c$ denotes the conditioning inputs (noisy PET and/or MR prior).
- Reverse process with data-consistency: the vanilla update is followed by an additive correction term proportional to $(x_{\mathrm{PET}} - x_{t-1}) / \sigma_{\mathrm{PET}}^2$, which enforces fidelity to the measured noisy PET data.
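Putting the pieces together, the following hedged sketch runs the full conditional reverse loop with an optional PET data-consistency correction (in the spirit of DDPM-MR-PETCon). The noise schedule, the `eps_model(x, cond, t)` interface, and the correction weight `lam` are illustrative assumptions rather than the exact settings of the cited work.

```python
import torch

# Hedged sketch of the full conditional reverse loop with an optional PET
# data-consistency correction (DDPM-MR-PETCon in spirit). The schedule, the
# eps_model(x, cond, t) interface, and the weight `lam` are assumptions.

@torch.no_grad()
def ddpm_denoise(eps_model, cond, pet_noisy=None, sigma_pet_sq=1.0,
                 lam=0.1, T=1000, shape=(1, 1, 64, 64)):
    betas = torch.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)                      # x_T ~ N(0, I)
    for t in reversed(range(T)):
        eps_hat = eps_model(x, cond, t)         # conditional noise prediction
        # Vanilla reverse update: mean of p_theta(x_{t-1} | x_t) ...
        x = (x - betas[t] / (1.0 - alpha_bars[t]).sqrt() * eps_hat) / alphas[t].sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)   # ... plus sigma_t * z
        # Optional data-consistency correction toward the measured noisy PET,
        # weighted inversely by its estimated noise variance.
        if pet_noisy is not None:
            x = x + lam * (pet_noisy - x) / sigma_pet_sq
    return x

# Usage with a stand-in noise model (illustration only):
cond = torch.randn(1, 1, 64, 64)               # e.g. MR prior
pet_noisy = torch.randn(1, 1, 64, 64)          # measured noisy PET
out = ddpm_denoise(lambda x, c, t: torch.zeros_like(x), cond, pet_noisy, T=50)
```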
6. Clinical and Practical Implications
- Flexibility: The DDPM framework accommodates arbitrary combinations of prior information and is robust to varying levels of PET noise, obviating the need for specifically paired training sets.
- Uncertainty quantification: By generating multiple posterior samples, practitioners can compute uncertainty maps, aiding clinical decision making and supporting longitudinal imaging analysis.
- Superiority in denoising: Across global and local quantitative metrics, DDPM-based methods consistently outperform NLM and UNet-based methods, showing enhanced anatomical detail and lower uncertainty, especially when MR priors and data-consistency constraints are employed.
7. Summary
Denoising Diffusion Probabilistic Models, when adapted to leverage both intrinsic noisy PET data and MR anatomical priors, constitute a highly effective, flexible approach for low-dose PET image denoising. By formulating PET denoising as a probabilistic, iterative refinement task conditioned on all available information and, when appropriate, incorporating explicit data-consistency constraints, these models substantially improve image quality and reduce bias and uncertainty relative to conventional techniques. The architectural adaptability and inherent ability for uncertainty estimation make DDPM-based frameworks particularly advantageous in clinical imaging scenarios where both fidelity and reliability are essential (Gong et al., 2022).