SPADE-LDM: 3D Cardiac MRI Synthesis

Updated 15 January 2026
  • SPADE-LDM is a 3D conditional generative framework that synthesizes high-fidelity LGE cardiac MRI volumes from composite semantic masks encoding anatomical labels and tissue clusters.
  • Its two-stage architecture combines a 3D convolutional autoencoder with a latent diffusion U-Net enhanced by spatially-adaptive (SPADE) conditioning for precise anatomical structure reproduction.
  • Empirical results demonstrate significant improvements in synthesis fidelity (FID, MMD, MS-SSIM, PSNR) and downstream segmentation performance compared to baseline models such as Pix2Pix and SPADE-GAN.

SPADE-LDM is a 3D conditional generative framework for synthesizing late gadolinium-enhanced (LGE) cardiac MRI volumes from composite semantic masks that encode both anatomical labels and tissue clusters. It integrates spatially-adaptive (SPADE) conditioning with latent diffusion modeling (LDM) within a two-stage architecture, targeting high-fidelity, label-conditioned image synthesis to augment scarce medical imaging data, specifically for improving the segmentation of complex cardiac structures such as the left atrial wall and endocardium (Al-Sanaani et al., 8 Jan 2026).

1. Latent Diffusion Modeling in 3D Medical Image Synthesis

SPADE-LDM employs a two-phase latent diffusion process adapted for 3D volumetric medical images. The framework encodes a real MRI volume x into a latent code z_0 = E(x) using a pretrained variational autoencoder (VAE). The forward noising process in latent space generates a Markov chain \{z_t\}_{t=0}^T with

q(z_t \mid z_0) = \mathcal{N}\left(z_t;\ \sqrt{\bar{\alpha}_t}\, z_0,\ (1-\bar{\alpha}_t)\, I\right)

where a cosine noise schedule governs \bar{\alpha}_t = \prod_{s=1}^t \alpha_s. The reverse denoising process is performed by a 3D U-Net \epsilon_\theta, which estimates the noise vector \epsilon added at each step, optimizing the standard DDPM loss under semantic mask conditioning c:

L_{\mathrm{denoise}}(\theta) = \mathbb{E}_{x,\,\epsilon \sim \mathcal{N}(0,I),\,t,\,c}\left[\left\| \epsilon_\theta(z_t, t, c) - \epsilon \right\|_2^2\right]
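The forward-noising step and the denoising objective above can be sketched in a few lines of numpy. This is an illustrative toy (the schedule constant s = 0.008 follows the common cosine-schedule convention, and `eps_pred` stands in for the U-Net's output, which is not implemented here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Cosine noise schedule: alpha_bar_t decays smoothly from 1 toward 0.
T = 1000
s = 0.008  # conventional offset; an assumption, not stated in the article
t_grid = np.arange(T + 1)
f = np.cos((t_grid / T + s) / (1 + s) * np.pi / 2) ** 2
alpha_bar = f / f[0]  # normalized so alpha_bar[0] = 1

def q_sample(z0, t, eps):
    """Forward noising: z_t = sqrt(abar_t) z0 + sqrt(1 - abar_t) eps."""
    return np.sqrt(alpha_bar[t]) * z0 + np.sqrt(1.0 - alpha_bar[t]) * eps

# Toy latent volume (8 channels, 4x4x2 spatial) standing in for E(x).
z0 = rng.standard_normal((8, 4, 4, 2))
t = 500
eps = rng.standard_normal(z0.shape)
zt = q_sample(z0, t, eps)

# The denoising loss is the MSE between predicted and true noise; a
# hypothetical predictor is mocked as the true eps plus a small error.
eps_pred = eps + 0.01 * rng.standard_normal(eps.shape)
loss = np.mean((eps_pred - eps) ** 2)
```

A trained \epsilon_\theta would replace the mocked `eps_pred`, conditioned on (z_t, t, c) as in the loss above.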

Classifier-free guidance is implemented by randomly replacing c with a null mask during 10% of training steps. During inference, the conditional and unconditional noise estimates are linearly combined with a guidance weight of 1.5.
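The inference-time combination is the standard classifier-free guidance formula; a minimal sketch, with the two noise estimates mocked as random tensors:

```python
import numpy as np

rng = np.random.default_rng(1)

GUIDANCE_WEIGHT = 1.5  # inference setting reported for SPADE-LDM

def cfg_eps(eps_cond, eps_uncond, w=GUIDANCE_WEIGHT):
    """Classifier-free guidance: move from the unconditional estimate
    toward the conditional one with weight w (w > 1 extrapolates)."""
    return eps_uncond + w * (eps_cond - eps_uncond)

eps_c = rng.standard_normal((8, 4, 4, 2))  # eps_theta(z_t, t, c)
eps_u = rng.standard_normal((8, 4, 4, 2))  # eps_theta(z_t, t, null mask)
eps_guided = cfg_eps(eps_c, eps_u)
```

Setting w = 1 recovers the purely conditional estimate and w = 0 the unconditional one; w = 1.5 pushes samples further toward mask adherence.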

2. SPADE Conditioning for Semantic Mask Control

SPADE-LDM incorporates SPADE (Spatially-Adaptive Denormalization) conditioning at every residual block within the diffusion decoder. The semantic mask c contains one-hot channels for anatomical labels (endo = 1, wall = 2) and unsupervised tissue clusters from intensity-based k-means (with K = 2 in the baseline configuration). Each SPADE normalization computes

y_{ij} = \gamma(c)_{ij} \cdot \frac{x_{ij} - \mu(x)}{\sigma(x)} + \beta(c)_{ij}

where \gamma(c) and \beta(c) are learned by two CNNs from the corresponding label channels at each spatial location. This enables spatially-adaptive modulation of decoder features, driving anatomical and textural alignment with the input masks. Empirically, conditioning on composite masks (endo, wall, plus clusters) yields more anatomically coherent context than using only sparse (endo + wall) labels.
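The equation above can be sketched directly. In real SPADE the modulation maps \gamma(c) and \beta(c) come from small learned CNNs; in this illustrative numpy version they are replaced by fixed 1x1 projections (`w_gamma`, `w_beta` are hypothetical stand-ins for the learned parameters):

```python
import numpy as np

rng = np.random.default_rng(2)

def spade(x, mask, w_gamma, w_beta, eps=1e-5):
    """Spatially-adaptive denormalization on 3D features.

    x:    features, shape (C, D, H, W)
    mask: one-hot semantic mask, shape (L, D, H, W)
    """
    # Parameter-free normalization: per-channel mean/std over space.
    mu = x.mean(axis=(1, 2, 3), keepdims=True)
    sigma = x.std(axis=(1, 2, 3), keepdims=True)
    x_norm = (x - mu) / (sigma + eps)
    # gamma(c), beta(c): per-voxel modulation from the label channels
    # (here a linear map over labels instead of a learned CNN).
    gamma = np.tensordot(w_gamma, mask, axes=1)  # -> (C, D, H, W)
    beta = np.tensordot(w_beta, mask, axes=1)
    return gamma * x_norm + beta

C, L = 8, 3          # feature channels; endo + wall + one cluster channel
x = rng.standard_normal((C, 2, 4, 4))
mask = np.zeros((L, 2, 4, 4))
mask[0] = 1.0        # toy mask: every voxel labeled "endo"
w_gamma = rng.standard_normal((C, L))
w_beta = rng.standard_normal((C, L))
y = spade(x, mask, w_gamma, w_beta)
```

Because \gamma and \beta depend on the mask at each location, features inside the endocardium, wall, and cluster regions are modulated differently, which is what drives the anatomical alignment described above.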

3. Network Architecture

The framework consists of two sequential stages:

  • Stage 1: 3D Convolutional Autoencoder
    • Encoder: 4 residual down-sampling blocks (3×3×3 convolutions, GroupNorm, SiLU), each dividing resolution by 2 and doubling channels; final bottleneck has 8 channels.
    • Decoder: 4 symmetric up-sampling blocks with 3D convolutions, GroupNorm, SiLU, trilinear upsampling. Each block is preceded by a SPADE block conditioned on the (upsampled) semantic mask.
    • Discriminator: PatchGAN with R_1 gradient penalty (introduced after 10 epochs).
  • Stage 2: Latent Diffusion U-Net
    • Inputs: Noised latent z_t \in \mathbb{R}^{8 \times D' \times H' \times W'}, a 128-dimensional timestep embedding, and the semantic mask c downsampled to D' \times H' \times W'.
    • U-Net: 4 downsampling/upsampling levels, SPADE-residual blocks, optional self-attention at bottleneck, skip connections; channel widths: 8, 16, 32, 64.
    • Output: A 3×3×3 convolution projects features to the noise estimate \hat{\epsilon}.
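The latent shapes implied by this architecture follow from simple halving arithmetic. Note a small tension in the source: the encoder lists 4 halving blocks (factor 16), while the training section states an overall spatial downsampling factor of 8 (three halvings); the sketch below is generic over the number of halvings:

```python
# Latent-shape bookkeeping for 256 x 256 x 64 input volumes: each
# down-sampling block halves every spatial dimension, and the bottleneck
# carries 8 channels.
def latent_shape(in_shape, n_down, latent_channels=8):
    d, h, w = in_shape
    factor = 2 ** n_down
    return (latent_channels, d // factor, h // factor, w // factor)

# With the stated overall factor of 8 (three halvings):
print(latent_shape((64, 256, 256), n_down=3))  # (8, 8, 32, 32)
```

So z_t in the diffusion U-Net would be an 8-channel volume of roughly 32 × 32 × 8 voxels under the factor-8 reading, making 3D diffusion tractable at batch size 1.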

4. Training Protocol and Objectives

Training unfolds in two stages:

  • Autoencoder (Stage 1):

    • Reconstruction loss:

    L_{\mathrm{AE}} = \lambda_1 \|x - \mathrm{Dec}(E(x), c)\|_1 + \lambda_2\, \mathrm{LP}(x, \mathrm{Dec}(E(x), c)) + \lambda_3\, \mathrm{KL}\left[\mathcal{N}(\mu, \sigma) \,\|\, \mathcal{N}(0, I)\right]

    where \mathrm{LP}(\cdot) is a perceptual loss computed with a pretrained MedicalNet ResNet-50.

    • Adversarial loss:

    L_{\mathrm{GAN}} = \mathbb{E}_{x,c}[-\log D(x, c)] + \mathbb{E}_{x,c}[-\log(1 - D(\mathrm{Dec}(E(x), c), c))] + R_1 \text{ penalty}

  • Diffusion Model (Stage 2):

    • Denoising loss: see above.
    • Shape consistency loss:

    L_{\mathrm{sc}} = \mathbb{E}_x\left[\left(1 - \mathrm{Dice}(\mathcal{S}(\mathrm{Dec}(z_0)), s^*)\right) + \mathrm{CE}(\mathcal{S}(\mathrm{Dec}(z_0)), s^*)\right]

    with \mathcal{S} a frozen 3D U-Net segmenter and s^* the ground-truth mask.

    • Total loss:

    L = L_{\mathrm{denoise}} + \lambda_{\mathrm{sc}}\, L_{\mathrm{sc}}
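The shape-consistency term combines a soft Dice term with cross-entropy. A minimal numpy sketch for the binary case, where `probs` stands in for the frozen segmenter's output \mathcal{S}(\mathrm{Dec}(z_0)):

```python
import numpy as np

def dice(pred, target, eps=1e-6):
    """Soft Dice coefficient between a probability map and a binary mask."""
    inter = np.sum(pred * target)
    return (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def shape_consistency_loss(probs, target, eps=1e-6):
    """L_sc = (1 - Dice) + binary cross-entropy."""
    ce = -np.mean(target * np.log(probs + eps)
                  + (1 - target) * np.log(1 - probs + eps))
    return (1.0 - dice(probs, target)) + ce

# Toy 3D volume: a confident, nearly correct prediction vs. an inverted one.
target = np.zeros((4, 4, 4))
target[1:3, 1:3, 1:3] = 1.0
good = np.clip(target, 0.02, 0.98)        # close to ground truth
bad = np.clip(1.0 - target, 0.02, 0.98)   # anatomically wrong
loss_good = shape_consistency_loss(good, target)
loss_bad = shape_consistency_loss(bad, target)
```

Because the Dice term is region-based and CE is voxel-wise, the combination penalizes both global shape errors and local misclassifications, which is why removing L_sc degrades thin-wall fidelity in the ablations.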

Training is performed for 200 epochs in each stage (batch size 1), using Adam optimizers with distinct hyperparameters for each subcomponent. The bottleneck has 8 channels with spatial downsampling by a factor of 8. Data augmentation incorporates affine transformations, elastic deformations, intensity noise, gamma shifts, and simulated bias-field effects. All LGE MRI volumes and masks are resampled to 256 \times 256 \times 64 voxels at 1 mm isotropic resolution.

5. Synthetic Image Quality and Downstream Segmentation Performance

SPADE-LDM achieves state-of-the-art synthesis fidelity on LGE MRI benchmarks, demonstrably outperforming both Pix2Pix and SPADE-GAN baselines in FID, MMD, MS-SSIM, and PSNR metrics (see table below):

Model        FID ↓     MMD ↓     MS-SSIM ↑   PSNR (dB) ↑
Pix2Pix      40.821    36.890    0.763       23.067
SPADE-GAN     7.652     4.433    0.811       23.542
SPADE-LDM     4.063     2.656    0.826       24.792

When combined with real data, the synthetic LGE volumes raise the Dice score of a 3D U-Net for LA cavity segmentation from 0.908 ± 0.0162 (real only) to 0.936 ± 0.0138 (real + SPADE-LDM synthetic), a statistically significant improvement (p < 0.05, one-tailed Wilcoxon signed-rank test). Qualitative evaluations highlight SPADE-LDM's capability for realistic wall-thickness reproduction (2–3 mm), gadolinium texture, background preservation, and cross-slice anatomical continuity.
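The significance test is a standard paired comparison. The sketch below uses SciPy with entirely hypothetical per-fold Dice scores (the paper reports only means, standard deviations, and the test used, not the individual values):

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-fold Dice scores for the two training regimes,
# invented for illustration around the reported means.
real_only = np.array([0.905, 0.910, 0.895, 0.912, 0.908, 0.921, 0.899, 0.906])
augmented = np.array([0.930, 0.941, 0.925, 0.938, 0.934, 0.948, 0.928, 0.944])

# One-tailed Wilcoxon signed-rank test: is augmented > real_only?
stat, p = wilcoxon(augmented, real_only, alternative="greater")
```

The one-tailed alternative matches the directional claim (augmentation helps), and the signed-rank pairing accounts for fold-to-fold difficulty differences that an unpaired test would ignore.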

6. Ablation Experiments and Analysis

Ablation studies demonstrate that composite semantic masks (endo, wall, and k-means clusters) produce more anatomically complete and coherent context than sparse masks (endo + wall alone). Removing the shape-consistency loss L_{\mathrm{sc}} degrades wall fidelity (FID rises by less than 2%) and reduces segmentation performance when the synthetic data are used for augmentation. Classifier-free guidance with a weight of 1.5 improves both shape adherence and contrast relative to unconditional or fully conditional sampling. SPADE-based normalization is critical for precise structure preservation at both decoder and diffusion scales, with group normalization alone yielding suboptimal results.

Model progression across Pix2Pix, SPADE-GAN, and SPADE-LDM reveals increasingly sophisticated capture of both global anatomy and local texture, with SPADE-LDM providing best fidelity in challenging regions such as the thin LA wall and surrounding myocardium.

7. Significance and Future Directions

SPADE-LDM demonstrates that 3D latent diffusion models conditioned with SPADE on composite anatomical masks can generate realistic, semantically controlled medical images, yielding significant gains as training data for segmentation models. The integration of multi-class semantic guidance, advanced loss composition (reconstruction, adversarial, denoising, and anatomical shape), and classifier-free guidance is instrumental for robust high-resolution synthesis. A plausible implication is that similar architectures could be adapted for other anatomical regions or modalities where annotated sample scarcity and morphological complexity limit conventional supervised learning.

The results indicate that SPADE-LDM provides an effective framework for improving the segmentation of under-represented cardiac structures and may serve as a blueprint for future developments in semantically-conditioned, data-efficient 3D generative imaging (Al-Sanaani et al., 8 Jan 2026).
