Stable Diffusion Model

Updated 19 January 2026
  • Stable Diffusion Model is a latent text-to-image generative system that employs iterative denoising in a compressed latent space with text conditioning.
  • Its architecture integrates a VAE encoder, a U-Net denoiser with cross-attention, and a transformer-based text encoder that provides semantic guidance.
  • Applications include dataset augmentation, semantic keypoint matching, artistic style transfer, and fairness improvement through debiasing techniques.

Stable Diffusion is a latent text-to-image generative model leveraging iterative denoising of latent variables conditioned on text. Its architecture combines a VAE encoder, a U-Net denoiser with cross-attention, and a transformer-based text encoder. Stable Diffusion models have become prominent for synthesizing photorealistic images and underpin recent advances in generative modeling, dataset augmentation, semantic matching, artistic style transfer, fairness enhancement, and hardware-efficient continuous-time priors.

1. Model Architecture and Diffusion Process

Stable Diffusion operates in a compressed latent space, using a VAE encoder to map input images $x_0$ (typically of size $512 \times 512$ or $768 \times 768$) into lower-dimensional latent representations $z_0$ (Stöckl, 2022, Li et al., 2023). The generative process consists of a forward "noising" Markov chain:

$$q(z_t \mid z_{t-1}) = \mathcal{N}\bigl(z_t;\ \sqrt{\alpha_t}\, z_{t-1},\ (1-\alpha_t)I\bigr),$$

where $\alpha_t$ specifies the noise schedule for $t = 1, \dots, T$ steps.

The reverse denoising process is parameterized by a U-Net $\epsilon_\theta(z_t, t, c)$, which predicts the noise $\epsilon$ or reconstructs $z_{t-1}$ from $z_t$ conditioned on text embeddings $c$ from a transformer (commonly CLIP ViT-L/14). Cross-attention mechanisms inject conditioning at each U-Net block, which is crucial for semantic guidance. After $T$ steps, the denoised latent $\hat{z}_0$ is decoded by the VAE back to the image domain.
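To make the conditioning pathway concrete, the following PyTorch sketch shows a generic cross-attention block in which queries come from flattened U-Net feature maps and keys/values come from the text embeddings. It uses `nn.MultiheadAttention` for brevity; the class name, dimensions, and residual wiring are illustrative rather than the exact Stable Diffusion implementation.

```python
import torch
import torch.nn as nn

class TextCrossAttention(nn.Module):
    """Generic cross-attention: U-Net features attend to text embeddings (illustrative)."""
    def __init__(self, feat_dim=320, text_dim=1024, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, heads, kdim=text_dim,
                                          vdim=text_dim, batch_first=True)
        self.proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, h, text_emb):
        # h: (B, H*W, feat_dim) flattened spatial features from a U-Net block
        # text_emb: (B, L, text_dim) token embeddings from the text encoder
        attended, _ = self.attn(query=h, key=text_emb, value=text_emb)
        return h + self.proj(attended)   # residual injection of the text signal
```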

The U-Net in recent versions (e.g., SD v2-1) comprises symmetrical encoder/decoder paths, residual blocks, group normalization, Swish activation, self-attention at intermediate resolutions (8 heads, $D = 1024$), and cross-attention to text embeddings at every block (Li et al., 2023).

Sampling is often accelerated using PLMS or other numerical solvers, with classifier-free guidance (scale $> 1$) biasing the output towards prompt features (Stöckl, 2022).
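The classifier-free guidance computation itself is simple to state in code. Below is a minimal, hedged sketch of one guided noise prediction, assuming a generic noise-prediction U-Net `eps_model(z, t, emb)` (a hypothetical handle, not a specific library API); the guided estimate would then be plugged into a PLMS/DDIM-style update.

```python
import torch

@torch.no_grad()
def guided_noise(eps_model, z_t, t, cond_emb, uncond_emb, guidance_scale=7.5):
    """Classifier-free guidance: blend conditional and unconditional noise predictions."""
    eps_cond = eps_model(z_t, t, cond_emb)      # prediction with the text prompt
    eps_uncond = eps_model(z_t, t, uncond_emb)  # prediction with the empty prompt
    # A scale > 1 pushes the estimate towards prompt-consistent features.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```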

2. Training Objectives and Conditional Inputs

The standard learning objective for Stable Diffusion is denoising-score matching:

$$\mathcal{L}_{\rm SD} = \mathbb{E}_{x_0,\, \epsilon \sim \mathcal{N}(0,I),\, t}\, \bigl\|\epsilon - \epsilon_\theta(z_t, t, c)\bigr\|^2,$$

with $z_t = \sqrt{\bar{\alpha}_t}\, z_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon$, where $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$ is the cumulative noise schedule. Conditioning information $c$ is provided via learned prompt embeddings and text input.
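A minimal training-step sketch of this objective is given below. The `vae`, `text_encoder`, and `unet` handles are generic callables standing in for the frozen autoencoder, the text encoder, and the denoiser; `alpha_bar` is the precomputed cumulative noise schedule. This is an illustration of the loss, not a specific library's training loop.

```python
import torch
import torch.nn.functional as F

def sd_training_step(unet, vae, text_encoder, images, token_ids, alpha_bar):
    """One denoising-score-matching step in latent space (illustrative sketch)."""
    with torch.no_grad():
        z0 = vae(images)               # compressed latents z_0 (encoder assumed frozen)
        c = text_encoder(token_ids)    # text conditioning c
    T = alpha_bar.shape[0]
    t = torch.randint(0, T, (z0.shape[0],), device=z0.device)
    eps = torch.randn_like(z0)
    a = alpha_bar[t].view(-1, 1, 1, 1)
    z_t = a.sqrt() * z0 + (1 - a).sqrt() * eps   # closed-form forward noising
    eps_pred = unet(z_t, t, c)                   # predict the injected noise
    return F.mse_loss(eps_pred, eps)             # L_SD
```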

Recent research extends conditioning beyond text prompts. "SD4Match" introduces prompt-tuning (Li et al., 2023):

  • Prompt matrix $\theta \in \mathbb{R}^{N \times D}$ is prepended to the text tokens for cross-attention (see the sketch after this list).
  • Prompts are initialized as Gaussian noise, generic embeddings, or zeros, and updated solely via gradient descent.
  • Conditional Prompting Module (CPM) in SD4Match fuses local features extracted from frozen DINOv2-ViT encoders of the input image pair and learns adaptive prompts $\theta_{\rm cond}$ via feature pooling and gating, concatenated with a global prompt to form $\theta^{AB}$.
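The core prompt-tuning mechanism (without the full CPM) can be sketched as a single learnable matrix prepended to frozen text embeddings. All names and sizes below are illustrative assumptions; only $\theta$ would receive gradients while the U-Net and text encoder stay frozen.

```python
import torch
import torch.nn as nn

class LearnablePrompt(nn.Module):
    """Prompt matrix theta in R^{N x D}, prepended to frozen text-token embeddings."""
    def __init__(self, n_tokens=16, dim=1024, init="randn"):
        super().__init__()
        if init == "zeros":
            theta = torch.zeros(n_tokens, dim)
        else:
            theta = 0.02 * torch.randn(n_tokens, dim)   # Gaussian initialization
        self.theta = nn.Parameter(theta)                 # the only trainable tensor

    def forward(self, text_emb):
        # text_emb: (B, L, D) frozen text embeddings fed to U-Net cross-attention
        prompt = self.theta.unsqueeze(0).expand(text_emb.shape[0], -1, -1)
        return torch.cat([prompt, text_emb], dim=1)      # (B, N + L, D)
```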

In style transfer with ControlNet (Gu et al., 2024), conditioning is extended to input control maps (e.g., Canny edge features), injected through zero-initialized $1 \times 1$ convolutions at all residual blocks of the trainable U-Net copy, preserving the original model's semantics while isolating stylistic details.
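A loose sketch of the zero-convolution idea is shown below: the trainable branch is connected through a $1 \times 1$ convolution whose weights start at zero, so at initialization the control signal contributes nothing and the frozen model's behaviour is preserved. The block structure here is a simplification of the actual ControlNet wiring, and the module names are hypothetical.

```python
import torch
import torch.nn as nn

def zero_conv(channels):
    """1x1 convolution initialized to zero, ControlNet-style."""
    conv = nn.Conv2d(channels, channels, kernel_size=1)
    nn.init.zeros_(conv.weight)
    nn.init.zeros_(conv.bias)
    return conv

class ControlledBlock(nn.Module):
    """Frozen block plus a trainable copy driven by a control feature map (sketch)."""
    def __init__(self, frozen_block, trainable_copy, channels):
        super().__init__()
        self.frozen = frozen_block.requires_grad_(False)
        self.copy = trainable_copy
        self.zero = zero_conv(channels)

    def forward(self, h, control_feat):
        # At initialization self.zero outputs zeros, so the result equals the frozen path.
        return self.frozen(h) + self.zero(self.copy(h + control_feat))
```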

3. Applications: Data Synthesis, Semantic Matching, and Art Style Transfer

Stable Diffusion supports large-scale generation of synthetic datasets for data augmentation. For example, 262,040 images were produced using WordNet glosses as prompts across 26,204 synsets (Stöckl, 2022). Each synset definition string (e.g., "a member of the genus Canis...") was used without engineered tokens or styles, with images generated at fixed resolution and standard hyperparameters.
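A generation loop of this kind can be reproduced with the Hugging Face `diffusers` pipeline and NLTK's WordNet interface, as in the hedged sketch below; the checkpoint ID, output directory, and hyperparameters are illustrative and not necessarily those used in the cited work.

```python
# pip install diffusers transformers nltk  (plus nltk.download("wordnet"))
import os
import torch
from diffusers import StableDiffusionPipeline
from nltk.corpus import wordnet as wn

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",        # illustrative checkpoint
    torch_dtype=torch.float16,
).to("cuda")

os.makedirs("synth", exist_ok=True)
for synset in wn.all_synsets("n"):             # iterate noun synsets
    prompt = synset.definition()               # raw gloss, no prompt engineering
    images = pipe(prompt, num_images_per_prompt=10,
                  num_inference_steps=50, guidance_scale=7.5).images
    for j, img in enumerate(images):
        img.save(f"synth/{synset.name()}_{j}.png")
```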

In semantic keypoint matching, SD4Match leverages intermediate U-Net feature maps for robust correspondence extraction. On the SPair-71k dataset, SD4Match with a learned universal prompt achieves a 19.7% gain over DIFT, a further +2.9% with per-category prompts, and state-of-the-art performance (75.5%) with fully conditional local prompting (Li et al., 2023). On PF-Pascal and PF-Willow, SD4Match-CPM reaches accuracies of 95.2% and 80.4% respectively, outperforming baselines by 10–15 points.
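The correspondence step itself reduces to nearest-neighbour search over dense diffusion features. The sketch below illustrates that generic matching procedure; how the feature maps are extracted from the U-Net (layer, timestep, prompt) is the part SD4Match optimizes and is not shown here.

```python
import torch
import torch.nn.functional as F

def match_keypoints(feat_a, feat_b, kpts_a):
    """Nearest-neighbour matching over dense feature maps (illustrative).

    feat_a, feat_b : (C, H, W) intermediate U-Net features of images A and B
    kpts_a         : list of (x, y) integer keypoints on A's feature grid
    Returns matched (x, y) locations on B's feature grid.
    """
    C, H, W = feat_b.shape
    fb = F.normalize(feat_b.reshape(C, -1), dim=0)    # (C, H*W) unit descriptors
    matches = []
    for x, y in kpts_a:
        fa = F.normalize(feat_a[:, y, x], dim=0)       # query descriptor, shape (C,)
        idx = (fa @ fb).argmax().item()                # cosine-similarity argmax
        matches.append((idx % W, idx // W))
    return matches
```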

For artistic style migration, Fine-tuned Stable Diffusion with ControlNet (FSDMC) extracts Jiehua painting style features by imposing Canny edges as control maps and artist-name prompts, leveraging classifier-free guidance dropout (50%) for semantic disentanglement (Gu et al., 2024). Quantitative evaluation demonstrates FSDMC yields FID=3.27 (versus CycleGAN's >12.8) and expert scores averaging 4.2/5, outperforming CycleGAN by roughly 2×.
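Generic ControlNet-over-Canny conditioning of this kind can be run with `diffusers`, as in the sketch below; the checkpoints and prompt are illustrative stand-ins, not the fine-tuned FSDMC model or the Jiehua training data.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Canny edge map of the content image serves as the control map.
gray = cv2.cvtColor(cv2.imread("content.jpg"), cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)
control = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",              # illustrative base checkpoint
    controlnet=controlnet, torch_dtype=torch.float16).to("cuda")

image = pipe("a landscape in Jiehua painting style", image=control,
             num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("stylized.png")
```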

4. Fairness and Debiasing in Stable Diffusion

The Debiasing Diffusion Model (DDM) advances unsupervised fairness by embedding an indicator network $C_\phi$ alongside the core U-Net (Huang et al., 16 Mar 2025). The DDM loss is a convex combination:

$$\mathcal{L}_{\rm DDM} = (1-\alpha)\,\mathcal{L}_{\rm SDM} + \alpha\,\mathcal{L}_{\rm ind},$$

where $\mathcal{L}_{\rm ind}$ is a cross-entropy loss on dataset-level target/non-target labels. DDM propagates debiasing gradients into the U-Net during training, stripping latent features of sensitive-attribute signals (gender, digit identity), but leaves inference unchanged. Experiments on artificially skewed CelebAMask-HQ (faces) and MNIST (digits) demonstrate up to a 70% reduction in fairness discrepancy (FD), with the fidelity–fairness trade-off controlled via $\alpha$.
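A hedged sketch of the combined objective is given below. It assumes a hypothetical U-Net interface that also returns intermediate features for the indicator network $C_\phi$; the exact feature tap point and gradient routing follow the paper and are not reproduced here.

```python
import torch.nn.functional as F

def ddm_loss(unet, indicator, z_t, t, c, eps_true, group_labels, alpha=0.3):
    """Convex combination of the diffusion loss and the indicator loss (sketch)."""
    # `return_features=True` is a hypothetical API for tapping intermediate features.
    eps_pred, feats = unet(z_t, t, c, return_features=True)
    l_sdm = F.mse_loss(eps_pred, eps_true)                   # standard SD objective
    l_ind = F.cross_entropy(indicator(feats), group_labels)  # dataset-level labels
    return (1 - alpha) * l_sdm + alpha * l_ind
```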

DDM provides fairness without requiring granular sensitive-attribute labels, relying only on dataset-level target/non-target annotations.
