Latent Discrete Diffusion Models (LDDMs)

Updated 27 October 2025
  • LDDMs are hybrid generative models that combine discrete masked diffusion with continuous latent diffusion, improving joint coherence and sample efficiency.
  • They employ techniques like variational inference, optimal transport, and particle-based gradient flows to stabilize training and boost generative fidelity.
  • Applications span language, image, biological sequences, graphs, and physics emulation, demonstrating their versatility and practical impact.

Latent Discrete Diffusion Models (LDDMs) constitute a class of generative architectures that combine discrete masked diffusion processes over categorical data (such as tokens, codebook indices, or categorical labels) with auxiliary continuous diffusion over latent embeddings. This hybridization draws on the strengths of both modalities while mitigating their individual weaknesses, aiming to improve joint generation quality, sample efficiency, and adaptability across language, image, biological sequence, and graph domains. LDDMs enhance the global coherence of samples generated with parallel discrete diffusion steps, enable richer latent reasoning, and support learning techniques rooted in variational inference, optimal transport, and hybrid training objectives.

1. Motivation and Theoretical Foundations

The introduction of LDDMs is driven by specific shortcomings of discrete masked denoising diffusion models, particularly when reverse transitions factorize independently across positions (e.g., tokens), which diminishes joint structure, especially under few-step generation regimes (Shariatian et al., 20 Oct 2025). Discrete diffusion models typically make hard commitments at every unmasking step, which can lead to incoherent outputs and difficulty resolving token-level ambiguities. Continuous diffusion models, in contrast, provide soft updates and allow uncertainty to be amortized over successive refinements, but may underperform in trainability or discrete decoding quality.

As established in theoretical work (Zhou et al., 3 Oct 2025), continuous diffusion models possess strictly greater expressivity than their discrete analogues; they can simulate the evolution of looped transformers and preserve richer semantic information along the generative trajectory. However, the challenge lies in decoding meaningful discrete outputs from continuous representations and stabilizing training. By directly coupling discrete diffusion with continuous channels—via either joint (FUJI-LDDMs) or sequential (SEQ-LDDMs) denoising—LDDMs leverage soft cross-token dependencies from latents to guide discrete sampling, achieving superior sample quality and more reliable joint structure.

2. Architectures and Algorithmic Frameworks

LDDMs are instantiated in multiple forms. A common structure begins with a trainable or frozen encoder $E$ mapping the input discrete sequence $x_0$ to a continuous latent $y_0$. The state $z = (x, y)$ is propagated via independent forward processes:

  • The discrete (token) channel uses a masked diffusion forward kernel, such as

$$q_t(x_t \mid x_0) = \gamma_t\, x_0 + (1 - \gamma_t)\, m$$

where $m$ is the mask token and $\gamma_t$ governs noise scheduling.

  • The continuous latent channel follows a Gaussian schedule

$$y_t = \alpha_t\, y_{t-1} + \sigma_t\, \varepsilon_t$$

where $(\alpha_t, \sigma_t)$ parameterize time-dependent scaling and can be set as in standard DDPMs (a minimal sketch of both forward channels follows this list).
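The paired forward processes above can be summarized in a short code sketch. This is an illustrative PyTorch fragment, not the reference implementation from the cited work; the tensor shapes, the `mask_id` argument, and the per-step keep probability $\gamma_t$ are assumptions made for concreteness.

```python
import torch

def forward_noise_step(x0, y_prev, gamma_t, alpha_t, sigma_t, mask_id):
    """One forward (noising) step for the joint state z_t = (x_t, y_t).

    x0      : (B, L) long tensor of clean token ids
    y_prev  : (B, L, D) continuous latent from the previous step
    gamma_t : probability of keeping a token unmasked at time t
    alpha_t, sigma_t : Gaussian schedule coefficients for the latent channel
    mask_id : index of the [MASK] token
    """
    # Discrete channel: keep each token with probability gamma_t, else replace it with the mask token.
    keep = torch.rand_like(x0, dtype=torch.float) < gamma_t
    x_t = torch.where(keep, x0, torch.full_like(x0, mask_id))

    # Continuous channel: standard Gaussian diffusion update on the latent embeddings.
    eps = torch.randn_like(y_prev)
    y_t = alpha_t * y_prev + sigma_t * eps
    return x_t, y_t
```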

Reverse transitions are parameterized as follows:

  • Joint (FUJI-LDDM): Both $x$ and $y$ are denoised in parallel, with the network output conditioned on the full $(x_t, y_t, t)$ state. Per-position categorical predictions for $x$ and vector predictions for $y$ are computed jointly.
  • Sequential (SEQ-LDDM): The continuous latent channel is first denoised through $T_Y$ steps, yielding a globally coherent $y_0$; discrete channel denoising for $T_X$ steps follows, guided by the resolved $y_0$ (a sampling sketch follows this list).
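As a rough illustration of the sequential variant, the loop below first resolves the latent channel and then unmasks tokens conditioned on it. It is a schematic sketch under assumed interfaces: `denoise_latent`, `unmask_tokens`, `reveal_prob`, and `mask_id` are hypothetical names, and the reveal rule is a generic random-subset scheme rather than the authors' exact sampler.

```python
import torch

@torch.no_grad()
def seq_lddm_sample(model, x_T, y_T, T_Y, T_X):
    """Sequential (SEQ-LDDM) sampling sketch: resolve the latent first,
    then unmask tokens conditioned on the resolved latent."""
    # Stage 1: denoise the continuous latent channel for T_Y steps.
    y = y_T
    for t in reversed(range(T_Y)):
        y = model.denoise_latent(y, x_T, t)  # soft, global refinement y_t -> y_{t-1}

    # Stage 2: denoise the discrete channel for T_X steps, guided by the resolved y_0.
    x = x_T
    for t in reversed(range(T_X)):
        logits = model.unmask_tokens(x, y, t)                       # (B, L, V) categorical logits
        proposal = torch.distributions.Categorical(logits=logits).sample()
        still_masked = x == model.mask_id
        reveal = still_masked & (torch.rand_like(x, dtype=torch.float) < model.reveal_prob(t))
        x = torch.where(reveal, proposal, x)                        # commit a subset of positions per step
    return x, y
```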

Training objectives are constructed via ELBO-style variational losses:

$$L(\theta, \phi) = \mathbb{E}\left[\sum_t \lambda_t^x \log \langle x_\theta(z_t, t),\, x_0 \rangle + \lambda_t^y\, \| y_\theta(z_t, t) - y_0 \|^2 \right],$$

where $\lambda_t^x, \lambda_t^y$ control the weighting of reconstruction and latent consistency. Sampling can be performed with aggressive parallelism (many tokens revealed per step), with the latent channel acting as a soft prior to guide ambiguous decisions.
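A minimal sketch of one training step under this objective is given below. The sign convention (minimizing the negative of the weighted token log-likelihood term), the model interface returning per-position logits and a latent estimate, and the reduction over positions are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def lddm_loss(model, x0, y0, z_t, t, lambda_x, lambda_y):
    """ELBO-style LDDM training loss sketch for one (noised state, timestep) pair.
    x0: (B, L) clean token ids; y0: (B, L, D) clean latents; z_t: noised joint state."""
    logits, y_pred = model(z_t, t)  # (B, L, V) token logits and (B, L, D) latent estimate

    # Discrete term: log <x_theta(z_t, t), x_0>, i.e. the log-probability of the clean tokens.
    log_probs = F.log_softmax(logits, dim=-1)
    token_ll = log_probs.gather(-1, x0.unsqueeze(-1)).squeeze(-1).sum(dim=-1)

    # Continuous term: squared error between predicted and clean latents, ||y_theta(z_t, t) - y_0||^2.
    latent_mse = ((y_pred - y0) ** 2).sum(dim=(-1, -2))

    # Minimize the negative weighted log-likelihood plus the weighted latent reconstruction error.
    return (-lambda_x * token_ll + lambda_y * latent_mse).mean()
```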

3. Advancements in Training and Sampling Efficiency

Key algorithmic innovations to enhance LDDM efficiency include:

  • Deterministic Loopholing Pathways: The Loopholing mechanism (Jo et al., 22 Oct 2025) introduces a deterministic latent pathway that carries a high-dimensional contextual embedding $\mathbf{h}_t$ across steps, circumventing the sampling wall where categorical sampling discards distributional richness. At every denoising stage, both a discrete sample and a deterministic latent state are propagated, mitigating oscillation and idle steps while enabling more coherent text generation and substantially reducing generative perplexity.
  • Self-Conditioning: Training employs efficient two-pass self-conditioning: in the first pass the latent context is set to zero, and in a second pass the model conditions on the latent output of the first (with gradients stopped). This reduces computation while maintaining recurrent dependencies and information propagation through the latent pathway (a minimal sketch follows this list).
  • Gradient Flow and Interacting Particles: Particle-based gradient flows (Wang et al., 18 May 2025) reformulate LDDM training as minimization of a free energy functional using interacting particle approximations. Each empirical posterior $q^m(z_0)$ over data points is represented by a cloud of particles, and parameters $(\theta, \phi)$ and particles are updated via Euler–Maruyama discretization, enabling distributed, scalable training with theoretical error guarantees.
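The two-pass self-conditioning scheme mentioned above can be sketched as follows. The `model(z_t, t, h_prev)` interface returning a prediction and a latent context is a hypothetical signature chosen for illustration; only the gradient-stopping structure mirrors the description in the text.

```python
import torch

def self_conditioned_loss(model, z_t, t, target, loss_fn):
    """Two-pass self-conditioning sketch for one training step."""
    # Pass 1: run with an empty latent context and discard gradients.
    with torch.no_grad():
        _, h_prev = model(z_t, t, h_prev=None)

    # Pass 2: condition on the detached latent from pass 1; only this pass receives gradients.
    pred, _ = model(z_t, t, h_prev=h_prev.detach())
    return loss_fn(pred, target)
```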

4. Evaluation, Robustness, and Quantization

Evaluation of LDDMs employs metrics distinct from conventional per-token likelihoods to emphasize joint coherence and latent informativeness. For example:

  • Unconditional Generation: Lower generative perplexity and sample entropy that more closely matches the data distribution indicate improved sample quality and diversity (Shariatian et al., 20 Oct 2025, Jo et al., 22 Oct 2025).
  • Robustness: Feature-level adversarial attacks target internal modules (encoder, quantization, U-Net blocks) via perturbations that maximize the $\ell_2$ distance in latent space under $\ell_\infty$-norm constraints (Zhang et al., 2023). Defense mechanisms include geometry-based random resizing/padding and pre/post-processing (JPEG, Gaussian noise).
  • Quantization for Edge Deployment: The Signal-to-Quantization-Noise Ratio (SQNR) guides both block-wise and module-wise hybrid quantization strategies (Yang et al., 2023), allowing LDDMs to be deployed efficiently on low-resource hardware (a sketch of an SQNR-guided precision choice follows this list).
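To make the SQNR criterion concrete, the sketch below compares full-precision and quantized activations per block and keeps the lower precision only where the ratio stays above a threshold. The 30 dB threshold, the dictionary-of-activations interface, and the int8/fp16 fallback are illustrative assumptions, not details of the cited work.

```python
import torch

def sqnr_db(reference, quantized):
    """Signal-to-Quantization-Noise Ratio in dB between full-precision and quantized activations."""
    signal_power = reference.pow(2).mean()
    noise_power = (reference - quantized).pow(2).mean().clamp_min(1e-12)
    return 10.0 * torch.log10(signal_power / noise_power)

def choose_precision(block_outputs_fp, block_outputs_int8, threshold_db=30.0):
    """Hybrid-quantization heuristic sketch: keep a block in int8 only if its SQNR
    stays above a chosen threshold; otherwise fall back to higher precision.
    The 30 dB threshold is illustrative, not taken from the cited work."""
    decisions = {}
    for name in block_outputs_fp:
        score = sqnr_db(block_outputs_fp[name], block_outputs_int8[name])
        decisions[name] = "int8" if score >= threshold_db else "fp16"
    return decisions
```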

5. Applications and Empirical Results

LDDMs have shown versatility across domains:

  • Language Generation and Reasoning: Hybrid planner–executor systems couple a DDLM “planner” with an autoregressive “executor,” achieving higher accuracy and efficiency via latent-space communication through learned projection of DDLM latents into ARM embeddings (Berrayana et al., 17 Oct 2025). In arithmetic and logic tasks, LDDMs with deterministic latent pathways outperform baseline discrete diffusion models.
  • Biological Sequence Generation: DiscDiff and RNAdiffusion architectures (Li et al., 2023, Huang et al., 15 Sep 2024) map discrete biological sequences into continuous latent spaces for synthetic DNA/RNA generation, employing novel metrics such as Fréchet Reconstruction Distance (FReD) and reward-guided latent optimization for properties like translation efficiency.
  • Image Synthesis: VMAE-based latent spaces (Lee et al., 14 Jul 2025) balance smoothness, perceptual compression, and reconstruction, improving both generation fidelity and computational efficiency. Hierarchical masked autoencoding ensures semantic preservation under high compression.
  • Graph and Sound Generation: Hyperbolic latent spaces (Fu et al., 6 May 2024) model anisotropic diffusion for graph generation, while low-rank LoRA adaptation and contrastive objectives enable efficient text-to-sound synthesis under constrained resources (Niu et al., 24 May 2024).
  • Physics Emulation: Diffusion in heavily compressed latent spaces maintains predictive accuracy for complex dynamical systems even under $\sim 1000\times$ compression, enabling rapid and uncertainty-aware emulation (Rozet et al., 3 Jul 2025).

6. Future Directions and Open Challenges

Further research will extend LDDMs with:

  • Architectural Variants: Exploration of end-to-end learned, discrete, or graph-structured latent spaces beyond continuous vectors; improved joint and sequential denoising strategies coordinated with optimal noise schedules (Shariatian et al., 20 Oct 2025).
  • Hierarchical and Interpretable Latents: Forward-backward probing (Sclocchi et al., 17 Oct 2024) reveals phase transitions and correlated blockwise changes in data tied to latent hierarchy; integrating such diagnostics may improve interpretability and structure discovery.
  • Conditional and Controlled Generation: Controlled optimization for biological and multimodal tasks (e.g. reward-guided synthesis of RNA, attribute-preserving image anonymization via modular conditioning) remains an active area.
  • Scalable and Resource-Efficient Training: Advanced quantization, particle-based gradient flows, and efficient adaptation methods (e.g. LoRA) facilitate the deployment of LDDMs in real-world settings with strict computational or memory constraints.
  • Generalization of Reasoning and Planning: Joint continuous–discrete frameworks (e.g. CCDD) demonstrate enhanced latent reasoning and knowledge transfer capacities via embedding operations borrowed from large pretrained LLMs (Zhou et al., 3 Oct 2025).

LDDMs represent a significant paradigm in discrete generative modeling, bridging the practical efficiency of masked diffusion with the theoretical strengths of global latent reasoning and flexible architectural design. Progress in this area is anticipated to yield further advances in high-fidelity, interpretable, and resource-efficient generative systems for diverse structured data.
