Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion (2407.01392v4)

Published 1 Jul 2024 in cs.LG, cs.CV, and cs.RO

Abstract: This paper presents Diffusion Forcing, a new training paradigm where a diffusion model is trained to denoise a set of tokens with independent per-token noise levels. We apply Diffusion Forcing to sequence generative modeling by training a causal next-token prediction model to generate one or several future tokens without fully diffusing past ones. Our approach is shown to combine the strengths of next-token prediction models, such as variable-length generation, with the strengths of full-sequence diffusion models, such as the ability to guide sampling to desirable trajectories. Our method offers a range of additional capabilities, such as (1) rolling-out sequences of continuous tokens, such as video, with lengths past the training horizon, where baselines diverge and (2) new sampling and guiding schemes that uniquely profit from Diffusion Forcing's variable-horizon and causal architecture, and which lead to marked performance gains in decision-making and planning tasks. In addition to its empirical success, our method is proven to optimize a variational lower bound on the likelihoods of all subsequences of tokens drawn from the true joint distribution. Project website: https://boyuan.space/diffusion-forcing

Authors (6)
  1. Boyuan Chen (75 papers)
  2. Diego Marti Monso (1 paper)
  3. Yilun Du (113 papers)
  4. Max Simchowitz (59 papers)
  5. Russ Tedrake (91 papers)
  6. Vincent Sitzmann (38 papers)
Citations (22)

Summary

Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion

The paper "Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion" introduces a novel sequence generative model termed Diffusion Forcing (DF). This new paradigm synergizes the core benefits of next-token prediction models with those of full-sequence diffusion models to deliver enhanced capabilities in sequence modeling. The authors deploy DF across various domains, highlighting improvements in video prediction, decision-making, imitation learning, and time series forecasting.

Overview

Diffusion Forcing trains a diffusion model to denoise a set of tokens, each with an independently sampled noise level. This bridges the gap between next-token prediction and full-sequence diffusion models. DF generates tokens sequentially, allowing flexible sequence lengths and robust long-horizon generation, and it supports sampling and guidance schemes that exploit its variable-horizon, causal architecture.

Technical Contributions

Next-Token Prediction and Full-Sequence Diffusion

Next-token prediction models, commonly trained via teacher forcing, predict the immediate next token from the ground-truth history of previous tokens. These models enable variable-length sequence generation, support efficient tree search, and are well suited to online feedback control. However, they lack a mechanism for guiding sampling toward desirable sequences (for example, high-reward trajectories) and often become unstable when rolling out continuous data over long horizons.
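As a point of reference, a minimal teacher-forced training step for a continuous next-token model might look like the sketch below (PyTorch-style; the GRU backbone, tensor shapes, and MSE loss are illustrative assumptions, not the paper's baseline implementation):

```python
import torch
import torch.nn as nn

# Hypothetical next-token model: a small recurrent net over (batch, time, feature) tokens.
backbone = nn.GRU(input_size=16, hidden_size=64, batch_first=True)
head = nn.Linear(64, 16)

x = torch.randn(8, 32, 16)               # ground-truth continuous token sequence
inputs, targets = x[:, :-1], x[:, 1:]    # teacher forcing: condition on the true history

hidden, _ = backbone(inputs)
pred = head(hidden)                      # predict token t+1 from tokens up to t
loss = nn.functional.mse_loss(pred, targets)
loss.backward()
```

At sampling time such a model must feed its own (possibly imperfect) predictions back in, which is where instability on continuous data tends to appear.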

Conversely, full-sequence diffusion models excel in generating continuous signals, guiding sampling towards desirable sequences, and planning in decision-making applications. These models typically require non-causal, unmasked architectures, constraining their flexibility in variable-length generation and subsequence generation.

Diffusion Forcing (DF)

DF addresses the limitations of both approaches. It trains using a novel scheme where each token features a randomly assigned, independent noise level, and the tokens are denoised according to independent schedules via a shared model. The method employs a causal architecture (Causal Diffusion Forcing, CDF) such that future tokens depend on past noisy tokens.
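A minimal sketch of this training scheme, under several simplifying assumptions, is given below. The `CausalDenoiser`, the linear beta schedule, and all shapes are hypothetical stand-ins rather than the paper's implementation; the point being illustrated is that every token receives its own independently sampled noise level, and a causal network learns to predict the noise given the noisy history and the per-token levels:

```python
import torch
import torch.nn as nn

K = 1000                                        # number of diffusion noise levels
betas = torch.linspace(1e-4, 0.02, K)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative signal-retention schedule

class CausalDenoiser(nn.Module):
    """Toy causal denoiser: each position sees current and past noisy tokens
    plus an embedding of its own noise level (illustrative, not the paper's network)."""
    def __init__(self, dim=16, hidden=64):
        super().__init__()
        self.level_emb = nn.Embedding(K, dim)
        self.rnn = nn.GRU(dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, dim)

    def forward(self, noisy_tokens, levels):
        h, _ = self.rnn(noisy_tokens + self.level_emb(levels))
        return self.out(h)                      # predicted per-token noise

model = CausalDenoiser()
x = torch.randn(8, 32, 16)                      # clean token sequence (batch, time, dim)

# The core of Diffusion Forcing training: an independent noise level per token.
k = torch.randint(0, K, (8, 32))
a = alpha_bar[k].unsqueeze(-1)                  # (8, 32, 1), broadcasts over features
eps = torch.randn_like(x)
noisy = a.sqrt() * x + (1 - a).sqrt() * eps     # each token noised to its own level

loss = nn.functional.mse_loss(model(noisy, k), eps)
loss.backward()
```

Roughly speaking, setting all per-token levels equal recovers full-sequence diffusion training, while fully noising only the last token resembles next-token prediction, which is the sense in which DF interpolates between the two.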

The DF training objective optimizes a variational lower bound on the log-likelihoods of all subsequences of tokens drawn from the true joint distribution, giving the method a rigorous theoretical foundation. The resulting model supports flexible horizon control, stochastic rollouts, robustness to noisy observations, and stable long-horizon generation. During causal rollouts, keeping a slight amount of noise on previously generated tokens maintains stability over long sequences without compounding errors.
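Continuing the sketch above (and reusing `model`, `betas`, `alpha_bar`, and `K` from it), an autoregressive rollout might look roughly as follows. Conditioning on history re-noised to a small fixed level `k_stab` and the plain DDPM-style reverse update are illustrative assumptions, not the paper's exact sampler:

```python
import torch

@torch.no_grad()
def rollout(model, history, n_new, k_stab=10):
    """Generate n_new tokens autoregressively. Past tokens are conditioned on at a
    small noise level k_stab rather than fully clean -- an illustrative take on the
    'slight noise during rollout' stabilization described above."""
    B, _, D = history.shape
    seq = history
    for _ in range(n_new):
        new = torch.randn(B, 1, D)                   # the new token starts as pure noise
        for k in reversed(range(K)):
            levels = torch.full((B, seq.shape[1] + 1), k_stab, dtype=torch.long)
            levels[:, -1] = k                        # only the newest token is at level k
            hist = (alpha_bar[k_stab].sqrt() * seq
                    + (1 - alpha_bar[k_stab]).sqrt() * torch.randn_like(seq))
            eps_hat = model(torch.cat([hist, new], dim=1), levels)[:, -1:]
            new = (new - betas[k] / (1 - alpha_bar[k]).sqrt() * eps_hat) / (1 - betas[k]).sqrt()
            if k > 0:
                new = new + betas[k].sqrt() * torch.randn_like(new)  # DDPM-style posterior noise
        seq = torch.cat([seq, new], dim=1)
    return seq

# Example: extend an 8-sequence batch of 32 tokens by 4 further tokens.
extended = rollout(model, x, n_new=4)
```

Because the horizon is simply the number of outer-loop iterations, the same procedure covers short and long rollouts alike.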

Empirical Results

The empirical validation spans several high-impact domains:

  1. Video Prediction: DF stabilizes long-horizon generation markedly better than next-token diffusion and full-sequence diffusion baselines, generating temporally consistent sequences that do not diverge past the training horizon. The authors attribute this to the stabilizing effect of keeping slight noise on past tokens during rollout.
  2. Sequential Decision Making and Planning: Benchmarking on D4RL's maze environments, DF achieves higher rewards than state-of-the-art offline RL methods and Diffuser. Monte Carlo Tree Guidance (MCTG) and a "zig-zag" sampling schedule allow DF to outperform non-causal full-sequence diffusion in generating high-reward trajectories (a generic guidance sketch follows this list).
  3. Robust Imitation Learning: DF's capability to integrate memory significantly improves long-horizon tasks such as robotic manipulation, outperforming diffusion policies lacking memory. Furthermore, it demonstrates robustness to visually noisy or occluded observations.
  4. Time Series Forecasting: DF shows competitive performance on standard benchmarks, validating that the new paradigm does not trade off general performance for its specialized capabilities.
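The guidance schemes above are specific to the paper, but the basic mechanism of steering a diffusion sampler with a reward signal can be sketched generically. The snippet below (reusing `betas` and `alpha_bar` from the training sketch) implements plain reward-gradient guidance in the spirit of Diffuser-style planning; it is not the paper's MCTG scheme:

```python
import torch

def guided_reverse_step(new, eps_hat, k, reward_fn, scale=0.1):
    """One guided reverse step: nudge the denoised mean with the gradient of a reward
    evaluated on the current noisy token. Generic reward-gradient (classifier-style)
    guidance, not the paper's Monte Carlo Tree Guidance."""
    with torch.enable_grad():
        xk = new.detach().requires_grad_(True)
        grad = torch.autograd.grad(reward_fn(xk).sum(), xk)[0]
    mean = (new - betas[k] / (1 - alpha_bar[k]).sqrt() * eps_hat) / (1 - betas[k]).sqrt()
    return mean + scale * grad                   # bias sampling toward higher reward
```

Swapping this in for the unguided update inside the earlier rollout sketch would bias each new token toward higher values of `reward_fn`.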

Practical and Theoretical Implications

DF offers substantial practical benefits across sequential modeling tasks: it enables robust, flexible-horizon sequence modeling applicable to video generation, decision-making, robotic control, and other domains requiring stable long-term prediction. Theoretically, its variational-lower-bound formulation grounds the use of per-token noise, inviting further work on noise-robust sequence models.

Future Directions

While the current work focuses on RNN-based causal architectures, scaling DF with transformers or other advanced architectures represents a promising avenue. Further, exploring modified noise schedules and guidance frameworks can enhance the denoising efficiency and quality of generation. Finally, applying DF to expansive and diverse datasets could lead to more generalized models suited for real-world applications.

In conclusion, Diffusion Forcing innovatively bridges next-token prediction with full-sequence diffusion, offering a robust, flexible, and theoretically grounded approach to sequence generative modeling. Its capabilities hold potential for significant advancements in AI-driven sequence manipulation and generation across various complex domains.
