
Self-conditioned Embedding Diffusion for Text Generation (2211.04236v1)

Published 8 Nov 2022 in cs.CL and cs.LG

Abstract: Can continuous diffusion models bring the same performance breakthrough on natural language that they did for image generation? To circumvent the discrete nature of text data, we can simply project tokens into a continuous space of embeddings, as is standard in language modeling. We propose Self-conditioned Embedding Diffusion, a continuous diffusion mechanism that operates on token embeddings and allows learning flexible and scalable diffusion models for both conditional and unconditional text generation. Through qualitative and quantitative evaluation, we show that our text diffusion models generate samples comparable with those produced by standard autoregressive language models - while being in theory more efficient on accelerator hardware at inference time. Our work paves the way for scaling up diffusion models for text, similarly to autoregressive models, and for improving performance with recent refinements to continuous diffusion.
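
The abstract only sketches the mechanism at a high level. Below is a minimal PyTorch sketch of the general idea it describes: tokens are projected to continuous embeddings, noised under a diffusion schedule, and a denoiser regresses the clean embeddings while also conditioning on its own previous estimate (self-conditioning). The embedding dimension, cosine noise schedule, Transformer backbone, and 50% self-conditioning rate here are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch of continuous diffusion on token embeddings with self-conditioning.
# Architecture sizes and the noise schedule are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingDenoiser(nn.Module):
    def __init__(self, vocab_size=32000, dim=256, n_layers=4, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)       # token -> continuous embedding
        self.time_mlp = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        # The denoiser sees the noisy embeddings concatenated with its previous
        # estimate of the clean embeddings (self-conditioning).
        self.in_proj = nn.Linear(2 * dim, dim)
        layer = nn.TransformerEncoderLayer(dim, n_heads, 4 * dim, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x_t, t, x0_prev):
        h = self.in_proj(torch.cat([x_t, x0_prev], dim=-1))
        h = h + self.time_mlp(t[:, None, None])          # broadcast timestep embedding
        return self.out_proj(self.backbone(h))           # predicted clean embeddings

def training_step(model, tokens):
    x0 = model.embed(tokens)                             # project tokens to embedding space
    t = torch.rand(tokens.size(0))                       # continuous diffusion time in [0, 1]
    alpha = torch.cos(0.5 * torch.pi * t)[:, None, None] # assumed cosine schedule
    sigma = torch.sin(0.5 * torch.pi * t)[:, None, None]
    x_t = alpha * x0 + sigma * torch.randn_like(x0)      # noise the embeddings
    # Self-conditioning: half the time, condition on a first, detached estimate.
    x0_prev = torch.zeros_like(x0)
    if torch.rand(()) < 0.5:
        with torch.no_grad():
            x0_prev = model(x_t, t, x0_prev)
    x0_hat = model(x_t, t, x0_prev)
    return F.mse_loss(x0_hat, x0)                        # regress the clean embeddings
```

At inference, one would start from pure-noise embeddings for every position, iteratively denoise all positions in parallel, and map each final embedding back to its nearest token embedding; this parallel generation is the source of the abstract's claim of potential efficiency on accelerator hardware compared with token-by-token autoregressive decoding.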

Authors (11)
  1. Robin Strudel (13 papers)
  2. Corentin Tallec (16 papers)
  3. Florent Altché (18 papers)
  4. Yilun Du (113 papers)
  5. Yaroslav Ganin (14 papers)
  6. Arthur Mensch (26 papers)
  7. Will Grathwohl (18 papers)
  8. Nikolay Savinov (16 papers)
  9. Sander Dieleman (29 papers)
  10. Laurent Sifre (21 papers)
  11. Rémi Leblond (10 papers)
Citations (73)
