
How Much is Enough? A Study on Diffusion Times in Score-based Generative Models (2206.05173v1)

Published 10 Jun 2022 in stat.ML and cs.LG

Abstract: Score-based diffusion models are a class of generative models whose dynamics are described by stochastic differential equations that map noise into data. While recent works have started to lay down a theoretical foundation for these models, an analytical understanding of the role of the diffusion time T is still lacking. Current best practice advocates for a large T to ensure that the forward dynamics brings the diffusion sufficiently close to a known and simple noise distribution; however, a smaller value of T should be preferred for a better approximation of the score-matching objective and higher computational efficiency. Starting from a variational interpretation of diffusion models, in this work we quantify this trade-off, and suggest a new method to improve the quality and efficiency of both training and sampling, by adopting smaller diffusion times. Indeed, we show how an auxiliary model can be used to bridge the gap between the ideal and the simulated forward dynamics, followed by a standard reverse diffusion process. Empirical results support our analysis; for image data, our method is competitive with the state of the art, according to standard sample quality metrics and log-likelihood.
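
To make the prior-mismatch side of this trade-off concrete, the sketch below (not taken from the paper) computes, for a variance-preserving SDE with the standard linear beta schedule and a Gaussian toy data distribution, the KL divergence between the forward marginal at time T and the N(0, I) prior. The schedule values, dimensionality, data scale, and function names are assumptions for illustration only; the paper's auxiliary-model bridge is not implemented here.

```python
# Minimal sketch (assumptions noted above, not the paper's code): how far the
# VP-SDE forward marginal at time T is from the N(0, I) prior when the data is
# itself Gaussian, x_0 ~ N(0, sigma_data^2 I). Larger T shrinks this gap, which
# is the argument for large diffusion times that the paper revisits.
import numpy as np

BETA_MIN, BETA_MAX = 0.1, 20.0      # commonly used linear VP-SDE schedule (assumed)
D, SIGMA_DATA = 32 * 32 * 3, 0.5    # image-like dimensionality and data scale (illustrative)

def integrated_beta(t):
    """B(t) = int_0^t beta(s) ds for beta(s) = BETA_MIN + s * (BETA_MAX - BETA_MIN)."""
    return BETA_MIN * t + 0.5 * (BETA_MAX - BETA_MIN) * t ** 2

def prior_gap_kl(T):
    """KL( forward marginal at time T || N(0, I) ) for Gaussian data.

    Under the VP-SDE, x_T | x_0 ~ N(x_0 * exp(-B(T)/2), (1 - exp(-B(T))) I),
    so for x_0 ~ N(0, s^2 I) the marginal is N(0, v I) with
    v = s^2 * exp(-B(T)) + 1 - exp(-B(T)), and the KL has a closed form.
    """
    decay = np.exp(-integrated_beta(T))
    v = SIGMA_DATA ** 2 * decay + 1.0 - decay
    return 0.5 * D * (v - np.log(v) - 1.0)

for T in [0.25, 0.5, 0.75, 1.0]:
    print(f"T = {T:.2f}  KL to prior: {prior_gap_kl(T):.4e} nats")
```

Running this shows the gap falling by several orders of magnitude as T grows from 0.25 to 1.0; the abstract's point is that the score-matching error and sampling cost move in the opposite direction, which is why the choice of T is a genuine trade-off rather than "the larger, the better".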

Authors (7)
  1. Giulio Franzese (18 papers)
  2. Simone Rossi (21 papers)
  3. Lixuan Yang (6 papers)
  4. Alessandro Finamore (19 papers)
  5. Dario Rossi (42 papers)
  6. Maurizio Filippone (58 papers)
  7. Pietro Michiardi (58 papers)
Citations (41)
