DiffFlow: A Unified SDE Framework for Score-Based Diffusion Models and Generative Adversarial Networks (2307.02159v1)

Published 5 Jul 2023 in stat.ML, cs.CV, cs.LG, and math.AP

Abstract: Generative models can be categorized into two types: explicit generative models, which define explicit density forms and allow exact likelihood inference, such as score-based diffusion models (SDMs) and normalizing flows; and implicit generative models, which directly learn a transformation from the prior to the data distribution, such as generative adversarial networks (GANs). While both types of models have shown great success, each suffers from limitations that prevent it from achieving fast sampling and high sample quality simultaneously. In this paper, we propose a unified theoretical framework for SDMs and GANs. We show that: i) the learning dynamics of both SDMs and GANs can be described by a novel SDE, named the Discriminator Denoising Diffusion Flow (DiffFlow), whose drift is determined by a weighted combination of the scores of the real data and the generated data; ii) by adjusting the relative weights of the different score terms, we obtain a smooth transition between SDMs and GANs, while the marginal distribution of the SDE remains invariant to the change of weights; iii) we prove the asymptotic optimality and a maximum likelihood training scheme for the DiffFlow dynamics; iv) under our unified framework, we introduce several instantiations of DiffFlow that yield new algorithms beyond GANs and SDMs, with exact likelihood inference and the potential to achieve a flexible trade-off between high sample quality and fast sampling speed.
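
The abstract does not state the DiffFlow SDE explicitly, so the following is only a minimal sketch, under stated assumptions, of the general idea it describes: a drift built from a weighted combination of two score functions, integrated with Euler–Maruyama. All names (score_real, score_gen, diffflow_step), the weights, the noise level, and the toy Gaussian scores are illustrative placeholders, not the paper's definitions.

```python
import numpy as np

# Illustrative sketch only: an SDE whose drift is a weighted combination of the
# score of the "real" data and the score of the "generated" data. The functional
# forms, weights, and noise schedule below are placeholder assumptions.

def score_real(x):
    # Stand-in for grad_x log p_data(x): score of a unit Gaussian centered at +2.
    return -(x - 2.0)

def score_gen(x):
    # Stand-in for grad_x log p_gen(x): score of a unit Gaussian centered at -2.
    return -(x + 2.0)

def diffflow_step(x, dt, w_real, w_gen, sigma, rng):
    """One Euler-Maruyama step of a generic 'weighted scores' SDE."""
    drift = w_real * score_real(x) - w_gen * score_gen(x)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x + drift * dt + noise

rng = np.random.default_rng(0)
x = rng.standard_normal(8)          # a few particles drawn from the prior
for _ in range(2000):
    x = diffflow_step(x, dt=1e-3, w_real=1.0, w_gen=0.5, sigma=0.5, rng=rng)
print(x.round(2))                   # particles are driven toward the real-score mode
                                    # and away from the generated-score mode
```

In this toy setup, sending w_gen to zero leaves a pure Langevin-type dynamic driven only by the real-data score (an SDM-like limit), while a nonzero w_gen adds an adversarial-style repulsion from the generated distribution; the paper's actual weighting schedule and its marginal-invariance result are what make this interpolation rigorous.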
