Optimal Budgeted Rejection Sampling for Generative Models (2311.00460v2)

Published 1 Nov 2023 in cs.LG

Abstract: Rejection sampling methods have recently been proposed to improve the performance of discriminator-based generative models. However, these methods are only optimal under an unlimited sampling budget, and are usually applied to a generator trained independently of the rejection procedure. We first propose an Optimal Budgeted Rejection Sampling (OBRS) scheme that is provably optimal with respect to any f-divergence between the true distribution and the post-rejection distribution, for a given sampling budget. Second, we propose an end-to-end method that incorporates the sampling scheme into the training procedure to further enhance the model's overall performance. Through experiments and supporting theory, we show that the proposed methods are effective in significantly improving the quality and diversity of the samples.
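
To make the abstract's core idea concrete, here is a minimal sketch of budgeted rejection sampling: acceptance probabilities are capped at min(1, r(x)/c), where r(x) = p(x)/q(x) is a density ratio (in practice estimated from a discriminator via r(x) = D(x)/(1 - D(x))), and the constant c is tuned so that the expected acceptance rate matches the budget. This is an illustrative assumption-laden sketch of the general mechanism, not the paper's exact OBRS derivation; the function names and the bisection-based tuning are hypothetical.

```python
import numpy as np

def budgeted_rejection_sample(proposal_samples, density_ratio, budget, seed=0):
    """Sketch of budgeted rejection sampling (illustrative, not the paper's exact scheme).

    proposal_samples: candidate samples x ~ q from the generator
    density_ratio:    callable estimating r(x) = p(x)/q(x), e.g. recovered from
                      a trained discriminator D via r(x) = D(x) / (1 - D(x))
    budget:           target expected acceptance rate in (0, 1]; a rate of 1/K
                      means roughly K proposals per accepted sample
    """
    ratios = np.asarray([density_ratio(x) for x in proposal_samples])

    # Capped acceptance probabilities a(x) = min(1, r(x) / c). The mean of
    # a(x) is monotone decreasing in c, so bisection finds the c at which
    # the expected acceptance rate equals the budget.
    def mean_acceptance(c):
        return np.mean(np.minimum(1.0, ratios / c))

    lo, hi = 1e-8, ratios.max() / budget + 1e-8  # brackets the root
    for _ in range(100):
        c = 0.5 * (lo + hi)
        if mean_acceptance(c) > budget:
            lo = c  # acceptance too high: raise the cap constant
        else:
            hi = c
    c = 0.5 * (lo + hi)

    accept_prob = np.minimum(1.0, ratios / c)
    rng = np.random.default_rng(seed)
    keep = rng.random(len(ratios)) < accept_prob
    return [x for x, k in zip(proposal_samples, keep) if k]
```

In the unlimited-budget regime (budget small enough that c exceeds every ratio), the cap never binds and this reduces to classical rejection sampling with acceptance r(x)/c; the budgeted variant trades some correction accuracy for a guaranteed sampling cost, which is the trade-off the paper shows how to make optimally.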
