Diffusion Random Feature Model (2310.04417v2)

Published 6 Oct 2023 in stat.ML and cs.LG

Abstract: Diffusion probabilistic models have been used successfully to generate data from noise. However, most diffusion models are computationally expensive, difficult to interpret, and lack theoretical justification. Random feature models, on the other hand, have gained popularity due to their interpretability, but their application to complex machine learning tasks remains limited. In this work, we present a diffusion model-inspired deep random feature model that is interpretable and gives numerical results comparable to a fully connected neural network with the same number of trainable parameters. Specifically, we extend existing results for random features and derive generalization bounds between the distribution of sampled data and the true distribution using properties of score matching. We validate our findings by generating samples on the Fashion-MNIST dataset and instrumental audio data.
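
The abstract combines two ingredients: random feature models and score matching. As a rough illustration of how these pieces fit together, and not the paper's actual architecture or training procedure, the sketch below fits a random Fourier feature readout to a denoising score-matching target at a single noise level and then samples with unadjusted Langevin dynamics. All dimensions, noise scales, and regularization values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: a random-feature score model trained with denoising score
# matching, then sampled with Langevin dynamics. All hyperparameters below
# (feature count, noise level sigma, ridge penalty lam, step size) are
# illustrative assumptions, not values from the paper.

rng = np.random.default_rng(0)
d, N = 2, 512                       # data dimension, number of random features
W = rng.normal(size=(N, d))         # random weights, fixed (not trained)
b = rng.uniform(0.0, 2 * np.pi, N)  # random phases, fixed (not trained)

def features(x):
    # Random Fourier features phi(x) = cos(W x + b); only the linear
    # readout on top of these fixed features is learned.
    return np.cos(x @ W.T + b)

def fit_score(X, sigma=0.5, lam=1e-3):
    # Denoising score matching at one noise level: perturb the data and
    # regress the features onto the score of the Gaussian perturbation
    # kernel, -(x_noisy - x) / sigma^2.
    noise = sigma * rng.normal(size=X.shape)
    X_noisy = X + noise
    target = -noise / sigma**2
    Phi = features(X_noisy)                                   # shape (n, N)
    # Ridge-regularized least squares for the readout C of shape (N, d).
    C = np.linalg.solve(Phi.T @ Phi + lam * np.eye(N), Phi.T @ target)
    return C

def score(x, C):
    return features(x) @ C

# Toy usage: learn the score of a 2-D Gaussian blob, then run unadjusted
# Langevin steps starting from pure noise.
X = rng.normal(loc=3.0, scale=0.3, size=(2000, d))
C = fit_score(X)
x = rng.normal(size=(200, d))
step = 0.01
for _ in range(300):
    x = x + step * score(x, C) + np.sqrt(2 * step) * rng.normal(size=x.shape)
```

This single-layer version is far simpler than the paper's deep random feature construction; it is only meant to show how fixed random features, a trained linear readout, and a score-matching objective interact.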

