Kernelised Normalising Flows (2307.14839v4)

Published 27 Jul 2023 in stat.ML and cs.LG

Abstract: Normalising Flows are non-parametric statistical models characterised by their dual capabilities of density estimation and generation. This duality requires an inherently invertible architecture. However, the requirement of invertibility imposes constraints on their expressiveness, necessitating a large number of parameters and innovative architectural designs to achieve good results. Whilst flow-based models predominantly rely on neural-network-based transformations for expressive designs, alternative transformation methods have received limited attention. In this work, we present Ferumal flow, a novel kernelised normalising flow paradigm that integrates kernels into the framework. Our results demonstrate that a kernelised flow can yield competitive or superior results compared to neural network-based flows whilst maintaining parameter efficiency. Kernelised flows excel especially in the low-data regime, enabling flexible non-parametric density estimation in applications with sparse data availability.
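
The abstract rests on two ideas: density estimation through an invertible map (the change-of-variables formula) and replacing neural-network transformations with kernel expansions. The sketch below is purely illustrative and is not the paper's Ferumal flow construction; the 1-D Gaussian-CDF parameterisation, the softplus positivity trick, and all names (ToyKernelFlow, rbf) are assumptions made for this toy example.

```python
# Illustrative sketch only, NOT the Ferumal flow of the paper: a monotone 1-D
# map built from a kernel expansion, with exact log-density via change of variables.
import math
import torch


def rbf(x, centres, bandwidth):
    # Gaussian (RBF) kernel matrix between points x (n,) and fixed centres (m,).
    return torch.exp(-0.5 * ((x[:, None] - centres[None, :]) / bandwidth) ** 2)


class ToyKernelFlow(torch.nn.Module):
    """Monotone map z = a*x + sum_i w_i * Phi((x - c_i) / h) with a, w_i > 0."""

    def __init__(self, n_centres=16, bandwidth=0.5):
        super().__init__()
        self.register_buffer("centres", torch.linspace(-3.0, 3.0, n_centres))
        self.bandwidth = bandwidth
        # Unconstrained parameters; softplus keeps the slope strictly positive,
        # which guarantees the transformation is invertible.
        self.raw_weights = torch.nn.Parameter(torch.zeros(n_centres))
        self.raw_scale = torch.nn.Parameter(torch.zeros(()))

    def forward(self, x):
        w = torch.nn.functional.softplus(self.raw_weights)
        a = torch.nn.functional.softplus(self.raw_scale) + 1e-3
        std_normal = torch.distributions.Normal(0.0, 1.0)
        # Forward map: affine part plus a weighted sum of Gaussian-CDF "bumps".
        z = a * x + (std_normal.cdf((x[:, None] - self.centres) / self.bandwidth) * w).sum(-1)
        # Exact derivative dz/dx: each CDF bump contributes a scaled RBF kernel term.
        dzdx = a + (rbf(x, self.centres, self.bandwidth) * w).sum(-1) / (
            self.bandwidth * math.sqrt(2 * math.pi))
        return z, torch.log(dzdx)


# Density estimation via the change-of-variables formula:
#   log p_X(x) = log p_Z(f(x)) + log |df/dx(x)|
flow = ToyKernelFlow()
x = torch.randn(8)
z, log_det = flow(x)
log_px = torch.distributions.Normal(0.0, 1.0).log_prob(z) + log_det
```

Because every term of the map is increasing in x, the per-point Jacobian is a single positive scalar and the log-determinant reduces to log(dz/dx); a multivariate kernelised flow would instead need a structured (e.g. triangular) Jacobian to keep this computation tractable.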
