Fixed Integral Neural Networks
Abstract: It is often useful to integrate over learned functions represented by neural networks. However, this integration is usually performed numerically, as analytical integration over learned functions (especially neural networks) is generally viewed as intractable. In this work, we present a method for representing the analytical integral of a learned function $f$. This allows the exact integral of a neural network to be computed, and enables constrained neural networks to be parametrised by applying constraints directly to the integral. Crucially, we also introduce a method to constrain $f$ to be positive, a necessary condition for many applications (e.g. probability distributions or distance metrics). Finally, we introduce several applications where our fixed-integral neural network (FINN) can be utilised.
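The core idea can be illustrated with a minimal sketch (this is an illustrative construction, not necessarily the paper's exact parametrisation): represent the antiderivative $F$ directly as a small monotone network, so that $f = F'$ is positive by construction and $\int_a^b f(x)\,dx = F(b) - F(a)$ is exact, with no numerical quadrature needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-layer parametrisation of the antiderivative F.
# With W >= 0 and A >= 0, F is monotone increasing, so f = F' >= 0.
W = np.abs(rng.normal(size=8))   # positive outer weights
A = np.abs(rng.normal(size=8))   # positive inner slopes
B = rng.normal(size=8)           # unconstrained biases

def softplus(x):
    # numerically stable log(1 + exp(x))
    return np.logaddexp(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def F(x):
    # learned antiderivative: F(x) = sum_i W_i * softplus(A_i x + B_i)
    return float(np.sum(W * softplus(A * x + B)))

def f(x):
    # f = dF/dx in closed form; positive because W, A, sigmoid > 0
    return float(np.sum(W * A * sigmoid(A * x + B)))

# Exact integral of f over [-1, 2] via the antiderivative:
exact = F(2.0) - F(-1.0)

# Sanity check against trapezoidal quadrature of f:
xs = np.linspace(-1.0, 2.0, 10001)
ys = np.array([f(x) for x in xs])
numeric = float(np.sum((ys[:-1] + ys[1:]) / 2) * (xs[1] - xs[0]))
```

In this sketch the integral comes for free as two forward passes of $F$, and any constraint on the integral (e.g. fixing $F(b) - F(a) = 1$ for a probability density) can be imposed directly on the antiderivative's parameters.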