KAM Theory Meets Statistical Learning Theory: Hamiltonian Neural Networks with Non-Zero Training Loss

Published 22 Feb 2021 in math.DS, cs.LG, math-ph, and math.MP (arXiv:2102.11923v2)

Abstract: Many physical phenomena are described by Hamiltonian mechanics using an energy function (the Hamiltonian). Recently, the Hamiltonian neural network, which approximates the Hamiltonian as a neural network, and its extensions have attracted much attention. Although the method is powerful, its use in theoretical studies remains limited. In this study, by combining statistical learning theory and Kolmogorov-Arnold-Moser (KAM) theory, we provide a theoretical analysis of the behavior of Hamiltonian neural networks when the learning error is not exactly zero. A Hamiltonian neural network with non-zero error can be regarded as a perturbation of the true dynamics, and the perturbation theory of Hamiltonian systems is well known as KAM theory. To apply KAM theory, we derive a generalization error bound for Hamiltonian neural networks by estimating the covering number of the gradient of the multi-layer perceptron, which is the key ingredient of the model. This error bound yields the $L^\infty$ bound on the Hamiltonian that is required to apply KAM theory.
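As a concrete illustration (not code from the paper), the sketch below shows in JAX what a Hamiltonian neural network of the kind the abstract describes looks like: an MLP $H_\theta(q, p)$ is trained so that its symplectic gradient $(\partial H_\theta/\partial p, -\partial H_\theta/\partial q)$ matches observed time derivatives of the state. All function names, layer sizes, and hyperparameters are illustrative assumptions.

```python
# Minimal Hamiltonian neural network sketch in JAX (assumed implementation;
# the paper does not prescribe this code). The residual value of `loss`
# after training is the non-zero error that the paper treats as a
# perturbation of the true dynamics in the KAM-theoretic analysis.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    """Initialize MLP weights; `sizes` is e.g. [2, 64, 64, 1]."""
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (m, n)) * jnp.sqrt(2.0 / m)
        params.append((w, jnp.zeros(n)))
    return params

def hamiltonian(params, x):
    """MLP estimate of H at state x = (q, p), returned as a scalar."""
    h = x
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return (h @ w + b).squeeze()

def predicted_derivatives(params, x):
    """Symplectic gradient: dq/dt = dH/dp, dp/dt = -dH/dq (one degree of freedom)."""
    dH = jax.grad(hamiltonian, argnums=1)(params, x)
    return jnp.stack([dH[1], -dH[0]])

def loss(params, xs, dxs):
    """Mean squared error between predicted and observed time derivatives."""
    preds = jax.vmap(lambda x: predicted_derivatives(params, x))(xs)
    return jnp.mean(jnp.sum((preds - dxs) ** 2, axis=-1))

# Toy usage: harmonic oscillator H = (q^2 + p^2) / 2, so dq/dt = p, dp/dt = -q.
key = jax.random.PRNGKey(0)
xs = jax.random.normal(key, (128, 2))            # observed states (q, p)
dxs = jnp.stack([xs[:, 1], -xs[:, 0]], axis=-1)  # true time derivatives
params = init_mlp(key, [2, 64, 64, 1])
grads = jax.grad(loss)(params, xs, dxs)          # one gradient-descent step
params = jax.tree_util.tree_map(lambda p, g: p - 1e-2 * g, params, grads)
```

The smaller the trained loss, the smaller the perturbation of the learned flow from the true one, which is precisely the regime in which KAM theory can guarantee the persistence of invariant tori.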

Citations (4)
