Computing Lyapunov functions using deep neural networks
Abstract: We propose a deep neural network architecture and a training algorithm for computing approximate Lyapunov functions of systems of nonlinear ordinary differential equations. Under the assumption that the system admits a compositional Lyapunov function, we prove that the number of neurons needed for an approximation of a Lyapunov function with fixed accuracy grows only polynomially in the state dimension, i.e., the proposed approach is able to overcome the curse of dimensionality. We show that nonlinear systems satisfying a small-gain condition admit compositional Lyapunov functions. Numerical examples in up to ten space dimensions illustrate the performance of the training scheme.
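The core idea of such training schemes is to enforce the two Lyapunov conditions, positive definiteness of V and negativity of its orbital derivative along f, as penalty terms over sampled states. Below is a minimal sketch of that generic idea in PyTorch; the two-dimensional example system, the plain feed-forward network (not the paper's compositional architecture), the loss weights, and the sampling box are all illustrative assumptions, not the authors' exact method.

```python
import torch

# Hypothetical 2-D nonlinear system x' = f(x), chosen only for illustration
# (not an example from the paper).
def f(x):
    x1, x2 = x[:, 0], x[:, 1]
    return torch.stack([-x1 + x1 * x2, -x2], dim=1)

# Plain feed-forward candidate for the Lyapunov function V(x).
V = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

opt = torch.optim.Adam(V.parameters(), lr=1e-3)

for step in range(5000):
    # Sample training states from a box around the equilibrium at the origin.
    x = 4.0 * torch.rand(256, 2) - 2.0
    x.requires_grad_(True)

    v = V(x)
    # Gradient of V at each sample, used for the orbital derivative DV(x) f(x).
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    dVdt = (grad_v * f(x)).sum(dim=1, keepdim=True)

    r2 = (x ** 2).sum(dim=1, keepdim=True)
    # Penalize violations of V(x) >= 0.1|x|^2 and DV(x) f(x) <= -0.1|x|^2
    # (the margins 0.1 are assumed, not from the paper).
    loss = (torch.relu(0.1 * r2 - v) + torch.relu(dVdt + 0.1 * r2)).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()
```

A trained V with near-zero loss satisfies the Lyapunov inequalities approximately on the sampled region; the paper's contribution is that, under the compositional/small-gain structure, a suitably structured network achieves this with only polynomially many neurons in the state dimension.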