Multisymplectic Formulation of Deep Learning Using Mean-Field Type Control and Nonlinear Stability of Training Algorithm (2207.12242v1)

Published 7 Jul 2022 in cs.LG and math.DS

Abstract: As it stands, a robust mathematical framework for analysing and studying various topics in deep learning has yet to come to the fore. Nonetheless, viewing deep learning as a dynamical system allows the use of established theories to investigate the behaviour of deep neural networks. In order to study the stability of the training process, this article formulates the training of deep neural networks as a hydrodynamic system, which has a multisymplectic structure. To that end, the deep neural network is modelled as a stochastic differential equation, and mean-field type control is used to train it. The necessary conditions for optimality of the mean-field type control reduce to a system of Euler-Poincaré equations, which has a geometric structure similar to that of compressible fluids. The mean-field type control is solved numerically using a multisymplectic numerical scheme that takes advantage of the underlying geometry. Moreover, the numerical scheme yields an approximate solution that is also an exact solution of a hydrodynamic system with a multisymplectic structure, and it can therefore be analysed using backward error analysis. Further, nonlinear stability yields the conditions for selecting the number of hidden layers and the number of nodes per layer that make the training stable while approximating the solution of a residual neural network whose number of hidden layers approaches infinity.
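The dynamical-systems viewpoint the abstract builds on identifies a residual network with a discretised ordinary differential equation: stacking residual blocks corresponds to forward-Euler steps, and the infinite-depth limit gives the continuous-time model that mean-field type control acts on. A minimal sketch of that correspondence is below; the `tanh` activation, function names, and random weights are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def residual_block(x, W, b, h):
    """One residual layer: x_{k+1} = x_k + h * tanh(W x_k + b).

    Read as a forward-Euler step of the ODE dx/dt = tanh(W x + b)
    with step size h. (Illustrative choice of activation.)
    """
    return x + h * np.tanh(W @ x + b)

def forward(x0, weights, biases, h):
    # Stacking L residual blocks approximates integrating the ODE
    # over t in [0, L*h]; letting L -> infinity with h -> 0 recovers
    # the continuous-depth model studied in the paper.
    x = x0
    for W, b in zip(weights, biases):
        x = residual_block(x, W, b, h)
    return x

rng = np.random.default_rng(0)
d, L = 4, 16          # state dimension and number of hidden layers
h = 1.0 / L           # step size shrinks as depth grows
weights = [rng.normal(scale=0.5, size=(d, d)) for _ in range(L)]
biases = [rng.normal(scale=0.1, size=d) for _ in range(L)]
x0 = rng.normal(size=d)
xT = forward(x0, weights, biases, h)
print(xT.shape)
```

In the paper's setting the weights themselves become the control variables of a stochastic version of this flow, and the stability question becomes how to choose `L` (depth) and `d` (width) so the discretisation remains well-behaved.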
