
One-Step Early Stopping Strategy using Neural Tangent Kernel Theory and Rademacher Complexity

Published 27 Nov 2024 in cs.LG, cs.SY, and eess.SY | arXiv:2411.18806v1

Abstract: The early stopping strategy consists of halting the training of a neural network (NN) on a set $S$ of input data before the training error reaches its minimum. The advantage is that the NN then retains good generalization properties, i.e. it makes good predictions on data outside $S$, and a good estimate of the statistical error (the "population loss") is obtained. We give an analytical estimate of the optimal stopping time that involves, essentially, the initial training-error vector and the eigenvalues of the "neural tangent kernel". This yields an upper bound on the population loss which is well suited to the underparameterized regime (where the number of parameters is moderate compared with the number of data points). Our method is illustrated on the example of an NN simulating the MPC control of a Van der Pol oscillator.
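To make the idea concrete, here is a minimal numerical sketch of how an NTK-based stopping time could be selected. It assumes gradient-flow (lazy-training) dynamics, under which the training residual along the $i$-th NTK eigendirection decays as $e^{-\lambda_i t}$ times the projection of the initial error onto that eigenvector. The specific bound minimized below (decayed training error plus a $c\sqrt{t}$ complexity term), the constant `c`, and the function name are illustrative placeholders, not the paper's actual expressions.

```python
import numpy as np

def ntk_early_stopping_sketch(K, r0, times, c=1.0):
    """Illustrative sketch: choose a stopping time from the NTK
    eigenvalues and the initial training-error vector r0.

    Assumes gradient-flow dynamics dr/dt = -K r, so the residual in
    the i-th eigendirection decays as exp(-lam_i * t) * a_i, where
    a_i is the projection of r0 onto the i-th eigenvector.  The
    bias-plus-complexity bound below is a simplified stand-in for
    the paper's bound.
    """
    lam, V = np.linalg.eigh(K)   # eigenvalues / eigenvectors of the NTK
    a = V.T @ r0                 # initial residual in the NTK eigenbasis
    best_t, best_val = None, np.inf
    for t in times:
        # Remaining (squared) training error after time t of gradient flow.
        train_term = np.sum(np.exp(-2.0 * lam * t) * a**2)
        # Placeholder Rademacher-type complexity term, growing with t.
        complexity = c * np.sqrt(t)
        val = train_term + complexity
        if val < best_val:
            best_t, best_val = t, val
    return best_t, best_val

# Tiny usage example on a random symmetric PSD "NTK".
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20))
K = A @ A.T / 20.0
r0 = rng.standard_normal(20)
t_star, bound = ntk_early_stopping_sketch(K, r0, np.linspace(0.01, 50.0, 500))
print(t_star, bound)
```

The one-step character of the strategy shows up in the fact that everything needed (the NTK eigendecomposition and the initial error projections) is available at initialization, so the stopping time is computed once rather than monitored during training.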
