Recurrent Neural Network Training with Convex Loss and Regularization Functions by Extended Kalman Filtering (2111.02673v3)
Published 4 Nov 2021 in cs.LG, cs.SY, eess.SY, and math.OC
Abstract: This paper investigates the use of extended Kalman filtering to train recurrent neural networks with rather general convex loss functions and regularization terms on the network parameters, including $\ell_1$-regularization. We show that the learning method is competitive with stochastic gradient descent in a nonlinear system identification benchmark and in training a linear system with binary outputs. We also explore the use of the algorithm in data-driven nonlinear model predictive control and its relation to disturbance models for offset-free closed-loop tracking.
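The core idea can be illustrated with a minimal sketch of EKF-based parameter learning: the model parameters are treated as the state of a random-walk process, and each training sample drives a Kalman measurement update. The toy model, noise levels, and variable names below are assumptions for illustration, not the paper's actual algorithm (which additionally handles general convex losses and $\ell_1$-regularization).

```python
import numpy as np

def ekf_step(theta, P, x, y, h, jac, Q, R):
    """One EKF update treating parameters `theta` as a random-walk state."""
    P = P + Q                     # time update: parameters drift slowly
    H = jac(theta, x)             # (1, n) Jacobian of the output w.r.t. theta
    S = H @ P @ H.T + R           # innovation covariance (scalar output)
    K = P @ H.T / S               # Kalman gain
    theta = theta + (K * (y - h(theta, x))).ravel()  # measurement update
    P = P - K @ H @ P             # covariance update
    return theta, P

# Hypothetical example: recover y = a*x + b from noisy streaming data.
rng = np.random.default_rng(0)
a_true, b_true = 2.0, -1.0
h = lambda th, x: th[0] * x + th[1]
jac = lambda th, x: np.array([[x, 1.0]])

theta = np.zeros(2)               # initial parameter estimate
P = np.eye(2)                     # initial parameter covariance
Q = 1e-6 * np.eye(2)              # process noise (parameter drift)
R = 0.01                          # measurement noise variance
for _ in range(500):
    x = rng.uniform(-1.0, 1.0)
    y = a_true * x + b_true + 0.05 * rng.normal()
    theta, P = ekf_step(theta, P, x, y, h, jac, Q, R)
```

For a recurrent network, `h` would be the network's output and the Jacobian would be obtained by differentiating through the unrolled dynamics; the quadratic-loss update shown here is the classical EKF special case the paper generalizes.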