Contracting Implicit Recurrent Neural Networks: Stable Models with Improved Trainability (1912.10402v1)
Published 22 Dec 2019 in cs.LG, cs.SY, eess.SY, math.OC, and stat.ML
Abstract: Stability of recurrent models is closely linked with trainability, generalizability, and, in some applications, safety. Methods that train stable recurrent neural networks, however, do so at a significant cost to expressibility. We propose an implicit model structure that allows for a convex parametrization of stable models using contraction analysis of non-linear systems. Using these stability conditions, we propose a new approach to model initialization and then provide a number of empirical results comparing the performance of our proposed model set to previous stable RNNs and vanilla RNNs. By carefully controlling stability in the model, we observe a significant increase in the speed of training and in model performance.
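The abstract's central object is an implicit recurrent model: the next hidden state is defined by an equation it must satisfy rather than by an explicit update, and contraction guarantees that this equation has a unique, stably reachable solution. The sketch below is not the paper's parametrization; the matrix names (D, W, B), the fixed-point solver, and the simple contraction condition ||D|| < 1 are illustrative assumptions meant only to show the general shape of such a model.

```python
import numpy as np

def implicit_rnn_step(x, u, D, W, B, b, n_iters=50, tol=1e-6):
    """One step of a generic implicit RNN (illustrative only).

    The next hidden state z solves the implicit equation
        z = tanh(D z + W x + B u + b),
    computed here by fixed-point iteration. When the map is a
    contraction (e.g. ||D|| < 1, since tanh is 1-Lipschitz), the
    iteration converges to a unique solution from any warm start.
    """
    z = x.copy()  # warm-start from the previous hidden state
    for _ in range(n_iters):
        z_new = np.tanh(D @ z + W @ x + B @ u + b)
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z

# Toy usage: roll the implicit cell over a random input sequence.
rng = np.random.default_rng(0)
n, m, T = 8, 3, 20
D = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)  # small norm -> contraction
W = rng.standard_normal((n, n)) / np.sqrt(n)
B = rng.standard_normal((n, m)) / np.sqrt(m)
b = np.zeros(n)

x = np.zeros(n)
for t in range(T):
    x = implicit_rnn_step(x, rng.standard_normal(m), D, W, B, b)
print("final hidden state norm:", np.linalg.norm(x))
```

The paper's contribution, by contrast, is a convex set of parameter constraints under which contraction of the implicit model is certified, so that stability can be enforced during training without the expressibility loss incurred by earlier stable-RNN constructions.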