
Scenario optimization for optimal training of Echo State Networks (1912.01693v1)

Published 3 Dec 2019 in eess.SY and cs.SY

Abstract: Echo State Networks (ESNs) are widely used Recurrent Neural Networks. In state-space form, they are dynamical systems comprising a nonlinear state equation and a linear output transformation. The common procedure to train ESNs is to randomly select the parameters of the state equation, and then to estimate those of the output equation via a standard least squares problem. This procedure is repeated for different instances of the random parameters characterizing the state equation until satisfactory results are achieved. However, such trial and error is not systematic and provides no guarantee about the optimality of the identification results. To address this problem, we propose to complement the identification procedure of ESNs with results from scenario optimization. The resulting training procedure is theoretically sound and precisely links the number of identification instances to a guaranteed optimality bound on relevant performance indexes, such as the Root Mean Square error and the FIT index of the estimated model evaluated over a validation dataset. The proposed procedure is then applied to the simulated model of a pH neutralization process: the obtained results confirm the validity of the approach.
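The training procedure the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's code: the reservoir sizes, the toy "plant" data, and all helper names are illustrative assumptions, and the scenario-optimization bound itself (which relates the number of draws N to a probabilistic optimality guarantee) is only noted in a comment.

```python
# Hedged sketch: train an ESN by drawing random state-equation parameters,
# fitting the linear readout by least squares, and keeping the best of N
# random instances ("scenarios"). Sizes and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_states, n_inputs, spectral_radius=0.9):
    """Randomly draw the (fixed) parameters of the nonlinear state equation."""
    W = rng.standard_normal((n_states, n_states))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))  # echo-state scaling
    W_in = rng.uniform(-1, 1, (n_states, n_inputs))
    return W, W_in

def run_states(W, W_in, u):
    """Nonlinear state update x(k+1) = tanh(W x(k) + W_in u(k))."""
    x = np.zeros(W.shape[0])
    states = []
    for u_k in u:
        x = np.tanh(W @ x + W_in @ u_k)
        states.append(x)
    return np.array(states)

def rmse(y, y_hat):
    return np.sqrt(np.mean((y - y_hat) ** 2))

# Toy identification data standing in for the pH process (scalar in/out).
T = 300
u = rng.uniform(-1, 1, (T, 1))
y = np.convolve(u[:, 0], [0.5, 0.3, 0.1], mode="same")
u_tr, y_tr, u_va, y_va = u[:200], y[:200], u[200:], y[200:]

# Scenario loop: the paper's result ties N to a guaranteed optimality
# bound on the best validation RMSE; here N is just a fixed sample size.
N = 20
best = None
for _ in range(N):
    W, W_in = make_reservoir(50, 1)
    X_tr = run_states(W, W_in, u_tr)
    theta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)  # linear readout
    e_va = rmse(y_va, run_states(W, W_in, u_va) @ theta)
    if best is None or e_va < best[0]:
        best = (e_va, W, W_in, theta)

print(f"best validation RMSE over {N} scenarios: {best[0]:.4f}")
```

The key point of the scenario framework is that the minimum validation error over the N sampled reservoirs is itself a random variable, and the theory bounds the probability that a fresh random reservoir would do better than the one retained.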

Citations (2)
