Transferability of ESN statistical prediction across parameter regimes

Establish whether an echo state network trained on trajectories from a flow Φ_γ1 on a metric space M (where for each parameter γ the flow Φ_γ admits an attractor supporting an ergodic invariant measure) can, after applying transfer learning with only a small additional training dataset from a different parameter value γ2, successfully predict the statistical properties of the observable process (f ∘ Φ_γ2^t)_{t≥0} for a given observable f: M → R^n.

Background

The paper studies learning and predicting statistical properties of parameterized dynamical systems using echo state networks (ESNs). For flows Φ_γ on a metric space M that admit attractors with ergodic invariant measures μ_γ, the goal is to predict statistical properties of observable processes (f ∘ Φ_γ^t)_{t≥0}.

Training an ESN at one parameter value (γ1) generally does not guarantee predictive power at a different parameter (γ2) because invariant measures μ_γ may vary sensitively with parameters. The authors propose using transfer learning (TL) to adapt a trained ESN from γ1 to γ2 with only a small additional dataset from the γ2-regime.
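The transfer-learning step can be illustrated with a minimal numerical sketch. This is not the authors' implementation; it assumes a standard leaky-free tanh reservoir with a linear ridge-regression readout, and it uses toy sine signals as stand-ins for the observable processes at γ1 and γ2. The "transfer" consists of keeping the random reservoir fixed and refitting only the readout on a short trajectory from the new regime.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res=200, rho=0.9, sigma=0.5):
    """Random reservoir with spectral radius scaled to rho
    (a common heuristic for the echo state property)."""
    W = rng.normal(size=(n_res, n_res))
    W *= rho / max(abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-sigma, sigma, size=(n_res, n_in))
    return W, W_in

def run_reservoir(W, W_in, inputs):
    """Drive the reservoir with an input sequence; collect states."""
    r = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        r = np.tanh(W @ r + W_in @ u)
        states.append(r.copy())
    return np.array(states)

def fit_readout(states, targets, reg=1e-6):
    """Ridge-regression readout: solve (S^T S + reg I) W_out^T = S^T Y."""
    S, Y = states, targets
    return np.linalg.solve(S.T @ S + reg * np.eye(S.shape[1]), S.T @ Y).T

# Train at gamma_1 on a long trajectory (a toy observable standing in
# for f ∘ Φ_{γ1}^t; the real experiments use PDE data).
t = np.linspace(0.0, 100.0, 5000)
u1 = np.sin(t)[:, None]
W, W_in = make_reservoir(n_in=1)
S1 = run_reservoir(W, W_in, u1[:-1])
W_out = fit_readout(S1, u1[1:])          # one-step-ahead readout at γ1

# Transfer learning: reservoir stays fixed; only the readout is refit
# on a much smaller dataset from the γ2 regime.
u2 = np.sin(1.3 * t[:500])[:, None]      # short trajectory, new parameter
S2 = run_reservoir(W, W_in, u2[:-1])
W_out_tl = fit_readout(S2, u2[1:])
```

Because only the linear readout is re-estimated, the γ2 update is a single least-squares solve on a short time series, which is what makes the "small additional dataset" requirement plausible in practice.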

They numerically demonstrate success of this approach for the generalized Kuramoto–Sivashinsky equation across changes in domain size L and dispersion parameter γ, but they present the broader claim as a conjecture, leaving theoretical validation open.

References

We conjecture that the predictive capability of the echo state network can be `transferred' from the $\gamma_{1}$-system to the $\gamma_{2}$-system using transfer learning. That is, the echo state network will successfully predict statistical properties of $(f \circ \Phi_{\gamma_{2}}^{t})_{t \geqslant 0}$ after we update its training using a small new dataset from the new regime.