
LB-NGRC for Dynamical Forecasting

Updated 4 April 2026
  • LB-NGRC is a machine learning framework that partitions phase space into overlapping regions and fits localized low-degree polynomial predictors.
  • It employs smooth RBF blending to combine local models, ensuring continuous, interpretable predictions across complex, chaotic systems.
  • Demonstrated on the chaotic Ikeda map, LB-NGRC achieves longer forecast horizons and superior long-term invariant measure reproduction compared to global approaches.

Locality-blended Next-Generation Reservoir Computing (LB-NGRC) is a machine learning architecture that extends the standard next-generation reservoir computing (NGRC) framework to improve the forecasting of complex dynamical systems, particularly those with highly nonlinear dynamics and challenging phase-space geometries. LB-NGRC achieves state-of-the-art forecasting and long-term statistical accuracy by partitioning phase space into overlapping regions, fitting localized low-degree polynomial predictors, and blending these predictors smoothly using radial-basis-function (RBF) weighting. This approach enables enhanced interpretability and better performance on smaller datasets, as demonstrated on the chaotic Ikeda map (Gauthier et al., 30 Mar 2025).

1. Foundation: Next-Generation Reservoir Computing (NGRC)

NGRC is designed for forecasting discrete-time dynamical systems from time series: $\mathbf{x}_{n+1} = \Phi(\mathbf{x}_n, \mathbf{x}_{n-1}, \dots)$, $\mathbf{x}_n \in \mathbb{R}^d$. The objective is to construct a linear-in-parameters approximation $\mathbf{x}_{n+1} \approx \hat{\Phi}(\mathbf{x}_n, \mathbf{x}_{n-1}, \dots) = \mathcal{O}_n \mathbf{W}$, where $\mathcal{O}_n$ is a feature vector formed from monomials of the input and $\mathbf{W}$ is a trainable matrix learned by ridge regression: $\mathbf{W} = (\mathbf{O}^T \mathbf{O} + \alpha \mathbf{I})^{-1} \mathbf{O}^T \mathbf{Y}$. Standard polynomial NGRCs select a total degree $N$ and use all monomials up to degree $N$, so the regression parameters admit a direct analytic solution. In deployment, model outputs are fed back recursively as the next-step inputs.
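The training pipeline above can be sketched in a few lines of NumPy; the `poly_features` helper and the toy `tanh` target dynamics are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree):
    """All monomials of the state coordinates up to total degree
    `degree`, constant term first (one row of features per sample)."""
    n, d = X.shape
    cols = [np.ones(n)]
    for deg in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), deg):
            cols.append(np.prod(X[:, idx], axis=1))
    return np.stack(cols, axis=1)

def fit_ridge(O, Y, alpha):
    """Closed-form ridge regression W = (O^T O + alpha I)^{-1} O^T Y."""
    k = O.shape[1]
    return np.linalg.solve(O.T @ O + alpha * np.eye(k), O.T @ Y)

# One-step training: features from x_n, targets x_{n+1}.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
Y = np.tanh(X)                      # stand-in for the true next-step map
W = fit_ridge(poly_features(X, 3), Y, alpha=1e-6)
x_next = poly_features(X[:1], 3) @ W  # in deployment, x_next becomes the new input
```

In deployment this last line is iterated: each prediction is fed back through `poly_features` to produce the next forecast step.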

2. Phase Space Partitioning via Ball-Tree Hierarchy

LB-NGRC diverges from the global NGRC model by partitioning the attractor's phase space into $B$ overlapping regions, denoted as balls $\{\mathcal{B}_b\}$, using a hierarchical ball-tree clustering algorithm. At each recursion level, parent balls are split to maximize center separation and minimize radius, resulting in $B = 2^s$ regions after $s$ levels. The partitioning employs the Minkowski $p$-norm (typically $p = 2$), and although each training sample resides in a unique ball, the balls themselves overlap in phase space.

This hierarchical organization adapts model complexity to the underlying geometry of the phase space, enabling the use of simpler models within each localized region.
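A minimal sketch of such a recursive split, assuming Euclidean distance ($p = 2$) and a simple farthest-point seeding heuristic; the paper's exact splitting criterion may differ:

```python
import numpy as np

def build_ball_tree(points, depth):
    """Recursively split the data into 2^depth balls.
    Leaves record each ball's center and covering radius."""
    if depth == 0:
        center = points.mean(axis=0)
        radius = np.linalg.norm(points - center, axis=1).max()
        return [(center, radius, points)]
    # Seed with two well-separated points, then assign by nearest seed.
    a = points[np.argmax(np.linalg.norm(points - points.mean(axis=0), axis=1))]
    b = points[np.argmax(np.linalg.norm(points - a, axis=1))]
    left = np.linalg.norm(points - a, axis=1) <= np.linalg.norm(points - b, axis=1)
    return (build_ball_tree(points[left], depth - 1)
            + build_ball_tree(points[~left], depth - 1))

rng = np.random.default_rng(1)
balls = build_ball_tree(rng.normal(size=(256, 2)), depth=3)  # B = 2^3 = 8 regions
```

Each training sample lands in exactly one leaf, while the recorded radii let neighboring balls overlap in phase space, matching the partitioning described above.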

3. Local Polynomial Predictors and Training Protocol

Within each ball $\mathcal{B}_b$ (center $\mathbf{c}_b$, radius $r_b$), inputs are translated to local coordinates, $\tilde{\mathbf{x}} = \mathbf{x} - \mathbf{c}_b$. Local feature vectors are constructed from monomials in these shifted coordinates up to degree $N$; for $N = 2$, for example, the features comprise the constant, all linear, and all quadratic monomials of $\tilde{\mathbf{x}}$. Each region is trained independently via ridge regression using only the data falling within that ball. The resulting local polynomial predictor, $\hat{\Phi}_b$, is most accurate near $\mathbf{c}_b$.

Training steps are as follows:

  • Construct the ball tree up to depth $s$.
  • Extract local training sets for each ball, shift coordinates, generate feature/response matrices, and train by ridge regression.
  • Select hyperparameters (polynomial degree $N$, depth $s$, ridge parameter $\alpha$, RBF width ratio $\gamma$) by cross-validation to minimize normalized root-mean-square error (NRMSE).
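The per-ball training loop can be sketched as follows; the tuple layout and helper names are illustrative assumptions, not the paper's code:

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree):
    """Monomials of each row of X up to total degree `degree`, constant first."""
    n, d = X.shape
    cols = [np.ones(n)]
    for deg in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), deg):
            cols.append(np.prod(X[:, idx], axis=1))
    return np.stack(cols, axis=1)

def train_local_models(balls, degree, alpha):
    """One independent ridge fit per ball, in coordinates shifted to the
    ball center. `balls` holds (center, radius, X_local, Y_local) tuples."""
    models = []
    for center, radius, X, Y in balls:
        O = poly_features(X - center, degree)          # local coordinates
        k = O.shape[1]
        W = np.linalg.solve(O.T @ O + alpha * np.eye(k), O.T @ Y)
        models.append((center, radius, W))
    return models
```

Because each fit sees only the samples in its own ball, training cost scales with the local data size, and a low degree often suffices within each region.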

4. Smooth Blending and RBF Attention

To create a globally valid predictor, LB-NGRC blends the $B$ local predictors via convex combination: $\hat{\Phi}(\mathbf{x}) = \sum_{b=1}^{B} w_b(\mathbf{x}) \, \hat{\Phi}_b(\mathbf{x})$, with normalized local RBF weights $w_b(\mathbf{x}) = \exp\!\big(-\|\mathbf{x} - \mathbf{c}_b\|^2 / 2\sigma_b^2\big) \big/ \sum_{b'} \exp\!\big(-\|\mathbf{x} - \mathbf{c}_{b'}\|^2 / 2\sigma_{b'}^2\big)$, where the width $\sigma_b$ scales with the ball radius $r_b$. This mechanism ensures smooth transitions and avoids prediction discontinuities, as the influence of each local model decays smoothly with distance from its center.

The weights $w_b(\mathbf{x})$ naturally yield an attention map over phase space, revealing the regions that most influence forecasts at each point.
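Assuming Gaussian RBF weights with width $\sigma_b = \gamma r_b$ (one plausible reading of the width-to-radius ratio), the blended forecast might look like the following sketch:

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(x, degree):
    """Monomials of a single point x up to total degree `degree`, constant first."""
    cols = [1.0]
    for deg in range(1, degree + 1):
        for idx in combinations_with_replacement(range(len(x)), deg):
            cols.append(np.prod(x[list(idx)]))
    return np.array(cols)

def blended_predict(x, models, degree, gamma=0.5):
    """Convex combination of local polynomial predictors.
    Each model is (center, radius, W); sigma_b = gamma * r_b is the
    assumed RBF width (gamma is the width-to-radius ratio)."""
    logits = np.array([-np.sum((x - c) ** 2) / (2 * (gamma * r) ** 2)
                       for c, r, _ in models])
    w = np.exp(logits - logits.max())
    w /= w.sum()                              # normalized weights sum to 1
    preds = np.stack([poly_features(x - c, degree) @ W for c, _, W in models])
    return w @ preds                          # smooth blend of local outputs
```

Subtracting `logits.max()` before exponentiating is a standard numerical-stability trick for the softmax-like normalization; the returned weights `w` are exactly the attention map discussed above.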

5. Hyperparameters and Implementation Considerations

Key hyperparameters include:

  • $N$: degree of the local polynomials ($N = 2$ or $N = 3$ typically suffices)
  • $s$: ball-tree depth, yielding $B = 2^s$ balls
  • $\alpha$: ridge regularization parameter, selected by grid search (with different optima for the quadratic and cubic cases)
  • $\gamma$: width-to-radius ratio for the RBF blending kernels (tuned separately for the quadratic and cubic cases)
  • the number of training points (a trajectory sampled from the Ikeda attractor)

For the Ikeda map, no time-delay embedding is needed, as the model is trained directly on the two-dimensional state $\mathbf{x}_n = (x_n, y_n)$.
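For reference, the standard Ikeda map (with the common chaotic parameter $u = 0.9$) can generate the training trajectory; the initial condition and transient length here are arbitrary choices:

```python
import numpy as np

def ikeda_step(x, y, u=0.9):
    """One iteration of the Ikeda map; u = 0.9 is a standard chaotic setting."""
    t = 0.4 - 6.0 / (1.0 + x * x + y * y)
    return (1.0 + u * (x * np.cos(t) - y * np.sin(t)),
            u * (x * np.sin(t) + y * np.cos(t)))

def ikeda_trajectory(n, x0=0.1, y0=0.1, transient=100):
    """Generate n points on the attractor, discarding an initial transient."""
    x, y = x0, y0
    pts = []
    for i in range(transient + n):
        x, y = ikeda_step(x, y)
        if i >= transient:
            pts.append((x, y))
    return np.array(pts)

data = ikeda_trajectory(1000)
# Training pairs: inputs data[:-1] (x_n), targets data[1:] (x_{n+1}).
```
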

6. Empirical Performance: Prediction and Climate Accuracy

Performance is evaluated by the normalized forecasting error: $\mathrm{NRMSE}_n = \|\hat{\mathbf{x}}_n - \mathbf{x}_n\| \big/ \sqrt{\big\langle \|\mathbf{x} - \langle \mathbf{x} \rangle\|^2 \big\rangle}$, where $\langle \mathbf{x} \rangle$ is the mean of the attractor.
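A direct transcription of this error measure, assuming the normalization is the attractor's root-mean-square spread about its mean:

```python
import numpy as np

def nrmse(pred, true, attractor):
    """Forecast error normalized by the attractor's RMS spread about its mean."""
    mean = attractor.mean(axis=0)
    scale = np.sqrt(np.mean(np.sum((attractor - mean) ** 2, axis=1)))
    return np.linalg.norm(pred - true) / scale
```

With this convention, an NRMSE near 1 means the forecast is no better than predicting the attractor mean.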

  • Standard NGRC (global polynomial) yields large NRMSE almost instantly, with a forecast horizon much less than one Lyapunov time.
  • LB-NGRC with quadratic locals ($N = 2$) achieves forecast horizons of several Lyapunov times at low NRMSE.
  • LB-NGRC with cubic locals ($N = 3$) matches this horizon and better reproduces the long-term invariant measure ("climate") of the system, evidenced by accurate attractor statistics over many Lyapunov times.

The LB-NGRC approach is highly effective at forecasting difficult, nonpolynomial systems such as the Ikeda map and outperforms global polynomial NGRC both in short-term and long-term accuracy.

7. Interpretability and Analytical Insights

LB-NGRC provides substantial interpretability advantages over global approaches. Each local predictor $\hat{\Phi}_b$ is a low-degree polynomial in locally shifted coordinates, permitting:

  • Examination of dominant monomial terms and coefficients specific to regions.
  • Analysis of local Jacobians $\partial \hat{\Phi}_b / \partial \mathbf{x}$ to assess local contraction, expansion, or folding.
  • Identification of regions requiring increased polynomial degree for adequate modeling.
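For instance, if the local features are ordered with the constant term first and the linear terms next (an assumed convention), the Jacobian at a ball center can be read directly off the weight matrix:

```python
import numpy as np

def local_jacobian_at_center(W, d):
    """Jacobian of a local polynomial predictor evaluated at its ball center.
    Assumes features ordered [constant, linear terms, higher-order terms]:
    at x_tilde = 0 only the linear rows of W contribute, so J = W[1:1+d].T."""
    return W[1:1 + d].T

# Example: a local quadratic map on R^2 whose linear part is diag(0.9, -0.3).
W = np.zeros((6, 2))            # 6 features: 1, x1, x2, x1^2, x1*x2, x2^2
W[1, 0], W[2, 1] = 0.9, -0.3
J = local_jacobian_at_center(W, d=2)
eigvals = np.linalg.eigvals(J)  # all |eigenvalues| < 1 indicates local contraction
```
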

The soft blending weights $w_b(\mathbf{x})$ further act as an attention mechanism, indicating the local models most responsible for predictions at each state, a property useful for adaptive data collection or experimental design.

LB-NGRC maintains the computational and conceptual simplicity of globally trained NGRCs while leveraging (i) phase-space localization, (ii) region-appropriate low-order polynomials, and (iii) smooth RBF blending for superior forecasting and interpretable modeling of complex dynamical systems (Gauthier et al., 30 Mar 2025).
