LB-NGRC for Dynamical Forecasting
- LB-NGRC is a machine learning framework that partitions phase space into overlapping regions and fits localized low-degree polynomial predictors.
- It employs smooth RBF blending to combine local models, ensuring continuous, interpretable predictions across complex, chaotic systems.
- Demonstrated on the chaotic Ikeda map, LB-NGRC achieves longer forecast horizons and superior long-term invariant measure reproduction compared to global approaches.
Locality-blended Next-Generation Reservoir Computing (LB-NGRC) is a machine learning architecture that extends the standard next-generation reservoir computing (NGRC) framework to improve the forecasting of complex dynamical systems, particularly those with highly nonlinear dynamics and challenging phase-space geometries. LB-NGRC achieves state-of-the-art forecasting and long-term statistical accuracy by partitioning phase space into overlapping regions, fitting localized low-degree polynomial predictors, and blending these predictors smoothly using radial-basis-function (RBF) weighting. This approach enables enhanced interpretability and better performance on smaller datasets, as demonstrated on the chaotic Ikeda map (Gauthier et al., 30 Mar 2025).
1. Foundation: Next-Generation Reservoir Computing (NGRC)
NGRC is designed for forecasting discrete-time dynamical systems from a time series of states $\mathbf{x}_n$. The objective is to construct a linear-in-parameters approximation

$$\mathbf{x}_{n+1} \approx W\,\boldsymbol{\phi}(\mathbf{x}_n),$$

where $\boldsymbol{\phi}(\mathbf{x}_n)$ is a feature vector formed from monomials of the input, and $W$ is a trainable matrix learned by ridge regression:

$$W = Y\,\Phi^{\top}\left(\Phi\,\Phi^{\top} + \alpha I\right)^{-1},$$

with $\Phi$ the matrix of training features, $Y$ the matrix of training targets, and $\alpha$ the regularization strength. Standard polynomial NGRCs select a total degree $p$ and use all monomials up to degree $p$, enabling a direct analytic solution for the regression parameters. In deployment, model outputs recursively generate the next-step inputs.
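As a concrete illustration of this setup, the sketch below builds the monomial feature vector and solves the ridge problem with NumPy; the function names (poly_features, fit_ngrc, ngrc_step) and the default degree and regularization are illustrative choices, not the paper's code.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree):
    """Monomial features of the rows of X up to total degree `degree` (constant included)."""
    n, d = X.shape
    cols = [np.ones(n)]
    for deg in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), deg):
            cols.append(np.prod(X[:, idx], axis=1))
    return np.column_stack(cols)                     # shape (n_samples, n_features)

def fit_ngrc(X_now, X_next, degree=3, alpha=1e-6):
    """Global NGRC readout by ridge regression (rows are samples here, so this
    W is the transpose of the W in the text)."""
    Phi = poly_features(X_now, degree)
    G = Phi.T @ Phi + alpha * np.eye(Phi.shape[1])   # regularized normal equations
    return np.linalg.solve(G, Phi.T @ X_next)

def ngrc_step(W, x, degree=3):
    """One-step prediction; feed the output back in to forecast recursively."""
    return poly_features(x[None, :], degree)[0] @ W
```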
2. Phase Space Partitioning via Ball-Tree Hierarchy
LB-NGRC diverges from the global NGRC model by partitioning the attractor's phase space into overlapping regions, denoted as balls $B_k$, using a hierarchical ball-tree clustering algorithm. At each recursion level, parent balls are split to maximize center separation and minimize radius, yielding $K = 2^L$ regions after $L$ levels. The partitioning employs a Minkowski norm (typically the Euclidean 2-norm), and although each training sample resides in a unique ball, the balls themselves overlap in phase space.
This hierarchical organization adapts model complexity to the underlying geometry of the phase space, enabling the use of simpler models within each localized region.
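A minimal recursive splitting routine in this spirit is sketched below; the two-pole split heuristic and the names used are assumptions for illustration rather than the paper's exact ball-tree algorithm.

```python
import numpy as np

def split_ball(X, idx, depth, max_depth):
    """Recursively split the points X[idx] into 2**max_depth balls.

    Returns a list of (center, radius, idx) tuples; the split heuristic
    (two far-apart poles, assignment by proximity) is illustrative.
    """
    pts = X[idx]
    if depth == max_depth:
        center = pts.mean(axis=0)
        radius = np.linalg.norm(pts - center, axis=1).max()
        return [(center, radius, idx)]
    centroid = pts.mean(axis=0)
    a = pts[np.argmax(np.linalg.norm(pts - centroid, axis=1))]   # first pole
    b = pts[np.argmax(np.linalg.norm(pts - a, axis=1))]          # second pole
    to_a = np.linalg.norm(pts - a, axis=1) <= np.linalg.norm(pts - b, axis=1)
    return (split_ball(X, idx[to_a], depth + 1, max_depth)
            + split_ball(X, idx[~to_a], depth + 1, max_depth))

# Usage: balls = split_ball(X, np.arange(len(X)), depth=0, max_depth=4)
```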
3. Local Polynomial Predictors and Training Protocol
Within each ball $B_k$ (center $\mathbf{c}_k$, radius $r_k$), inputs are translated to local coordinates, $\tilde{\mathbf{x}} = \mathbf{x} - \mathbf{c}_k$. Local feature vectors $\boldsymbol{\phi}_k(\tilde{\mathbf{x}})$ are constructed from monomials in these shifted coordinates up to degree $p$. For $p = 2$ in two dimensions, for example,

$$\boldsymbol{\phi}_k(\tilde{\mathbf{x}}) = \left(1,\ \tilde{x},\ \tilde{y},\ \tilde{x}^2,\ \tilde{x}\tilde{y},\ \tilde{y}^2\right)^{\top}.$$

Each region is trained independently via ridge regression using only the data falling within that ball. The resulting local polynomial predictor, $W_k\,\boldsymbol{\phi}_k(\tilde{\mathbf{x}})$, is most accurate near $\mathbf{c}_k$.
Training steps are as follows:
- Construct the ball tree up to depth $L$.
- Extract local training sets for each ball, shift coordinates, generate feature/response matrices, and train each region by ridge regression (see the sketch after this list).
- Select hyperparameters (polynomial degree, tree depth, ridge parameter, and RBF blending width) by cross-validation to minimize normalized root-mean-square error (NRMSE).
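A hedged sketch of the per-ball training step, reusing poly_features from the global NGRC snippet and the (center, radius, idx) ball representation from the partitioning sketch:

```python
import numpy as np

def fit_local_models(balls, X_now, X_next, degree=2, alpha=1e-6):
    """Train one polynomial ridge readout per ball, in shifted coordinates.

    `balls` holds (center, radius, idx) tuples (e.g., from split_ball above);
    poly_features is the helper from the global-NGRC sketch.
    """
    models = []
    for center, radius, idx in balls:
        Phi = poly_features(X_now[idx] - center, degree)       # local coordinates
        G = Phi.T @ Phi + alpha * np.eye(Phi.shape[1])
        W_k = np.linalg.solve(G, Phi.T @ X_next[idx])          # local ridge solution
        models.append((center, radius, W_k))
    return models
```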
4. Smooth Blending and RBF Attention
To create a globally valid predictor, LB-NGRC blends the $K$ local predictors via convex combination:

$$\hat{\mathbf{x}}_{n+1} = \sum_{k=1}^{K} w_k(\mathbf{x}_n)\, W_k\,\boldsymbol{\phi}_k(\mathbf{x}_n - \mathbf{c}_k),$$

with local RBF weights

$$w_k(\mathbf{x}) = \frac{\exp\!\left(-\lVert \mathbf{x}-\mathbf{c}_k \rVert^2 / 2\sigma_k^2\right)}{\sum_{j=1}^{K} \exp\!\left(-\lVert \mathbf{x}-\mathbf{c}_j \rVert^2 / 2\sigma_j^2\right)}, \qquad \sigma_k \propto r_k.$$

This mechanism ensures smooth transitions and avoids prediction discontinuities, as the influence of each local model decays smoothly with distance from its center.
The weights $w_k(\mathbf{x})$ naturally yield an attention map over phase space, revealing the regions that most influence forecasts at each point.
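A sketch of the blended one-step predictor, assuming Gaussian RBFs whose widths are a fixed fraction beta of each ball's radius (beta stands in for the width-to-radius ratio discussed in the next section); it reuses poly_features and the (center, radius, W_k) model tuples from the earlier sketches.

```python
import numpy as np

def rbf_weights(x, models, beta=0.5):
    """Normalized Gaussian weights over local models; widths are beta * radius."""
    centers = np.array([c for c, r, W in models])
    sigmas = beta * np.array([r for c, r, W in models])
    logits = -np.sum((x - centers) ** 2, axis=1) / (2.0 * sigmas ** 2)
    w = np.exp(logits - logits.max())            # subtract max for numerical stability
    return w / w.sum()

def lb_ngrc_step(x, models, degree=2, beta=0.5):
    """One blended step: convex combination of the local polynomial predictions."""
    w = rbf_weights(x, models, beta)             # also readable as an attention map
    preds = np.array([poly_features((x - c)[None, :], degree)[0] @ W
                      for c, r, W in models])
    return w @ preds
```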
5. Hyperparameters and Implementation Considerations
Key hyperparameters include:
- $p$: degree of the local polynomials (low degrees, such as quadratic or cubic, typically suffice)
- $L$: ball-tree depth, yielding $K = 2^L$ balls
- $\alpha$: ridge regularization parameter, chosen by grid search (separately for the quadratic and cubic cases)
- $\beta$: width-to-radius ratio for the RBF blending, likewise tuned separately for the quadratic and cubic cases
- $N$: number of training points sampled from the Ikeda attractor
For the Ikeda map, no time-delay embedding is needed, as the model is trained directly on the two-dimensional state $(x_n, y_n)$.
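For reference, the Ikeda map itself can be iterated to generate the $(x_n, y_n)$ training series; the parameter $u = 0.9$ places the map in its commonly studied chaotic regime, and the trajectory length and initial condition below are illustrative rather than the paper's settings.

```python
import numpy as np

def ikeda_trajectory(n_steps, u=0.9, x0=0.1, y0=0.1, discard=1000):
    """Iterate the Ikeda map and return an (n_steps, 2) array of attractor points."""
    x, y = x0, y0
    out = np.empty((discard + n_steps, 2))
    for n in range(discard + n_steps):
        t = 0.4 - 6.0 / (1.0 + x * x + y * y)
        x, y = (1.0 + u * (x * np.cos(t) - y * np.sin(t)),
                u * (x * np.sin(t) + y * np.cos(t)))
        out[n] = (x, y)
    return out[discard:]                      # drop the transient

states = ikeda_trajectory(5000)               # 5000 is an illustrative length
X_now, X_next = states[:-1], states[1:]       # one-step training pairs
```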
6. Empirical Performance: Prediction and Climate Accuracy
Performance is evaluated by the forecasting error

$$\mathrm{NRMSE}(n) = \frac{\lVert \hat{\mathbf{x}}_n - \mathbf{x}_n \rVert}{\sqrt{\left\langle \lVert \mathbf{x} - \bar{\mathbf{x}} \rVert^2 \right\rangle}},$$

where $\bar{\mathbf{x}}$ is the mean of the attractor and $\langle\cdot\rangle$ denotes an average over attractor points.
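A short sketch of this error metric under the normalization assumed above (root-mean-square spread of the attractor about its mean):

```python
import numpy as np

def nrmse(pred, true, attractor):
    """Per-step forecast error normalized by the attractor's spread about its mean."""
    spread = np.sqrt(np.mean(np.sum((attractor - attractor.mean(axis=0)) ** 2, axis=1)))
    return np.linalg.norm(pred - true, axis=1) / spread
```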
- Standard NGRC (a single global polynomial) produces large NRMSE almost instantly, with a forecast horizon much less than one Lyapunov time.
- LB-NGRC with quadratic local models ($p = 2$) extends the forecast horizon to multiple Lyapunov times while keeping the NRMSE low.
- LB-NGRC with cubic local models ($p = 3$) matches this horizon and better reproduces the long-term invariant measure ("climate") of the system, evidenced by accurate attractor statistics over many Lyapunov times.
The LB-NGRC approach is highly effective at forecasting difficult, nonpolynomial systems such as the Ikeda map and outperforms global polynomial NGRC both in short-term and long-term accuracy.
7. Interpretability and Analytical Insights
LB-NGRC provides substantial interpretability advantages over global approaches. Each local predictor $W_k\,\boldsymbol{\phi}_k$ is a low-degree polynomial in locally shifted coordinates, permitting:
- Examination of dominant monomial terms and coefficients specific to regions.
- Analysis of local Jacobians $\partial \hat{\mathbf{x}}_{n+1} / \partial \mathbf{x}_n$ to assess local contraction, expansion, or folding (see the sketch after this list).
- Identification of regions requiring increased polynomial degree for adequate modeling.
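As an illustration of the Jacobian point in the list above, a local model written in shifted coordinates exposes its Jacobian at the ball center directly through the linear-term coefficients; the sketch below assumes the feature ordering of the earlier poly_features helper.

```python
import numpy as np

def local_jacobian_at_center(W_k, state_dim):
    """Jacobian of a local polynomial model at its ball center.

    In shifted coordinates the center is the origin, so only the linear
    monomials contribute: row 0 of W_k is the constant term and rows
    1..state_dim hold the linear-term coefficients (poly_features ordering).
    """
    J = W_k[1:1 + state_dim, :].T             # shape (state_dim, state_dim)
    return J, np.abs(np.linalg.eigvals(J))    # eigenvalue moduli: expansion vs. contraction
```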
The soft weights $w_k(\mathbf{x})$ further act as an attention mechanism, indicating the local models most responsible for predictions at each state, a property useful for adaptive data collection or experimental design.
LB-NGRC maintains the computational and conceptual simplicity of globally trained NGRCs while leveraging (i) phase-space localization, (ii) region-appropriate low-order polynomials, and (iii) smooth RBF blending for superior forecasting and interpretable modeling of complex dynamical systems (Gauthier et al., 30 Mar 2025).