Stochastic NG-RC: Next-Gen Reservoir Computing
- Stochastic NG-RC is a framework that extends next-generation reservoir computing to nonlinear, high-dimensional stochastic systems via controlled Itô SDEs.
- It leverages a reservoir of delayed state features and noise inputs, trained using ridge regression for efficient one-step prediction and adaptive control.
- Empirical evaluations demonstrate robust performance in multiscale dynamics, including applications in seizure suppression using real EEG data.
Stochastic next-generation reservoir computing (S-NG-RC) is a control and modeling framework that extends the next-generation reservoir computing (NG-RC) paradigm to nonlinear, high-dimensional stochastic dynamical systems. S-NG-RC integrates the computational efficiency of NG-RC with explicit stochastic analysis, enabling robust, event-triggered adaptive control and data-driven system identification for both simulated and real-world, multiscale processes with significant noise and uncertainty (Cheng et al., 14 May 2025).
1. Mathematical Structure and Stochastic Modeling
S-NG-RC is built upon controlled Itô stochastic differential equations (SDEs) of the general form:

$$\mathrm{d}X_t = f(X_t, u_t)\,\mathrm{d}t + \sigma(X_t, u_t)\,\mathrm{d}B_t,$$

where $X_t \in \mathbb{R}^n$ denotes the system state, $f$ the drift, $\sigma$ the diffusion, $u_t$ the state-feedback control, and $B_t$ an $m$-dimensional Brownian motion. For much of the exposition, this simplifies to a constant-diffusion model with additive drift control:

$$\mathrm{d}X_t = \big(f(X_t) + u_t\big)\,\mathrm{d}t + \sigma\,\mathrm{d}B_t,$$

which is discretized by Euler–Maruyama:

$$X_{k+1} = X_k + \big(f(X_k) + u_k\big)\,\Delta t + \sigma\sqrt{\Delta t}\,\xi_k,$$

with $\xi_k \sim \mathcal{N}(0, I_m)$ i.i.d.
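A single Euler–Maruyama step of the additive-noise model above can be sketched in a few lines; this is a minimal illustration, and the helper name `em_step` and its signature are not from the paper:

```python
import numpy as np

def em_step(x, u, drift, sigma, dt, rng):
    """One Euler-Maruyama step of dX = (f(X) + u) dt + sigma dB:
    X_{k+1} = X_k + (f(X_k) + u_k) dt + sigma * sqrt(dt) * xi_k."""
    xi = rng.standard_normal(x.shape)  # xi_k ~ N(0, I), i.i.d.
    return x + (drift(x) + u) * dt + sigma * np.sqrt(dt) * xi
```

With `sigma = 0` the update reduces to a forward-Euler step, which makes it easy to sanity-check against the deterministic dynamics.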
The core computational unit is the feature (reservoir) vector at time $t_k$:

$$\mathbb{O}_k = \big[\,1,\; X_k^\top,\; X_{k-1}^\top,\;\dots,\; \text{(selected monomials)}\,\big]^\top,$$

which aggregates the current and delayed states and selected nonlinear monomials (typically up to third order). Additional features include the control input $u_k$ and the noise $\xi_k$, where, for additive noise, the noise feature is $\sigma\sqrt{\Delta t}\,\xi_k$, and for multiplicative noise, $\sigma(X_k)\sqrt{\Delta t}\,\xi_k$. The S-NG-RC one-step predictor is a linear readout:

$$\hat{X}_{k+1} = W_o\,\mathbb{O}_k + W_u\,u_k,$$

with $W_o$ and $W_u$ collecting the readout weights.
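A minimal sketch of such a feature map, assuming a single delay and unique monomials up to third order (the helper name `ngrc_features` is illustrative, not from the paper):

```python
import numpy as np
from itertools import combinations_with_replacement

def ngrc_features(x_hist, max_order=3):
    """NG-RC feature vector: a constant, the stacked current and delayed
    states, and all unique monomials of those entries up to max_order."""
    lin = np.concatenate(x_hist)                  # [X_k, X_{k-1}, ...]
    feats = [np.ones(1), lin]
    for order in range(2, max_order + 1):
        feats.append(np.array([np.prod(c) for c in
                               combinations_with_replacement(lin, order)]))
    return np.concatenate(feats)
```

For a scalar state with one delay, `[X_k, X_{k-1}]` yields 2 linear, 3 quadratic, and 4 cubic features plus the constant, i.e. a 10-dimensional reservoir.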
2. Training, Adaptive Control, and Stability Guarantees
Learning the reservoir readout proceeds via ridge regression, using $N$ sample pairs $\{(\mathbb{O}_k, u_k, X_{k+1})\}_{k=1}^{N}$:

$$\min_{W_o, W_u}\; \sum_{k=1}^{N} \big\| X_{k+1} - W_o\,\mathbb{O}_k - W_u\,u_k \big\|^2 + \lambda\big(\|W_o\|_F^2 + \|W_u\|_F^2\big),$$

with closed-form solution (stacking $\Phi_k = [\mathbb{O}_k^\top, u_k^\top]^\top$ into $\Phi = [\Phi_1, \dots, \Phi_N]$ and the targets into $Y$):

$$[W_o,\, W_u] = Y\,\Phi^\top\big(\Phi\,\Phi^\top + \lambda I\big)^{-1},$$

where $\lambda > 0$ is the Tikhonov regularization parameter.
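The closed-form ridge solve is a one-liner in practice; a sketch, with matrix shapes chosen to match the notation above (column-per-sample convention is an assumption):

```python
import numpy as np

def train_readout(Phi, Y, lam):
    """Tikhonov-regularized readout: W = Y Phi^T (Phi Phi^T + lam I)^{-1}.
    Phi is (d, N) with one feature column per sample; Y is (n, N) with
    the matching one-step-ahead targets."""
    d = Phi.shape[0]
    return Y @ Phi.T @ np.linalg.inv(Phi @ Phi.T + lam * np.eye(d))
```

With $\lambda \to 0$ and noise-free targets generated by a linear map, the map is recovered exactly, which is a convenient correctness check.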
For adaptive control, with $X_{k+1}^{d}$ as the desired next state, define the tracking error $e_k = X_k - X_k^{d}$. The linear error-dynamics template

$$e_{k+1} = A\,e_k$$

(with $A$ chosen so that its spectral radius satisfies $\rho(A) < 1$) prescribes exponential error decay. Solving the one-step predictor for the feedback input yields:

$$u_k = W_u^{\dagger}\big(X_{k+1}^{d} + A\,e_k - W_o\,\mathbb{O}_k\big),$$

where $W_u^{\dagger}$ denotes the Moore–Penrose pseudoinverse.
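The feedback solve can be sketched directly from the predictor; this assumes the notation above and uses a pseudoinverse for the control readout (names are illustrative):

```python
import numpy as np

def feedback_input(Wo, Wu, O_k, x_k, x_des_now, x_des_next, A):
    """Choose u_k so the predicted error follows e_{k+1} = A e_k:
    solve Wo @ O_k + Wu @ u_k = x_des_next + A @ (x_k - x_des_now)."""
    e_k = x_k - x_des_now
    return np.linalg.pinv(Wu) @ (x_des_next + A @ e_k - Wo @ O_k)
```

By construction, the predicted next state then lands exactly on the error-dynamics target whenever $W_u$ has full row rank.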
The asymptotic stability of this control law is ensured via an extended stochastic LaSalle theorem: if there exists a radially unbounded, smooth Lyapunov function $V$ such that the generator $\mathcal{L}$ of the controlled SDE satisfies $\mathcal{L}V(x) \le -\varphi(x)$ for some continuous $\varphi \ge 0$, and the $p$-th moments of $X_t$ are bounded, then almost surely

$$\lim_{t \to \infty} \varphi(X_t) = 0,$$

with the set $\{x : \varphi(x) = 0\}$ identifying the zero-error invariant set (Cheng et al., 14 May 2025).
3. Algorithmic Implementation
The S-NG-RC workflow can be decomposed into the following stages:
- Data Preprocessing
  - Collect open-loop trajectories $\{X_k\}$ under random probe inputs $\{u_k\}$; record the corresponding noise realizations $\{\xi_k\}$.
  - Assemble the feature vectors $\mathbb{O}_k$ together with $u_k$ and the noise features.
- Reservoir Initialization
  - Select the polynomial orders and delays for the monomials in $\mathbb{O}_k$; set the regularization parameter $\lambda$.
- Readout Training
  - Form the sample matrices $\Phi$ and $Y$ as above.
  - Compute $[W_o, W_u] = Y\,\Phi^\top(\Phi\,\Phi^\top + \lambda I)^{-1}$.
- Closed-Loop Control
  - At each step $k$, observe $X_k$; construct $\mathbb{O}_k$ and estimate or sample the noise feature.
  - Compute the tracking error $e_k$.
  - On an event trigger ($\|e_k\|$ exceeding a threshold), update $u_k$ via the feedback law; otherwise, hold $u_k = u_{k-1}$.
  - Apply $u_k$ to the true system and increment the index.
- Control Iteration
  - Repeat the control loop until the time horizon is reached.
No backpropagation or online optimization is required; only a single ridge regression solve and linear controller updates at runtime.
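The runtime stages above can be sketched end to end; this toy version tracks a zero reference for a scalar linear plant and recomputes the input only when the error exceeds a threshold (the plant, the names, and the trigger rule shown are illustrative, not the paper's exact design):

```python
import numpy as np

def event_triggered_loop(x0, steps, plant_step, feedback_law, ref, tol):
    """Event-triggered closed loop: recompute u via the feedback law only
    when ||e_k|| exceeds tol; otherwise hold the previous input."""
    x, u = x0, np.zeros_like(x0)
    traj = [x0]
    for k in range(steps):
        e = x - ref(k)
        if np.linalg.norm(e) > tol:        # event trigger fires
            u = feedback_law(x, k)         # linear controller update
        x = plant_step(x, u)               # apply u to the true system
        traj.append(x)
    return np.stack(traj)
```

Between triggers the input is held constant, which is what keeps the per-step cost down to a feature evaluation and a norm check.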
4. Empirical Performance on Stochastic Van der Pol Dynamics
S-NG-RC demonstrates robust adaptive control on the multi-scale, noise-driven Van der Pol oscillator, described (in the additive-noise case) by:

$$\mathrm{d}x = y\,\mathrm{d}t, \qquad \mathrm{d}y = \big[\mu\,(1 - x^2)\,y - x + u\big]\,\mathrm{d}t + \sigma\,\mathrm{d}B_t,$$

with the multiplicative-noise variant replacing the constant $\sigma$ by a state-dependent diffusion. Testing encompassed both additive and multiplicative noise across a range of timescale parameters and noise intensities.
- Low noise: convergence to the target within 1–2 steps, with low RMSE.
- High noise: classical NG-RC diverges, while S-NG-RC remains stable with RMSE = 0.3632.
- Multiplicative noise: RMSE = 0.2359, with persistent but controlled oscillations in the fast coordinate.
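A minimal Euler–Maruyama simulator for this oscillator, using the standard form above (the paper's exact parameterization may differ):

```python
import numpy as np

def vdp_em(x0, y0, mu, sigma, dt, steps, rng, u=0.0):
    """Simulate dx = y dt, dy = [mu (1 - x^2) y - x + u] dt + sigma dB
    with the Euler-Maruyama scheme; returns the (steps+1, 2) trajectory."""
    traj = np.empty((steps + 1, 2))
    traj[0] = (x0, y0)
    for k in range(steps):
        x, y = traj[k]
        traj[k + 1, 0] = x + y * dt
        traj[k + 1, 1] = (y + (mu * (1 - x * x) * y - x + u) * dt
                          + sigma * np.sqrt(dt) * rng.standard_normal())
    return traj
```

In the deterministic limit ($\mu = 0$, $\sigma = 0$) the scheme reduces to forward Euler on a harmonic oscillator, which provides a simple accuracy check.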
Robustness is summarized in the following RMSE heatmap (averaged over 5 runs), with noise intensity increasing across the columns:

| | 0.1 | 0.5 | 1.0 | 2.0 |
|---|---|---|---|---|
| 1.0 | 0.165 | 0.223 | 0.310 | 0.504 |
| 0.5 | 0.179 | 0.275 | 0.363 | 0.690 |
| 0.1 | 0.233 | 0.482 | 0.748 | 1.105 |
A plausible implication is that S-NG-RC achieves stability and error convergence across three temporal scales and a broad noise intensity range, outperforming traditional NG-RC in high-noise regimes.
5. Data-Driven Applications: Epileptic EEG Control
S-NG-RC has been deployed for closed-loop modulation of pathological dynamics reconstructed from real-world epileptic EEG recordings:
- Governing-law identification:
  - A single EEG channel was normalized and down-sampled.
  - Drift and diffusion terms were fitted from the empirical data using a Kramers–Moyal expansion over a polynomial basis, regularized by LASSO.
  - The resulting sparse solution achieves low fitting RMSE for both the drift and the diffusion terms.
- Seizure suppression control:
  - Target: transition seizure activity toward resting-state dynamics, using the first 500 resting samples as the reference and the next 500 seizure samples as the control interval.
  - Perturbed training data were generated by injecting random perturbations into the learned SDE.
  - S-NG-RC (with Tikhonov regularization): one-step prediction RMSE = 0.1331 on the perturbed data.
  - Closed-loop control over 100 seizure samples: RMSE = 0.0752. Kernel density estimation shows effective amplitude regulation, shifting network states toward the resting distribution.
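The governing-law identification step can be sketched via conditional moments; this toy version uses plain least squares in place of the paper's LASSO and validates on a synthetic Ornstein–Uhlenbeck-style signal (all names are illustrative):

```python
import numpy as np

def km_polyfit(x, dt, degree=3):
    """Kramers-Moyal estimates from a scalar time series: regress the
    first conditional moment dx/dt (drift) and the second conditional
    moment dx^2/dt (squared diffusion) on a polynomial basis of the
    state. Plain least squares stands in for the paper's LASSO."""
    dx = np.diff(x)
    basis = np.vander(x[:-1], degree + 1)   # columns [x^degree, ..., x, 1]
    drift, *_ = np.linalg.lstsq(basis, dx / dt, rcond=None)
    diff2, *_ = np.linalg.lstsq(basis, dx ** 2 / dt, rcond=None)
    return drift, diff2
```

For $\mathrm{d}x = -x\,\mathrm{d}t + 0.5\,\mathrm{d}B_t$ the recovered drift slope should be near $-1$ and the squared diffusion near $0.25$.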
6. Scalability, Robustness, and Current Limitations
S-NG-RC achieves high computational scalability: training requires a single ridge-regression solve, and runtime requires only per-step linear updates whose cost scales with the feature dimension (typically a few hundred features). No iterative, gradient-based optimization is required.
Robustness arises from the explicit inclusion of noise features in the reservoir, facilitating the learning of state-noise interactions and maintaining stability under both additive and multiplicative noise across multiple time scales.
Current limitations include:
- Governing-law errors: low-dimensional SDEs may not capture full network or non-Gaussian noise present in real data (e.g., EEG).
- Model bias: the random perturbation design for control law training may introduce biases.
- Error accumulation: long-term iteration may lead to compounding prediction errors.
Potential extensions encompass:
- Enriching reservoir features with non-Gaussian ($\alpha$-stable) noise models.
- Automating basis-function selection with stochastic stability criteria.
- Joint amplitude-frequency regulation using time-frequency embeddings.
- Optimizing event-trigger design for neuro-modulation safety margins (Cheng et al., 14 May 2025).