
Stochastic NG-RC: Next-Gen Reservoir Computing

Updated 4 April 2026
  • Stochastic NG-RC is a framework that extends next-generation reservoir computing to nonlinear, high-dimensional stochastic systems via controlled Itô SDEs.
  • It leverages a reservoir of delayed state features and noise inputs, trained using ridge regression for efficient one-step prediction and adaptive control.
  • Empirical evaluations demonstrate robust performance in multiscale dynamics, including applications in seizure suppression using real EEG data.

Stochastic next-generation reservoir computing (S-NG-RC) is a control and modeling framework that extends the next-generation reservoir computing (NG-RC) paradigm to nonlinear, high-dimensional stochastic dynamical systems. S-NG-RC integrates the computational efficiency of NG-RC with explicit stochastic analysis, enabling robust, event-triggered adaptive control and data-driven system identification for both simulated and real-world, multiscale processes with significant noise and uncertainty (Cheng et al., 14 May 2025).

1. Mathematical Structure and Stochastic Modeling

S-NG-RC is built upon controlled Itô stochastic differential equations (SDEs) of the general form:

dX_t = [f(X_t) + u_1(X_t)]\,dt + [g(X_t) + u_2(X_t)]\,dW_t,

where $X_t \in \mathbb{R}^n$ denotes the system state, $f:\mathbb{R}^n \to \mathbb{R}^n$ the drift, $g:\mathbb{R}^n \to \mathbb{R}^{n \times m}$ the diffusion, $u_1$ and $u_2$ state-feedback controls, and $W_t$ an $m$-dimensional Brownian motion. For much of the exposition, this simplifies to a constant-diffusion model with additive drift control:

dX_t = [f(X_t) + u(X_t)]\,dt + \sigma\,dW_t, \qquad X_0 = x_0,

which is discretized by Euler–Maruyama:

X_{i+1} = X_i + [f(X_i) + u_i]\,\Delta t + \sigma \sqrt{\Delta t}\,\xi_i,

with $\xi_i \sim \mathcal{N}(0, I)$ i.i.d.
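The Euler–Maruyama discretization above can be sketched in a few lines; the drift, control, and noise intensity below are toy choices for illustration, not the paper's:

```python
import numpy as np

def euler_maruyama(f, u, sigma, x0, dt, n_steps, rng=None):
    """Simulate dX = [f(X) + u(X)] dt + sigma dW by Euler-Maruyama,
    returning the trajectory and the i.i.d. standard-normal draws xi_i."""
    rng = rng or np.random.default_rng(0)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    traj, noises = [x.copy()], []
    for _ in range(n_steps):
        xi = rng.standard_normal(x.shape)              # xi_i ~ N(0, I)
        x = x + (f(x) + u(x)) * dt + sigma * np.sqrt(dt) * xi
        traj.append(x.copy())
        noises.append(xi)
    return np.array(traj), np.array(noises)

# Toy run: mean-reverting drift, zero control.
traj, xi = euler_maruyama(lambda x: -x, lambda x: 0.0 * x, 0.1, [1.0], 1e-2, 500)
```

Recording the noise draws alongside the states matters here, because S-NG-RC later reuses them as reservoir features.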

The core computational unit is the feature (reservoir) vector at time step $i$:

\mathcal{O}_i = \big[\,1,\; X_i,\; X_{i-1},\; \dots,\; X_{i-k};\; \text{selected monomials of these components}\,\big]^\top,

which aggregates the current and delayed states together with selected nonlinear monomials (typically up to third order). Additional features include the control input $u_i$ and the noise $\xi_i$, where, for additive noise, the noise feature is $\sigma\sqrt{\Delta t}\,\xi_i$, and for multiplicative noise, $g(X_i)\sqrt{\Delta t}\,\xi_i$. The S-NG-RC one-step predictor is a linear readout:

\hat{X}_{i+1} = W\,\mathcal{O}_i,

with $W \in \mathbb{R}^{n \times d}$ collecting the readout weights and $d$ the feature dimension.
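A minimal sketch of such a feature map, with a delay depth of one and an illustrative monomial ordering (the paper's exact feature set may differ):

```python
import numpy as np
from itertools import combinations_with_replacement

def ngrc_features(x_now, x_prev, order=3):
    """Feature vector: constant, current + one-step-delayed state, and all
    unique monomials of those components up to the given order."""
    lin = np.concatenate([np.atleast_1d(x_now), np.atleast_1d(x_prev)])
    feats = [np.array([1.0]), lin]
    for k in range(2, order + 1):
        feats.append(np.array([np.prod(c) for c in
                               combinations_with_replacement(lin, k)]))
    return np.concatenate(feats)

phi = ngrc_features(np.array([0.5, -1.0]), np.array([0.4, -0.9]), order=2)
# 1 constant + 4 linear + 10 quadratic = 15 features
```

Using unique monomials (combinations with replacement rather than the full tensor product) keeps the feature dimension small, which is what makes the single ridge solve cheap.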

2. Training, Adaptive Control, and Stability Guarantees

Learning the reservoir readout proceeds via ridge regression over $N$ sample pairs, assembled into a feature matrix $\Phi = [\mathcal{O}_1, \dots, \mathcal{O}_N] \in \mathbb{R}^{d \times N}$ and a target matrix $Y = [X_2, \dots, X_{N+1}] \in \mathbb{R}^{n \times N}$, with closed-form solution:

W = Y\,\Phi^\top \left(\Phi\,\Phi^\top + \lambda I\right)^{-1},

where $\lambda > 0$ is the Tikhonov regularization parameter.
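The closed-form ridge solve is a one-liner; a sketch, assuming features are stacked column-wise as above:

```python
import numpy as np

def ridge_readout(Phi, Y, lam=1e-6):
    """W = Y Phi^T (Phi Phi^T + lam I)^{-1}; Phi is (d, N), Y is (n, N)."""
    d = Phi.shape[0]
    return Y @ Phi.T @ np.linalg.inv(Phi @ Phi.T + lam * np.eye(d))

# Sanity check: with noiseless targets Y = A Phi, the readout recovers A.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((4, 200))
A = rng.standard_normal((2, 4))
W = ridge_readout(Phi, A @ Phi, lam=1e-10)
```

For large $d$ a Cholesky or `np.linalg.solve` call is preferable to the explicit inverse; the inverse is kept here to mirror the formula.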

For adaptive control, with $X^{\mathrm{ref}}_{i+1}$ as the desired next state, define the tracking error $e_i = X_i - X^{\mathrm{ref}}_i$. The linear error-dynamics template

e_{i+1} = A\,e_i

(with $A$ chosen for spectral radius $\rho(A) < 1$) prescribes exponential error decay. Substituting the learned one-step predictor for $X_{i+1}$ and solving for the feedback input yields:

u_i = W_u^{\dagger}\left(X^{\mathrm{ref}}_{i+1} + A\,e_i - W\,\mathcal{O}_i\right),

where $W_u$ denotes the block of readout weights multiplying the control feature.
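If the control enters the learned readout linearly through its own weight block (called `W_u` here, an assumption for illustration), the feedback solve reduces to a small linear system:

```python
import numpy as np

def feedback_input(W_x, W_u, phi, x_ref_next, e, A):
    """Choose u so the learned predictor x_hat = W_x phi + W_u u hits the
    error-template target x_ref_next + A e (least-squares if W_u is not square)."""
    target = x_ref_next + A @ e            # prescribed next state
    return np.linalg.lstsq(W_u, target - W_x @ phi, rcond=None)[0]
```

The least-squares solve handles under- or over-actuated cases where `W_u` is rectangular.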

The asymptotic stability of this control law is theoretically ensured using an extended stochastic LaSalle theorem: under existence of a Lyapunov function $V$, radially unbounded and twice continuously differentiable, with the generator $\mathcal{L}$ of the controlled SDE satisfying $\mathcal{L}V(x) \le -\eta(x)$ for some continuous $\eta \ge 0$, and bounded $p$-th moments of $X_t$, it follows almost surely that

\lim_{t \to \infty} \eta(X_t) = 0,

with $\{x : \eta(x) = 0\}$ identifying the zero-error invariant set (Cheng et al., 14 May 2025).
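For reference, the generator $\mathcal{L}$ acting on a smooth function $V$ is the standard Itô generator of the controlled SDE:

```latex
\mathcal{L}V(x) = \nabla V(x)^\top \big[f(x) + u_1(x)\big]
  + \tfrac{1}{2}\,\operatorname{tr}\!\Big(\big[g(x) + u_2(x)\big]\big[g(x) + u_2(x)\big]^\top \nabla^2 V(x)\Big).
```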

3. Algorithmic Implementation

The S-NG-RC workflow can be decomposed into the following stages:

  1. Data Preprocessing
    • Collect open-loop trajectories $\{X_i\}$ using random probe inputs $u_i$; record the corresponding noise realizations $\xi_i$.
    • Assemble the state features $\mathcal{O}_i$, the control features $u_i$, and the noise features $\sqrt{\Delta t}\,\xi_i$.
  2. Reservoir Initialization
    • Select polynomial orders/delays for the monomials in $\mathcal{O}_i$; set the regularization parameter $\lambda$.
  3. Readout Training
    • Form the sample matrices $\Phi$ and $Y$ as above.
    • Compute $W = Y\Phi^\top(\Phi\Phi^\top + \lambda I)^{-1}$.
  4. Closed-Loop Control
    • At each step $i$, observe $X_i$; construct $\mathcal{O}_i$, and estimate or sample the noise feature.
    • Compute the tracking error $e_i = X_i - X^{\mathrm{ref}}_i$.
    • On an event trigger ($\|e_i\|$ exceeding a threshold), update $u_i$ via the feedback law; otherwise, set $u_i = 0$.
    • Apply $u_i$ to the true system and increment the index.
  5. Control Iteration
    • Repeat the control loop until the time horizon is reached.

No backpropagation or online optimization is required; only a single ridge regression solve and linear controller updates at runtime.
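The workflow above can be condensed into a toy end-to-end sketch: a scalar SDE with a hypothetical drift, a linear-in-features readout, and an idealized noise feature that reuses the sampled increment. The event trigger is omitted (the input is updated every step), and all numerical choices are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, sigma, lam, n = 0.01, 0.05, 1e-6, 2000
f = lambda x: -2.0 * x + 0.5 * x**3        # "unknown" scalar drift (toy)

# 1. Open-loop data with random probe inputs.
x = np.zeros(n + 1)
u = rng.uniform(-1.0, 1.0, n)
xi = rng.standard_normal(n)
for i in range(n):
    x[i + 1] = x[i] + (f(x[i]) + u[i]) * dt + sigma * np.sqrt(dt) * xi[i]

# 2-3. Features [1, x, x^3, u, noise] and the closed-form ridge readout.
Phi = np.stack([np.ones(n), x[:-1], x[:-1] ** 3, u, sigma * np.sqrt(dt) * xi])
Y = x[1:][None, :]
W = Y @ Phi.T @ np.linalg.inv(Phi @ Phi.T + lam * np.eye(5))

# 4-5. Closed loop: drive the state to x_ref with error template e_{i+1} = a e_i.
x_ref, a, xc = 0.8, 0.5, 0.0
for _ in range(500):
    nz = rng.standard_normal()
    phi0 = np.array([1.0, xc, xc ** 3, 0.0, sigma * np.sqrt(dt) * nz])
    target = x_ref + a * (xc - x_ref)       # prescribed next state
    uc = (target - W @ phi0)[0] / W[0, 3]   # solve the linear readout for u
    xc = xc + (f(xc) + uc) * dt + sigma * np.sqrt(dt) * nz
```

Because the true Euler–Maruyama update is exactly linear in these five features, the ridge readout is near-exact and the tracking error contracts geometrically toward the target.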

4. Empirical Performance on Stochastic Van-der-Pol Dynamics

S-NG-RC demonstrates robust adaptive control on the multiscale, noise-driven Van der Pol oscillator. In a slow-fast (Liénard-type) form with timescale separation $\varepsilon$, the controlled dynamics read:

dx_t = \frac{1}{\varepsilon}\Big(x_t - \frac{x_t^3}{3} - y_t\Big)\,dt + u_t\,dt + \sigma\,dW_t, \qquad dy_t = x_t\,dt.

Testing encompassed both additive and multiplicative noise, with the noise intensity $\sigma$ and the timescale parameter $\varepsilon$ varied over a grid of values.

  • Low noise ($\sigma = 0.1$): 1–2 step convergence to the target, with low RMSE.
  • High noise ($\sigma = 1.0$): classical NG-RC diverges; S-NG-RC remains stable, RMSE = 0.3632.
  • Multiplicative noise: RMSE = 0.2359, with persistent, controlled oscillations in the fast coordinate $x$.

Robustness is summarized in the following RMSE table (averaged over 5 runs), with rows indexing the timescale parameter $\varepsilon$ and columns the noise intensity $\sigma$:

  ε \ σ   0.1     0.5     1.0     2.0
  1.0     0.165   0.223   0.310   0.504
  0.5     0.179   0.275   0.363   0.690
  0.1     0.233   0.482   0.748   1.105

A plausible implication is that S-NG-RC achieves stability and error convergence across three temporal scales and a broad noise intensity range, outperforming traditional NG-RC in high-noise regimes.
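As a concrete illustration of the slow-fast dynamics being controlled, a minimal Euler–Maruyama simulation of a stochastic Van der Pol oscillator in Liénard form (parameterization and coefficients here are illustrative, not necessarily those used in the paper):

```python
import numpy as np

def vdp_em(eps=0.1, sigma=0.5, dt=1e-3, n=20_000, seed=0):
    """Uncontrolled slow-fast stochastic Van der Pol (Lienard form):
    eps*dx = (x - x^3/3 - y) dt + eps*sigma dW, dy = x dt."""
    rng = np.random.default_rng(seed)
    x, y = 0.5, 0.0
    xs = np.empty(n)
    for i in range(n):
        x += (x - x**3 / 3 - y) / eps * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        y += x * dt                        # slow variable
        xs[i] = x
    return xs

xs = vdp_em()
```

With $\varepsilon \ll 1$, the fast coordinate $x$ exhibits noisy relaxation oscillations between the two branches of the cubic nullcline, which is the regime where the persistent controlled oscillations reported above appear.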

5. Data-Driven Applications: Epileptic EEG Control

S-NG-RC has been deployed for closed-loop modulation of pathological dynamics reconstructed from real-world epileptic EEG recordings:

  • Governing-law identification:
    • A single EEG channel, normalized and down-sampled to a uniform sampling step $\Delta t$.
    • Drift ($f$) and diffusion ($g$) terms fitted from the empirical data using a Kramers–Moyal expansion with a polynomial basis, regularized by LASSO.
    • The LASSO fit yields sparse drift and diffusion estimates with low fitting RMSE against the empirical Kramers–Moyal coefficients.
  • Seizure suppression control:
    • Target: transition seizure activity to resemble resting-state dynamics, using the first 500 resting samples as reference and the next 500 seizure samples as the control interval.
    • Perturbed training data generated by injecting random control inputs into the learned SDE.
    • S-NG-RC (with ridge regularization $\lambda$): one-step prediction RMSE = 0.1331 on the perturbed data.
    • Closed-loop control over 100 seizure samples: RMSE = 0.0752. Kernel density estimation shows effective amplitude regulation, shifting network states toward the resting distribution.
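The governing-law identification step can be sketched on a toy SDE: estimate the first two Kramers–Moyal coefficients from conditional moments of the increments and fit them in a polynomial basis. The paper uses LASSO; plain least squares is used here for brevity, and the drift/diffusion below are toy choices:

```python
import numpy as np

# First two Kramers-Moyal coefficients: E[dX | X]/dt -> drift f(X),
# E[dX^2 | X]/dt -> squared diffusion g(X)^2.
rng = np.random.default_rng(0)
dt, n = 1e-3, 400_000
f_true = lambda x: -x          # toy drift
g_true = 0.5                   # toy (constant) diffusion
xi = rng.standard_normal(n - 1)
x = np.empty(n)
x[0] = 1.0
sdt = np.sqrt(dt)
for i in range(n - 1):         # Euler-Maruyama sample path
    x[i + 1] = x[i] + f_true(x[i]) * dt + g_true * sdt * xi[i]

dx = np.diff(x)
B = np.vander(x[:-1], 4)       # cubic polynomial basis [x^3, x^2, x, 1]
f_hat = np.linalg.lstsq(B, dx / dt, rcond=None)[0]      # drift coefficients
d_hat = np.linalg.lstsq(B, dx**2 / dt, rcond=None)[0]   # squared diffusion
# f_hat[2] ~ -1 (coefficient of x); d_hat[3] ~ g_true**2 = 0.25
```

Swapping the `lstsq` calls for an L1-penalized solver would recover the sparse-selection behavior described above.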

6. Scalability, Robustness, and Current Limitations

S-NG-RC achieves high computational scalability through a single ridge regression solve for training and per-step linear updates, with per-step cost linear in the feature dimension for reservoirs of up to a few hundred features. No iterative, gradient-based optimization is required.

Robustness arises from the explicit inclusion of noise features $\sqrt{\Delta t}\,\xi_i$ in the reservoir, facilitating the learning of state–noise interactions and maintaining stability under both additive and multiplicative noise across multiple time scales.

Current limitations include:

  • Governing-law errors: low-dimensional SDEs may not capture full network or non-Gaussian noise present in real data (e.g., EEG).
  • Model bias: the random perturbation design for control law training may introduce biases.
  • Error accumulation: long-term iteration may lead to compounding prediction errors.

Potential extensions encompass:

  • Enriching reservoir features with non-Gaussian ($\alpha$-stable) noise models.
  • Automating basis-function selection with stochastic stability criteria.
  • Joint amplitude-frequency regulation using time-frequency embeddings.
  • Optimizing event-trigger design for neuro-modulation safety margins (Cheng et al., 14 May 2025).
