Product Reservoir Computing

Updated 9 February 2026
  • Product reservoir computing is a variant of reservoir computing that employs multiplicative neurons to capture high-order nonlinear dynamics in time-series data.
  • It utilizes a logarithmic transformation to convert complex multiplicative dynamics into a linear framework, facilitating rigorous analysis and efficient short-term memory retention.
  • The architecture achieves competitive performance on chaotic benchmarks like Mackey–Glass and Lorenz while requiring careful input scaling to prevent state saturation.

Product reservoir computing is a variant of the reservoir computing (RC) architecture in which the reservoir consists of multiplicative, or product, neurons rather than the standard additive neurons with nonlinear activations such as $\tanh$. The approach draws direct inspiration from biological neurons whose response curves can, in some cases, be described by a product rather than a traditional sum-and-threshold mechanism. By leveraging the intrinsic nonlinearities of product units, product reservoir computing enables efficient and accurate time-series processing, especially for tasks requiring real-time computation of high-order time correlations. This architecture preserves the core advantage of RC (only the readout layer is trained) while endowing the system with distinct mathematical properties and task-relevant performance characteristics (Goudarzi et al., 2015).

1. Product Reservoir Architecture and Mathematical Formulation

Product reservoir computing adopts a reservoir state vector $\mathbf{x}(t) \in \mathbb{R}^N$, driven by a scalar input $u(t) \in \mathbb{R}^+$ (scaled to $(0,1]$ in empirical studies). The dynamics are defined by a recurrent weight matrix $\boldsymbol\Omega \in \mathbb{R}^{N\times N}$ and an input weight vector $\boldsymbol\omega \in \mathbb{R}^N$. The core distinction from standard RC is the update rule for each node, involving a direct product:

$$x_i(t) = \prod_{j=1}^N \left[ x_j(t-1) \right]^{\Omega_{ij}} \cdot \left[ u(t-1) \right]^{\omega_i}, \quad i=1,\ldots,N.$$

In vector form:

$$\mathbf{x}(t) = \exp \left( \boldsymbol\Omega \, \ln \mathbf{x}(t-1) + \boldsymbol\omega \, \ln u(t-1) \right),$$

where $\ln$ and $\exp$ act elementwise. The readout is a conventional linear map, $y(t) = \boldsymbol\Psi \left[ \mathbf{x}(t); 1 \right]$, with $\boldsymbol\Psi \in \mathbb{R}^{1 \times (N+1)}$ trained by ridge regression. The reservoir state is typically initialized with $x_i(0)=1$ for all $i$, ensuring $\ln x_i(0) = 0$.
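The update rule and linear readout above can be sketched in NumPy. This is a minimal illustration only: the reservoir size, weight draws, and readout weights are arbitrary placeholders rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20                                    # small reservoir for illustration
Omega = rng.normal(size=(N, N))
Omega *= 0.8 / max(abs(np.linalg.eigvals(Omega)))  # rescale spectral radius
omega = 0.1 * rng.normal(size=N)          # small input weights

def step(x, u):
    # x_i(t) = prod_j x_j(t-1)^{Omega_ij} * u(t-1)^{omega_i},
    # computed as exp(Omega @ ln x + omega * ln u)
    return np.exp(Omega @ np.log(x) + omega * np.log(u))

x = np.ones(N)                            # ln x(0) = 0
for u in rng.uniform(0.1, 1.0, size=50):  # inputs scaled into (0, 1]
    x = step(x, u)

# Linear readout y(t) = Psi [x(t); 1]; Psi would be fit by ridge regression
Psi = rng.normal(size=N + 1)
y = Psi @ np.append(x, 1.0)
```

Note that the multiplicative update is evaluated in log space, which keeps all states strictly positive as long as the inputs are.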

2. Memory Capacity and Nonlinear Capacity Analysis

RC architectures are characterized by both linear (short-term) and nonlinear (higher-order) memory capacities. Linear memory is quantified by

$$MC_\tau = \frac{\mathrm{Cov}^2 \left(y(t),\, u(t-\tau)\right)}{\mathrm{Var}\left(y(t)\right)\, \mathrm{Var}\left(u(t-\tau)\right)},$$

with the total linear memory capacity given by $MC = \sum_{\tau=1}^{\tau_{\max}} MC_\tau$.

Nonlinear memory capacity is assessed through recovery of Legendre polynomials $\widehat y_{n,\tau}(t)$ of past inputs $u(t-\tau)$, providing a measure of capacity for reconstructing higher-order statistics. The $n$th-order nonlinear capacity is $NMC_n = \sum_\tau MC_\tau(\widehat y_{n,\tau})$.
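Both capacities reduce to the same squared-correlation score, computed between a trained readout and either a delayed input (linear case) or a Legendre polynomial of it (nonlinear case). A sketch under illustrative assumptions (small reservoir, i.i.d. uniform drive, arbitrary ridge parameter):

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)
N, T, washout = 50, 3000, 200
Omega = rng.normal(size=(N, N))
Omega *= 0.8 / max(abs(np.linalg.eigvals(Omega)))
omega = 0.1 * rng.normal(size=N)

u = rng.uniform(0.1, 1.0, size=T)         # positive i.i.d. drive
X = np.empty((T, N))
x = np.ones(N)
for t in range(T):
    x = np.exp(Omega @ np.log(x) + omega * np.log(u[t]))
    X[t] = x

S = np.column_stack([X[washout:], np.ones(T - washout)])

def capacity(target):
    # Squared correlation Cov^2(y, target) / (Var(y) Var(target)) of a
    # ridge-regression readout trained to reproduce the target signal
    w = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N + 1), S.T @ target)
    return np.corrcoef(S @ w, target)[0, 1] ** 2

tau = 3
delayed = u[washout - tau : T - tau]
mc_tau = capacity(delayed)                    # linear MC_tau
P3 = legendre.Legendre([0, 0, 0, 1])          # 3rd Legendre polynomial
nmc3_tau = capacity(P3(2.0 * delayed - 1.0))  # one delay term of NMC_3
```

Summing `capacity(...)` over delays $\tau = 1, \ldots, \tau_{\max}$ yields the total $MC$ or $NMC_n$.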

Empirically, product RC exhibits a more rapid decay of $MC_\tau$ with delay compared to standard $\tanh$-based echo state networks (ESNs), indicating reduced long-term linear memory but strong short-term retention. For nonlinear capacity, product RCs typically exceed $\tanh$-ESNs except at third order ($n=3$), where traditional ESNs display higher "quality" at short delays; however, product RCs maintain nonlinear recall over longer timescales (Goudarzi et al., 2015).

3. Performance on Chaotic Time-Series Prediction Benchmarks

Performance is evaluated using widely adopted benchmarks: the Mackey–Glass (delay 17) time series and the three-dimensional Lorenz system. For both benchmarks, reservoirs of size $N=500$ with optimal hyperparameters (input scaling $\omega = 0.1$, spectral radius $\lambda = \rho(\boldsymbol\Omega) = 0.8$) are employed.

For one-step prediction, product RC and the nonlinear $\tanh$-ESN both achieve NMSE $\sim 10^{-4}$, while the linear ESN performs significantly worse (NMSE $\sim 10^{-2}$ to $10^{-1}$). In multi-step prediction scenarios, the error growth trajectories of product RC and the $\tanh$-ESN are closely matched, even across prediction horizons of several dozen steps, demonstrating competitive dynamical forecasting capacity.
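The evaluation pipeline (drive the reservoir, fit a ridge readout targeting $u(t+1)$, score NMSE on held-out steps) can be sketched as follows. A logistic-map series stands in here for Mackey–Glass, and the reservoir size and regularization are scaled-down illustrative choices, so the resulting error is not comparable to the paper's figures.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in chaotic series (logistic map, shifted strictly inside (0, 1])
# in place of Mackey-Glass; the training/evaluation pipeline is the same.
T = 2000
u = np.empty(T)
u[0] = 0.4
for t in range(1, T):
    u[t] = 3.9 * u[t - 1] * (1.0 - u[t - 1])
u = 0.98 * u + 0.01

# Small product reservoir (the paper uses N = 500)
N = 100
Omega = rng.normal(size=(N, N))
Omega *= 0.8 / max(abs(np.linalg.eigvals(Omega)))
omega = 0.1 * rng.normal(size=N)

X = np.empty((T, N))
x = np.ones(N)
for t in range(T):
    x = np.exp(Omega @ np.log(x) + omega * np.log(u[t]))
    X[t] = x

# Ridge-regression readout for one-step prediction: state at t -> u(t+1)
washout, split = 100, 1500
S = np.column_stack([X, np.ones(T)])
A, b = S[washout:split], u[washout + 1 : split + 1]
Psi = np.linalg.solve(A.T @ A + 1e-4 * np.eye(N + 1), A.T @ b)

# Held-out one-step NMSE = mean squared error / variance of the target
pred = S[split:-1] @ Psi
err = np.mean((pred - u[split + 1 :]) ** 2) / np.var(u[split + 1 :])
```

Multi-step forecasting follows the same scheme, except the readout's prediction is fed back in place of the true input at each step.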

4. Mathematical Analysis and Echo-State Property

A critical feature of product reservoir computing is that the nonlinear product dynamics become linear in logarithmic space:

$$\ln \mathbf{x}(t) = \boldsymbol\Omega \ln \mathbf{x}(t-1) + \boldsymbol\omega \ln u(t-1).$$

This formulation enables direct analysis via linear systems theory. The system admits a closed-form solution in terms of the initial state and input history:

$$\mathbf{x}(t) = \exp \left( \boldsymbol\Omega^t \ln \mathbf{x}(0) + \sum_{k=0}^{t-1} \boldsymbol\Omega^{t-k-1} \boldsymbol\omega \ln u(k) \right).$$

The echo-state property is guaranteed if $\rho(\boldsymbol\Omega)<1$, in which case the influence of the initial state diminishes and the reservoir state becomes a function solely of recent inputs. These results facilitate rigorous spectral and state-space analysis of product reservoirs, with all nonlinearity restricted to the exponentiation/logarithm wrapping an intrinsically linear "kernel." This transparent structure contrasts with the more opaque dynamics of traditional sum-and-nonlinearity reservoirs (Goudarzi et al., 2015).
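Both properties are easy to check numerically: with spectral radius below one, two runs started from different positive initial states converge, and iterating the update reproduces the closed-form solution. A sketch (all sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 30
Omega = rng.normal(size=(N, N))
Omega *= 0.8 / max(abs(np.linalg.eigvals(Omega)))  # rho(Omega) = 0.8 < 1
omega = 0.1 * rng.normal(size=N)
u = rng.uniform(0.1, 1.0, size=200)

def run(x0, steps):
    x = x0.copy()
    for ut in u[:steps]:
        x = np.exp(Omega @ np.log(x) + omega * np.log(ut))
    return x

# Echo-state property: the initial condition is forgotten
xa = run(np.ones(N), 200)                     # standard start, ln x(0) = 0
xb = run(rng.uniform(0.5, 2.0, size=N), 200)  # different positive start
gap = np.max(np.abs(np.log(xa) - np.log(xb))) # shrinks like rho^t

# Closed form: x(t) = exp(Omega^t ln x(0) + sum_k Omega^{t-k-1} omega ln u(k))
t = 5
x0 = rng.uniform(0.5, 2.0, size=N)
direct = run(x0, t)
closed = np.exp(
    np.linalg.matrix_power(Omega, t) @ np.log(x0)
    + sum(np.linalg.matrix_power(Omega, t - k - 1) @ (omega * np.log(u[k]))
          for k in range(t))
)
```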

5. Implementation Considerations

Product reservoir networks are typically fully connected, with $\boldsymbol\Omega$ and $\boldsymbol\omega$ initialized with i.i.d. $\mathcal{N}(0,1)$ entries and rescaled to the target spectral radius $\lambda$ and input scaling $\omega$, respectively. Training proceeds in two phases: reservoir state trajectories are collected over 2000 input steps, then ridge (pseudo-inverse) regression fits the readout weights. Evaluation is performed on a fresh run, computing NMSE or memory capacities. All states and inputs must be kept positive to avoid complex values; this necessitates scaling inputs into $(0,1]$ and restricting reservoir initialization and evolution to the positive orthant.
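A minimal initialization helper reflecting these considerations; the function names and the min-max rescaling of inputs are illustrative choices, not prescriptions from the paper.

```python
import numpy as np

def init_product_reservoir(N, spectral_radius=0.8, input_scaling=0.1, seed=0):
    # i.i.d. N(0,1) weights, rescaled to the target spectral radius and
    # input scaling (defaults follow the hyperparameters quoted above)
    rng = np.random.default_rng(seed)
    Omega = rng.normal(size=(N, N))
    Omega *= spectral_radius / max(abs(np.linalg.eigvals(Omega)))
    omega = rng.normal(size=N)
    omega *= input_scaling / max(abs(omega))
    return Omega, omega

def scale_input(u, eps=1e-3):
    # Map a real-valued series into [eps, 1] so log(u) is always defined;
    # positivity is required because the update exponentiates log-states.
    u = np.asarray(u, dtype=float)
    return eps + (1.0 - eps) * (u - u.min()) / (u.max() - u.min())
```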

6. Implications, Limitations, and Future Directions

Product reservoir computing matches or slightly exceeds $\tanh$-based ESNs in nonlinear computational capacity across benchmarks and memory analyses. Its inherently linear-in-log description yields greater transparency in echo-state analysis and capacity quantification. Limitations include the requirement for positive-only inputs and a tendency for node states to "saturate" to zero under large exponents or weights close to unity. In practice, optimal operation is achieved with small input scaling and spectral radii below or close to 1.

Proposed directions for extension include the introduction of bias terms and multiplicative readouts to enhance expressivity, investigation of negative or complex value propagation (potentially relevant for phase-encoded signals), and application to tasks demanding explicit high-order correlation extraction. Product reservoir computing thus provides a mathematically tractable alternative to sum-and-$\tanh$ ESNs, opening new avenues for theoretical analysis and interpretation of biologically inspired computation (Goudarzi et al., 2015).
