Relating trained RNN solutions to biological neural computations

Determine the relationship between recurrent neural networks trained to produce specified readout functions and the actual computations carried out by biological neural circuits, given that many distinct high-dimensional systems can implement the same low-dimensional readout behavior.

Background

The paper develops a data-driven spectral submanifold (SSM) approach to reduce high-dimensional recurrent neural networks (RNNs) to low-dimensional, interpretable dynamical models that capture core computations in tasks such as context-dependent decision-making, oscillation generation, and working memory. While these reductions clarify the internal phase-space structure of trained RNNs, it remains uncertain how such solutions map onto computations realized by biological neural populations.
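
A minimal sketch of this kind of data-driven reduction is shown below. It is not the paper's SSM procedure: a toy random RNN stands in for a trained network, PCA stands in for identification of the slow spectral subspace, and a quadratic least-squares regression stands in for the reduced dynamics; all dimensions, parameters, and names are illustrative assumptions.

```python
# Hedged sketch: project simulated RNN trajectories onto a low-dimensional
# subspace and fit polynomial reduced dynamics. PCA and least squares are
# simplifying stand-ins, not the paper's SSM pipeline.
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" RNN (assumption): dx/dt = -x + W tanh(x), random weights
N = 200                                                # hidden units
W = 0.9 * rng.standard_normal((N, N)) / np.sqrt(N)    # recurrent weights

def rnn_step(x, dt=0.1):
    """One Euler step of the toy RNN dynamics."""
    return x + dt * (-x + W @ np.tanh(x))

# Collect trajectories from random initial conditions
trajs = []
for _ in range(20):
    x = rng.standard_normal(N)
    xs = [x]
    for _ in range(300):
        x = rnn_step(x)
        xs.append(x)
    trajs.append(np.array(xs))
X = np.concatenate([t[:-1] for t in trajs])       # states at time k
X_next = np.concatenate([t[1:] for t in trajs])   # states at time k+1

# Low-dimensional subspace via PCA (stand-in for spectral subspace)
d = 2                                              # reduced dimension (assumption)
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
P = Vt[:d]                                         # d x N projection
z, z_next = (X - mu) @ P.T, (X_next - mu) @ P.T    # reduced coordinates

# Fit reduced map z_{k+1} = F(z_k) with constant, linear, quadratic features
def features(z):
    quad = np.einsum('ni,nj->nij', z, z).reshape(len(z), -1)
    return np.hstack([np.ones((len(z), 1)), z, quad])

coeffs, *_ = np.linalg.lstsq(features(z), z_next, rcond=None)

# One-step prediction error of the reduced model
err = np.linalg.norm(features(z) @ coeffs - z_next) / np.linalg.norm(z_next)
print(f"relative one-step error of {d}-D reduced model: {err:.3f}")
```

In an SSM-based workflow one would instead identify the slow spectral subspace from the linearized dynamics near an attractor and parameterize the invariant manifold and its reduced dynamics with higher-order polynomials; the sketch only illustrates the overall reduce-then-fit structure.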

The authors explicitly note that many distinct high-dimensional parameterizations can yield the same low-dimensional readout in trained RNNs, raising a fundamental question about how closely these engineered solutions correspond to neural computations in vivo. This uncertainty motivates methods that bridge model-derived dynamics and biological mechanisms.

References

The relationship between RNNs trained to perform specific readout functions and the actual computations carried out by neurons is unclear, as many solutions are available for a high-dimensional system to produce a low-dimensional readout.

Data-Driven Reduced Modeling of Recurrent Neural Networks (arXiv:2510.13519, Marraffa et al., 15 Oct 2025), Discussion.