
Feedback promotes efficient-coding while attenuating bias in recurrent neural networks

Published 27 Sep 2025 in q-bio.NC (arXiv:2509.23104v1)

Abstract: Studies of human decision-making demonstrate that environmental regularities, such as natural image statistics or intentionally nonuniform stimulus probabilities, can be exploited to improve efficiency (termed `efficient-coding'). Conversely, from a machine learning perspective, such nonuniform stimulus properties can lead to biased neural networks with poor generalization performance. Understanding how the brain flexibly leverages stimulus bias while maintaining robust generalization could lead to novel architectures that adaptively exploit environmental structure without sacrificing performance on out-of-distribution data. To address this disconnect, we investigated the impact of stimulus regularities in a 3-layer hierarchical continuous-time recurrent neural network (ctRNN) to better understand how artificial networks might exploit statistical regularities to improve efficiency while avoiding undesirable biases. We trained the model to reproduce one of six possible inputs under biased conditions (stimulus 1 more probable than stimuli 2-6) or unbiased conditions (all stimuli equally likely). Across all hidden layers, more information was encoded about high-probability stimuli, consistent with the efficient-coding framework. Importantly, reducing feedback from the final hidden layer of trained models selectively magnified representations of high-probability stimuli, at the expense of low-probability stimuli, across all layers. Together, these results suggest that models exploit nonuniform input statistics to improve efficiency, and that feedback pathways evolve to protect the processing of low-probability stimuli by regulating the impact of biased input statistics.
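The setup described in the abstract can be illustrated with a minimal sketch: a three-layer hierarchy of continuous-time recurrent units driven by one-hot stimuli drawn from biased or unbiased distributions. The layer sizes, time constant, step size, and the exact biased probabilities are illustrative assumptions (the abstract does not report them), and the feedback connections that the paper manipulates are omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and dynamics constants; not reported in the abstract.
n_stim, n_hidden, dt, tau = 6, 64, 0.1, 1.0

# Biased condition: stimulus 1 more probable than stimuli 2-6.
# The exact probabilities below are an illustrative assumption.
p_biased = np.array([0.5] + [0.1] * 5)
p_unbiased = np.full(n_stim, 1.0 / n_stim)

def sample_stimulus(p):
    """Draw a stimulus index from distribution p and one-hot encode it."""
    x = np.zeros(n_stim)
    x[rng.choice(n_stim, p=p)] = 1.0
    return x

def ctrnn_step(h, x, W_in, W_rec, b):
    """One Euler step of standard ctRNN dynamics:
    tau * dh/dt = -h + tanh(W_rec @ h + W_in @ x + b)."""
    return h + (dt / tau) * (-h + np.tanh(W_rec @ h + W_in @ x + b))

# Three hidden layers stacked feedforward (trained feedback pathways omitted).
layers = [
    {"W_in": rng.normal(0.0, 0.1, (n_hidden, n_stim if i == 0 else n_hidden)),
     "W_rec": rng.normal(0.0, 1.0 / np.sqrt(n_hidden), (n_hidden, n_hidden)),
     "b": np.zeros(n_hidden),
     "h": np.zeros(n_hidden)}
    for i in range(3)
]

# Present one biased-condition stimulus and run the hierarchy forward.
x = sample_stimulus(p_biased)
for t in range(50):
    inp = x
    for layer in layers:
        layer["h"] = ctrnn_step(layer["h"], inp,
                                layer["W_in"], layer["W_rec"], layer["b"])
        inp = layer["h"]
```

Training such a model to reproduce its input (as the paper does) would add a readout and a reconstruction loss on top of these dynamics; the paper's feedback-reduction analysis would then scale down the projections from the final hidden layer back to earlier layers.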
