
Energy-efficient recurrent networks with competitive performance on neuromorphic hardware

Determine whether recurrent neural network models, such as echo-state networks, liquid-state machines, and state-space models, can be implemented on neuromorphic systems so that their energy profile benefits from scale-free connectivity (constant fan-in) or from converging network activity (firing rates that decrease over time), while maintaining competitive task performance.


Background

The analysis argues that dense feed-forward ANNs map poorly to neuromorphic hardware because their synapse counts, and hence their energy costs, scale quadratically with network width. Recurrent networks may be better suited: they reuse components across time steps and can exploit converging activity to reduce energy over the course of a computation.
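
To make the scaling contrast concrete, the following minimal Python sketch (an illustration assumed for this summary, not code from the paper) compares the synapse count of a dense all-to-all layer, which grows as n^2, with that of a recurrent network whose neurons each have a fixed fan-in k, which grows only as k*n:

```python
# Assumed illustration: synapse-count scaling behind the argument above.

def dense_synapses(n: int) -> int:
    # Every neuron receives input from all n neurons: quadratic growth.
    return n * n

def constant_fanin_synapses(n: int, k: int) -> int:
    # Every neuron receives input from a fixed set of k neurons: linear growth.
    return k * n

for n in (1_000, 10_000, 100_000):
    print(f"n={n:>7}: dense={dense_synapses(n):>14,}  "
          f"fan-in k=100: {constant_fanin_synapses(n, 100):>12,}")
```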

However, achieving the desired energy characteristics, whether through scale-free connectivity (constant fan-in) or through activity convergence, has not been demonstrated alongside competitive performance. This leaves a key uncertainty about the practicality of such recurrent designs on neuromorphic platforms.
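
As a rough illustration of the two energy characteristics in question, the sketch below (assumed for this summary; the paper provides no code) builds an echo-state-style reservoir in which every neuron has a constant fan-in of k recurrent inputs, drives it with a brief input pulse, and tracks the mean activity as the state converges. The network size n, fan-in k, and spectral radius are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def constant_fanin_reservoir(n, k, spectral_radius=0.9):
    """Sparse recurrent weight matrix in which every neuron has exactly k inputs."""
    W = np.zeros((n, n))
    for i in range(n):
        sources = rng.choice(n, size=k, replace=False)  # fixed in-degree k
        W[i, sources] = rng.standard_normal(k)
    # Rescale so the largest eigenvalue magnitude equals the target radius,
    # the usual echo-state-property heuristic.
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    return W

n, k = 500, 10            # k*n = 5,000 synapses instead of n*n = 250,000
W = constant_fanin_reservoir(n, k)
w_in = 0.5 * rng.standard_normal(n)

x = np.zeros(n)
activity = []
for t in range(200):
    u = 1.0 if t < 20 else 0.0   # brief input pulse, then let the state relax
    x = np.tanh(W @ x + w_in * u)
    activity.append(float(np.mean(np.abs(x))))

print(f"mean |state| just after the pulse: {activity[25]:.4f}")
print(f"mean |state| at the end:           {activity[199]:.4f}")
```

With a spectral radius below one, the echo state property makes the post-stimulus activity decay toward zero, which is the kind of converging activity the question asks neuromorphic implementations to exploit, while the fixed in-degree keeps the synapse count linear in n. Whether such a design retains competitive task performance is precisely the open question stated above.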

References

To date it remains an open question whether such recurrent networks can be realized with these energy characteristics while providing competitive performance.

Neuromorphic Computing: A Theoretical Framework for Time, Space, and Energy Scaling (arXiv:2507.17886, Aimone, 23 Jul 2025), Section 4.3 "A less-promising example: dense linear algebra-based ANNs" (final paragraph).