Memory/storage capacity of associative learning in SOMN architectures

Determine the maximum number of distinct states or input–output associations that can be learned and/or reliably stored in Self-Organising Memristive Network (SOMN) architectures operating under their native plasticity and feedback dynamics.
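The measurement the question calls for can be made concrete with a toy protocol. The sketch below does not model SOMN dynamics (physical memristive plasticity and feedback are absent); it uses a Hebbian outer-product associative memory as an assumed stand-in, with hypothetical names (recall_accuracy, n_units) and an arbitrary 90% recall threshold chosen only for illustration. It stores an increasing number of random key-value pairs and reports the largest load that is still recalled reliably, which is the operational meaning of "storage capacity" here.

```python
import numpy as np

# Illustrative capacity-estimation protocol. The Hebbian (outer-product) memory
# below is an assumed stand-in for SOMN dynamics, which are not modelled here.
rng = np.random.default_rng(0)

def recall_accuracy(n_units: int, n_pairs: int, n_trials: int = 20) -> float:
    """Average fraction of stored key -> value associations recalled exactly."""
    correct = 0.0
    for _ in range(n_trials):
        keys = rng.choice([-1, 1], size=(n_pairs, n_units))
        vals = rng.choice([-1, 1], size=(n_pairs, n_units))
        W = keys.T @ vals / n_units                 # Hebbian storage of all pairs
        recalled = np.where(keys @ W >= 0, 1, -1)   # one-shot recall + threshold
        correct += np.all(recalled == vals, axis=1).mean()
    return correct / n_trials

# Sweep the number of stored associations and report the largest load that is
# still recalled reliably (>= 90% of pairs recovered exactly, an arbitrary bar).
n_units = 100
capacity = 0
for n_pairs in range(1, n_units):
    if recall_accuracy(n_units, n_pairs) >= 0.9:
        capacity = n_pairs
    else:
        break
print(f"Estimated capacity for {n_units} units: ~{capacity} associations")
```

In a physical SOMN, the analogous sweep would replace the outer-product rule with the network's native training dynamics and the 90% bar with whatever read-out fidelity the application requires.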

Background

The paper discusses how SOMNs perform associative learning through intrinsic plasticity and feedback, without explicit programming, emulating metaplasticity and the consolidation of short-term into long-term memory. These properties parallel those of biological neural networks and suggest that cognitive functions can emerge at the material level.

The authors explicitly identify the quantification of how many associations or memory states can be stored in such physical architectures as an open question. Establishing this capacity is crucial for understanding the scalability and reliability of SOMN-based associative learning.
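As a rough point of comparison not drawn from the SOMN setting, classical Hopfield-type associative memories with N binary units obey linear capacity scaling laws of approximately

\[
K_{\max} \approx 0.138\,N \quad \text{(small retrieval errors tolerated)}, \qquad
K_{\max} \approx \frac{N}{4\ln N} \quad \text{(error-free recall of all stored patterns)},
\]

where K_max is the number of storable patterns. Whether SOMN capacity follows a comparable scaling in the number of memristive elements is part of the open question.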

References

"A key open question is how many states can be learned and/or stored in these types of physical architectures?"

Caravelli et al., "Self-Organising Memristive Networks as Physical Learning Systems," arXiv:2509.00747 (31 Aug 2025), Section 5.2 (Associative learning).