
Beyond catastrophic forgetting in associative networks with self-interactions (2504.04560v1)

Published 6 Apr 2025 in cond-mat.dis-nn, cond-mat.stat-mech, and q-bio.NC

Abstract: Spin-glass models of associative memories are a cornerstone between statistical physics and theoretical neuroscience. In these networks, stochastic spin-like units interact through a synaptic matrix shaped by local Hebbian learning. In the absence of self-interactions (i.e., autapses), the free energy reveals catastrophic forgetting of all stored patterns when their number exceeds a critical memory load. Here, we bridge the gap with biology by considering networks of deterministic, graded units coupled via the same Amari-Hopfield synaptic matrix, while retaining autapses. Contrary to the assumption that self-couplings play a negligible role, we demonstrate that they qualitatively reshape the energy landscape, confining the recurrent dynamics to the subspace hosting the stored patterns. This allows for the derivation of an exact overlap-dependent Lyapunov function, valid even for networks of finite size. Moreover, self-interactions generate an auxiliary internal field aligned with the target memory pattern, widening the repertoire of accessible attractor states. Consequently, pure recall states act as robust associative memories for any memory load, beyond the critical threshold for catastrophic forgetting observed in spin-glass models -- all without requiring nonlocal learning prescriptions or significant reshaping of the Hebbian synaptic matrix.
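The setup described in the abstract can be sketched numerically. The following is a minimal, illustrative NumPy sketch (not a reproduction of the paper's analysis): it builds the Amari-Hopfield Hebbian matrix from random binary patterns, once with the diagonal self-couplings retained and once with them zeroed out (the usual spin-glass convention), and runs deterministic graded-unit dynamics from a noisy cue. All parameter choices (network size, memory load, gain `beta`, number of steps) are arbitrary assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 50  # network size and number of stored patterns (load alpha = P/N)
xi = rng.choice([-1.0, 1.0], size=(P, N))  # stored binary patterns

# Amari-Hopfield (Hebbian) synaptic matrix; diagonal self-couplings J_ii = P/N
J = xi.T @ xi / N

# Conventional spin-glass variant: autapses (self-couplings) removed
J_no_self = J - np.diag(np.diag(J))

def recall(J, cue, beta=2.0, steps=100):
    """Deterministic graded-unit dynamics: x <- tanh(beta * J @ x)."""
    x = cue.copy()
    for _ in range(steps):
        x = np.tanh(beta * J @ x)
    return x

# Cue the network with a corrupted copy of pattern 0 (~20% of units flipped)
cue = xi[0] * np.sign(rng.random(N) - 0.2)

# Overlap m with the target pattern, for both coupling matrices
m_with = abs(recall(J, cue) @ xi[0]) / N
m_without = abs(recall(J_no_self, cue) @ xi[0]) / N
print(f"overlap with autapses:    {m_with:.2f}")
print(f"overlap without autapses: {m_without:.2f}")
```

The overlap m measures recall quality (m close to 1 means the fixed point aligns with the stored pattern); the paper's claim is that retaining the autapses keeps pure recall states stable even at loads where the zero-diagonal network forgets, though this toy run is only a qualitative illustration.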
