Convergent Weight and Activation Dynamics in Memristor Neural Networks (2507.20634v1)
Abstract: Convergence of dynamic feedback neural networks (NNs), such as the Cohen-Grossberg, Hopfield, and cellular NNs, has long been a workhorse of NN theory. Indeed, convergence in the presence of multiple stable equilibrium points (EPs) is crucial for implementing content addressable memories and solving several other signal-processing tasks in real time. There are two typical ways to use a convergent NN: a) let the activations evolve while keeping the weights and inputs fixed (activation dynamics), or b) adapt the weights while keeping the activations fixed (weight dynamics). As remarked in a seminal paper by Hirsch, there is a third interesting possibility: let the neuron interconnection weights evolve while simultaneously running the activation dynamics (weight-activation dynamics). The weight-activation dynamics is also important because it is a more plausible model of neural systems than the other two types. The paper breaks new ground by providing the first systematic analysis of the convergence properties of the weight-activation dynamics for a class of memristor feedback dynamic NNs. The main result is that, under suitable assumptions on the structure of the memristor interconnections, the solutions (weights and activations) converge to an EP, except at most for a set of initial conditions of measure zero. The result covers the most important case, in which the NN has multiple stable EPs.
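To make the distinction among the three operating modes concrete, the sketch below writes a generic feedback NN as coupled ODEs. This is an illustrative schematic only, not the paper's memristor model: the state $x$, weight matrix $W$, activation $g$, external input $u$, and weight adaptation law $F$ are all assumed here for exposition.

```latex
% Activation dynamics: weights W and inputs u held fixed, only x evolves
\dot{x}_i(t) = -x_i(t) + \sum_{j} W_{ij}\, g\big(x_j(t)\big) + u_i

% Weight dynamics: activations held fixed at some x^*, only W evolves
\dot{W}_{ij}(t) = F_{ij}\big(W(t),\, x^*\big)

% Weight-activation dynamics: weights and activations evolve simultaneously
\dot{x}_i(t)   = -x_i(t) + \sum_{j} W_{ij}(t)\, g\big(x_j(t)\big) + u_i,
\qquad
\dot{W}_{ij}(t) = F_{ij}\big(W(t),\, x(t)\big)
```

In the third case the weight equations are coupled back into the activation equations, which is what makes convergence analysis harder than in the first two modes and is the setting addressed by the paper for memristor interconnections.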