Remembrance of Tasks Past in Tunable Physical Networks
Abstract: Sequential learning in physical networks is hindered by catastrophic forgetting, where training a new task erases solutions to earlier ones. We show that we can significantly enhance memory of previous tasks by introducing a hard threshold in the learning rule, allowing only edges with sufficiently large training signals to be altered. Thresholding confines tuning to the spatial vicinity of the inputs and outputs for each task, effectively partitioning the network into weakly overlapping functional regions. Using simulations of tunable resistor networks, we demonstrate that this strategy enables robust memory of multiple sequential tasks while reducing the number of edges that must be tuned and the overall tuning cost. Our results point to constrained training as a simple, local, and scalable mechanism to overcome catastrophic forgetting in tunable matter.
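The core mechanism described above — a learning rule that updates an edge only when its local training signal exceeds a hard threshold — can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the gradient-descent form of the update, and the parameters `lr` and `theta` are illustrative assumptions, standing in for whatever local rule (e.g. a coupled-learning-style update) drives the resistor network.

```python
import numpy as np

def thresholded_update(conductances, signals, lr=0.1, theta=0.5):
    """Hard-thresholded local learning step (illustrative sketch).

    conductances : per-edge tunable values (e.g. resistor conductances)
    signals      : per-edge training signals from the current task
    Only edges whose |signal| exceeds the threshold `theta` are altered;
    the rest keep their values, preserving solutions to earlier tasks.
    """
    signals = np.asarray(signals, dtype=float)
    mask = np.abs(signals) > theta          # edges eligible for tuning
    updated = np.array(conductances, dtype=float)
    updated[mask] -= lr * signals[mask]     # descend along the signal
    return updated, mask

# Toy example: only the two large-signal edges are modified.
k = np.ones(5)
s = np.array([0.01, 0.9, -0.02, -1.2, 0.3])
k_new, touched = thresholded_update(k, s)
```

Because the update is gated edge by edge, tuning naturally concentrates where training signals are strongest, i.e. near each task's inputs and outputs, which is the spatial confinement the abstract describes.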