Learning and Unlearning: Bridging classification, memory and generative modeling in Recurrent Neural Networks (2410.05886v1)
Abstract: The human brain is a complex system that has fascinated scientists for a long time. Its remarkable capabilities include the categorization of concepts, the retrieval of memories, and the creative generation of new examples. Modern artificial neural networks are trained on large amounts of data to accomplish these same tasks with considerable precision. In contrast with biological systems, however, machines appear to be both significantly slower and more energetically expensive to train, suggesting the need for a paradigmatic change in the way they learn. Here we review a general learning prescription that enables classification, memorization, and generation of new examples in bio-inspired artificial neural networks. The training procedure can be split into an initial Hebbian learning phase and a subsequent anti-Hebbian one (usually referred to as Unlearning). Separating training into two epochs allows the algorithm to be fully unsupervised while partially aligning with some modern biological theories of learning.
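The abstract does not spell out the update rules, but the two-phase scheme it describes matches the classical recipe for recurrent (Hopfield-type) networks: a Hebbian outer-product phase that stores patterns in the couplings, followed by an anti-Hebbian phase that weakens spurious attractors sampled from the network's own dynamics. The sketch below illustrates that recipe under these assumptions; all sizes, step counts, and the learning rate `eps` are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 64, 5  # neurons and stored patterns (illustrative sizes)

# Random binary patterns to memorize
patterns = rng.choice([-1, 1], size=(P, N)).astype(float)

# Phase 1: Hebbian learning (outer-product rule, zero self-couplings)
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0.0)

def relax(s, J, sweeps=100):
    """Asynchronous zero-temperature dynamics until a fixed point."""
    for _ in range(sweeps):
        changed = False
        for i in rng.permutation(len(s)):
            h = J[i] @ s
            new = 1.0 if h >= 0 else -1.0
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:
            break
    return s

# Phase 2: anti-Hebbian "Unlearning": relax from random states and
# subtract the reached attractor from the couplings
eps = 0.01
for _ in range(50):
    s = relax(rng.choice([-1.0, 1.0], size=N), J)
    J -= eps * np.outer(s, s) / N
    np.fill_diagonal(J, 0.0)

# Memory check: a noisy cue should still recover the stored pattern
cue = patterns[0].copy()
flip = rng.choice(N, size=5, replace=False)
cue[flip] *= -1
recovered = relax(cue, J)
overlap = abs(recovered @ patterns[0]) / N
print(overlap)
```

Both phases are fully unsupervised: the Hebbian phase only sees the data, and the unlearning phase only uses states generated by the network itself, which is the property the abstract highlights.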