CLASSP: a Biologically-Inspired Approach to Continual Learning through Adjustment Suppression and Sparsity Promotion (2405.09637v2)
Abstract: This paper introduces a new biologically-inspired training method named Continual Learning through Adjustment Suppression and Sparsity Promotion (CLASSP). CLASSP is based on two main principles observed in neuroscience, particularly in the context of synaptic transmission and Long-Term Potentiation (LTP). The first principle is a decay rate over the weight adjustments, implemented as a generalization of the AdaGrad optimization algorithm: weights that have received many updates are given lower learning rates, since they likely encode important information about previously seen data. On its own, however, this principle spreads updates diffusely throughout the model, because it favors updates to weights that have not been updated before, whereas a sparse update distribution is preferable so that weights remain unassigned for future tasks. The second principle therefore introduces a threshold on the loss gradient: a weight is updated only if the loss gradient with respect to that weight exceeds the threshold, i.e., only weights with a significant impact on the current loss are adjusted. Both principles reflect phenomena observed in LTP, where a threshold effect and a gradual saturation of potentiation have been reported. CLASSP is implemented as a Python/PyTorch class, making it applicable to any model. Compared with Elastic Weight Consolidation (EWC) on computer vision and sentiment analysis datasets, CLASSP demonstrates superior accuracy and a smaller memory footprint.
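The abstract describes two mechanisms that can be combined in a single optimizer: an AdaGrad-style accumulator that suppresses adjustments to frequently updated weights, and a gradient-magnitude threshold that keeps updates sparse. The sketch below illustrates one way such an optimizer could look in PyTorch; the class name `CLASSPSketch` and the hyperparameters `p` (decay exponent) and `threshold` are illustrative assumptions, not the authors' published interface.

```python
import torch
from torch.optim import Optimizer


class CLASSPSketch(Optimizer):
    """Illustrative optimizer combining the two principles from the abstract:
    (1) adjustment suppression: an AdaGrad-style accumulator shrinks the
        effective learning rate of frequently updated weights, and
    (2) sparsity promotion: only weights whose gradient magnitude exceeds a
        threshold are updated.
    Hyperparameter names (lr, p, threshold, eps) are assumptions for this
    sketch, not the paper's exact interface.
    """

    def __init__(self, params, lr=1e-2, p=0.5, threshold=1e-3, eps=1e-10):
        defaults = dict(lr=lr, p=p, threshold=threshold, eps=eps)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()

        for group in self.param_groups:
            lr, p = group["lr"], group["p"]
            thr, eps = group["threshold"], group["eps"]
            for param in group["params"]:
                if param.grad is None:
                    continue
                grad = param.grad

                # Principle 2: sparsity promotion -- mask out weights whose
                # loss gradient falls below the threshold.
                mask = grad.abs() > thr
                if not mask.any():
                    continue
                sparse_grad = grad * mask

                # Principle 1: adjustment suppression -- accumulate squared
                # gradients and divide the step by the accumulator raised to
                # the exponent p, so heavily updated weights move less.
                state = self.state[param]
                if "sum_sq" not in state:
                    state["sum_sq"] = torch.zeros_like(param)
                state["sum_sq"].add_(sparse_grad.pow(2))

                denom = state["sum_sq"].pow(p).add_(eps)
                param.addcdiv_(sparse_grad, denom, value=-lr)

        return loss
```

In this sketch, setting `p = 0.5` and `threshold = 0` recovers standard AdaGrad scaling, consistent with the abstract's framing of the first principle as a generalization of that algorithm; raising the threshold trades per-task plasticity for weights left free for future tasks.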
- Z. Chen and B. Liu, “Continual learning and catastrophic forgetting,” in Lifelong Machine Learning. Springer, 2018, pp. 55–75.
- F. Benzing, “Unifying regularisation methods for continual learning,” arXiv preprint arXiv:2006.06357, 2020.
- A. Aich, “Elastic weight consolidation (EWC): Nuts and bolts,” arXiv preprint arXiv:2105.04093, 2021.
- P. Kaushik, A. Gain, A. Kortylewski, and A. Yuille, “Understanding catastrophic forgetting and remembering in continual learning with optimal relevance mapping,” arXiv preprint arXiv:2102.11343, 2021.
- M. V. Kopanitsa, N. O. Afinowi, and S. G. Grant, “Recording long-term potentiation of synaptic transmission by three-dimensional multi-electrode arrays,” BMC Neuroscience, vol. 7, pp. 1–19, 2006.
- N. Perez-Nieves and D. Goodman, “Sparse spiking gradient descent,” Advances in Neural Information Processing Systems, vol. 34, pp. 11795–11808, 2021.
- J. Duchi, E. Hazan, and Y. Singer, “Adaptive subgradient methods for online learning and stochastic optimization,” Journal of Machine Learning Research, vol. 12, no. 7, 2011.
- Y.-C. Hsu, Y.-C. Liu, A. Ramasamy, and Z. Kira, “Re-evaluating continual learning scenarios: A categorization and case for strong baselines,” 2019.
- B. Wang, H. Zhang, Z. Ma, and W. Chen, “Convergence of AdaGrad for non-convex objectives: Simple proofs and relaxed assumptions,” in The Thirty-Sixth Annual Conference on Learning Theory. PMLR, 2023, pp. 161–190.