CLASSP: a Biologically-Inspired Approach to Continual Learning through Adjustment Suppression and Sparsity Promotion (2405.09637v2)

Published 29 Apr 2024 in cs.NE, cs.AI, and cs.LG

Abstract: This paper introduces a new biologically-inspired training method named Continual Learning through Adjustment Suppression and Sparsity Promotion (CLASSP). CLASSP is based on two main principles observed in neuroscience, particularly in the context of synaptic transmission and Long-Term Potentiation (LTP). The first principle is a decay rate over the weight adjustment, implemented as a generalization of the AdaGrad optimization algorithm: weights that have received many updates get lower learning rates, since they likely encode important information about previously seen data. On its own, however, this principle spreads updates diffusely throughout the model, because it favors updating weights that have not been updated before, whereas a sparse update distribution is preferred so that weights remain unassigned for future tasks. The second principle therefore introduces a threshold on the loss gradient: a weight is updated only if the loss gradient with respect to that weight exceeds the threshold, i.e., only weights with a significant impact on the current loss are updated. Both principles mirror phenomena observed in LTP, namely a threshold effect and a gradual saturation of potentiation. CLASSP is implemented as a Python/PyTorch class, making it applicable to any model. When compared with Elastic Weight Consolidation (EWC) on computer vision and sentiment analysis datasets, CLASSP demonstrates superior performance in terms of accuracy and memory footprint.
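
The sketch below is a rough illustration of how the two principles described in the abstract could be combined in a PyTorch optimizer: an AdaGrad-style accumulator that suppresses updates to frequently-adjusted weights, plus a gradient-magnitude threshold that keeps updates sparse. The class name `CLASSPSketch`, the hyperparameters (`lr`, `threshold`, `eps`), and the exact decay form are assumptions for illustration; this is not the authors' released implementation.

```python
import torch
from torch.optim import Optimizer


class CLASSPSketch(Optimizer):
    """Illustrative sketch of the two CLASSP principles:
    (1) adjustment suppression via an AdaGrad-style per-weight decay, and
    (2) sparsity promotion via a threshold on the loss gradient.
    Hyperparameter names and the exact update rule are assumptions.
    """

    def __init__(self, params, lr=1e-2, threshold=1e-3, eps=1e-10):
        defaults = dict(lr=lr, threshold=threshold, eps=eps)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()

        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                grad = p.grad
                state = self.state[p]
                if len(state) == 0:
                    # Accumulated squared gradients, as in AdaGrad.
                    state["sum_sq"] = torch.zeros_like(p)

                # Principle 2: update only weights whose gradient magnitude
                # exceeds the threshold (sparse update distribution).
                mask = grad.abs() > group["threshold"]
                if not mask.any():
                    continue

                # Principle 1: accumulate squared gradients so that weights
                # with many past updates receive smaller effective steps.
                state["sum_sq"].add_(grad.pow(2) * mask)
                denom = state["sum_sq"].sqrt().add_(group["eps"])
                p.addcdiv_(grad * mask, denom, value=-group["lr"])

        return loss
```

Under these assumptions, the optimizer is a drop-in replacement for any `torch.optim` optimizer, e.g. `opt = CLASSPSketch(model.parameters(), lr=1e-2, threshold=1e-3)`.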

References (9)
  1. Z. Chen and B. Liu, “Continual learning and catastrophic forgetting,” in Lifelong Machine Learning. Springer, 2018, pp. 55–75.
  2. F. Benzing, “Unifying regularisation methods for continual learning,” arXiv preprint arXiv:2006.06357, 2020.
  3. A. Aich, “Elastic weight consolidation (EWC): Nuts and bolts,” arXiv preprint arXiv:2105.04093, 2021.
  4. P. Kaushik, A. Gain, A. Kortylewski, and A. Yuille, “Understanding catastrophic forgetting and remembering in continual learning with optimal relevance mapping,” arXiv preprint arXiv:2102.11343, 2021.
  5. M. V. Kopanitsa, N. O. Afinowi, and S. G. Grant, “Recording long-term potentiation of synaptic transmission by three-dimensional multi-electrode arrays,” BMC Neuroscience, vol. 7, pp. 1–19, 2006.
  6. N. Perez-Nieves and D. Goodman, “Sparse spiking gradient descent,” Advances in Neural Information Processing Systems, vol. 34, pp. 11795–11808, 2021.
  7. J. Duchi, E. Hazan, and Y. Singer, “Adaptive subgradient methods for online learning and stochastic optimization,” Journal of Machine Learning Research, vol. 12, no. 7, 2011.
  8. Y.-C. Hsu, Y.-C. Liu, A. Ramasamy, and Z. Kira, “Re-evaluating continual learning scenarios: A categorization and case for strong baselines,” 2019.
  9. B. Wang, H. Zhang, Z. Ma, and W. Chen, “Convergence of AdaGrad for non-convex objectives: Simple proofs and relaxed assumptions,” in The Thirty Sixth Annual Conference on Learning Theory. PMLR, 2023, pp. 161–190.
