
Unveiling the Tapestry: the Interplay of Generalization and Forgetting in Continual Learning (2211.11174v6)

Published 21 Nov 2022 in cs.CV

Abstract: In AI, generalization refers to a model's ability to perform well on out-of-distribution data related to a given task, beyond the data it was trained on. To excel, an AI agent must also possess the capability of continual learning, whereby it incrementally learns a sequence of tasks without forgetting the knowledge acquired to solve earlier ones. Intuitively, generalization within a task allows the model to learn underlying features that readily transfer to novel tasks, facilitating quicker learning and enhanced performance on subsequent tasks in a continual learning setting. Conversely, continual learning methods often include mechanisms to mitigate catastrophic forgetting, ensuring that knowledge from earlier tasks is retained; this preservation of knowledge in turn aids generalization on the ongoing task. Despite the intuitive appeal of this interplay, the literatures on continual learning and generalization have proceeded separately. As a preliminary effort to bridge the two fields, we first present empirical evidence that each ability has a positive effect on the other. Building on this finding, we introduce a simple and effective technique for continual learning, Shape-Texture Consistency Regularization (STCR). STCR learns both shape and texture representations for each task, thereby enhancing generalization and mitigating forgetting. Extensive experiments validate that STCR can be seamlessly integrated with existing continual learning methods and surpasses, by a large margin, both those methods in isolation and their combinations with established generalization techniques. Our data and source code will be made publicly available upon publication.
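The abstract describes STCR only at a high level: learn both shape and texture representations for each task and keep them consistent. Since the paper's code has not yet been released, the following is a minimal, hypothetical PyTorch sketch of what such a consistency term could look like. The texture-destroying view (grayscale plus Gaussian blur), the KL-divergence form of the consistency loss, and the weight `lam` are all illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F
from torchvision import transforms

# Hypothetical texture-perturbing view: grayscale + blur removes most
# texture cues while preserving object shape (an assumption, not the
# paper's exact augmentation).
texture_perturb = transforms.Compose([
    transforms.RandomGrayscale(p=1.0),
    transforms.GaussianBlur(kernel_size=5, sigma=(1.0, 2.0)),
])

def stcr_loss(model, x, y, lam=1.0):
    """Cross-entropy on the original view plus a consistency term that
    pulls shape-only predictions toward the full (shape + texture) ones."""
    logits = model(x)                          # shape + texture cues
    logits_shape = model(texture_perturb(x))   # mostly shape cues
    ce = F.cross_entropy(logits, y)
    consistency = F.kl_div(
        F.log_softmax(logits_shape, dim=1),
        F.softmax(logits, dim=1).detach(),     # stop-gradient on the target
        reduction="batchmean",
    )
    return ce + lam * consistency

# Usage inside any continual learning method's training step:
#   loss = stcr_loss(model, images, labels, lam=0.5)
```

Keeping the shape branch consistent with the full prediction encourages features that survive texture shifts, which is one plausible reading of how learning both representations would improve out-of-distribution generalization and, per the abstract, thereby reduce forgetting.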

