Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning (2003.05856v3)

Published 12 Mar 2020 in cs.AI and cs.LG

Abstract: Continual learning studies agents that learn from streams of tasks without forgetting previous ones while adapting to new ones. Two recent continual-learning scenarios have opened new avenues of research. In meta-continual learning, the model is pre-trained to minimize catastrophic forgetting of previous tasks. In continual-meta learning, the aim is to train agents for faster remembering of previous tasks through adaptation. In their original formulations, both methods have limitations. We stand on their shoulders to propose a more general scenario, OSAKA, where an agent must quickly solve new (out-of-distribution) tasks, while also requiring fast remembering. We show that current continual learning, meta-learning, meta-continual learning, and continual-meta learning techniques fail in this new scenario. We propose Continual-MAML, an online extension of the popular MAML algorithm, as a strong baseline for this scenario. We empirically show that Continual-MAML is better suited to the new scenario than the aforementioned methodologies, including standard continual learning and meta-learning approaches.
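
The abstract describes Continual-MAML only at a high level: fast (inner-loop) adaptation to the current task combined with slow (outer-loop) meta-updates on a non-stationary stream. The snippet below is a hypothetical, minimal sketch of that idea in PyTorch; the model, learning rates, the loss-spike heuristic for detecting task boundaries, and the first-order (Reptile-style) outer update are all illustrative assumptions, not the authors' reference implementation.

```python
# Hypothetical sketch of an online MAML-style update on a non-stationary data
# stream, in the spirit of the Continual-MAML baseline described above. The
# loss-spike threshold and the Reptile-style outer update are assumptions made
# for illustration, not the paper's exact algorithm.
import copy
import torch
import torch.nn.functional as F

def online_maml_step(slow_model, fast_model, batch, inner_lr=0.01,
                     outer_lr=0.1, shift_threshold=2.0):
    """One streaming step: detect a task shift, adapt fast weights, update slow weights."""
    x, y = batch
    loss = F.cross_entropy(fast_model(x), y)

    # Heuristic task-boundary detection: a sudden loss spike suggests the task
    # changed, so the fast weights are reset to the slow (meta-learned) weights.
    if loss.item() > shift_threshold:
        fast_model.load_state_dict(slow_model.state_dict())
        loss = F.cross_entropy(fast_model(x), y)

    # Inner loop: one SGD step on the fast weights (fast adaptation).
    grads = torch.autograd.grad(loss, list(fast_model.parameters()))
    with torch.no_grad():
        for p, g in zip(fast_model.parameters(), grads):
            p -= inner_lr * g

    # Outer loop: move the slow weights toward the adapted fast weights,
    # a first-order stand-in for the MAML outer update.
    with torch.no_grad():
        for slow, fast in zip(slow_model.parameters(), fast_model.parameters()):
            slow += outer_lr * (fast - slow)

    return loss.item()

# Usage sketch: the fast learner starts as a copy of the meta-learned model and
# is updated online as (x, y) batches stream in.
# fast_model = copy.deepcopy(slow_model)
# for batch in stream:
#     online_maml_step(slow_model, fast_model, batch)
```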

Authors (11)
  1. Massimo Caccia (28 papers)
  2. Oleksiy Ostapenko (10 papers)
  3. Fabrice Normandin (4 papers)
  4. Min Lin (96 papers)
  5. Lucas Caccia (22 papers)
  6. Issam Laradji (37 papers)
  7. Irina Rish (85 papers)
  8. Alexandre Lacoste (42 papers)
  9. David Vazquez (73 papers)
  10. Laurent Charlin (51 papers)
  11. Pau Rodriguez (35 papers)
Citations (65)
