Learning to Continually Learn Rapidly from Few and Noisy Data (2103.04066v1)

Published 6 Mar 2021 in cs.LG

Abstract: Neural networks suffer from catastrophic forgetting and are unable to sequentially learn new tasks without guaranteed stationarity in the data distribution. Continual learning can be achieved via replay -- by concurrently training on externally stored old data while learning a new task. However, replay becomes less effective when each past task is allocated less memory. To overcome this difficulty, we supplemented the replay mechanism with meta-learning for rapid knowledge acquisition. By employing a meta-learner that learns a learning rate per parameter per past task, we found that base learners produced strong results when less memory was available. Additionally, our approach inherited several meta-learning advantages for continual learning: it demonstrated strong robustness when continually learning in the presence of noise and brought base learners to higher accuracy in fewer updates.
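The core mechanism described in the abstract, replay combined with meta-learned per-parameter learning rates, can be illustrated with a short sketch. The PyTorch snippet below is an assumption-laden illustration rather than the authors' implementation: it collapses the paper's per-task learning rates into a single set of per-parameter rates, uses a first-order meta-gradient for a single inner step, and the toy data and the names `base`, `alphas`, and `meta_opt` are invented for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Base learner and one learnable learning rate ("alpha") per parameter entry.
base = nn.Linear(10, 2)
params = list(base.parameters())
alphas = [torch.full_like(p, 1e-2, requires_grad=True) for p in params]
meta_opt = torch.optim.Adam(alphas, lr=1e-3)

# Toy stand-ins for a small replay buffer of past-task data and the new task.
replay_x, replay_y = torch.randn(64, 10), torch.randint(0, 2, (64,))
new_x, new_y = torch.randn(64, 10), torch.randint(0, 2, (64,))

for step in range(200):
    # Inner update: train on new-task data mixed with replayed old data,
    # scaling each parameter's gradient by its own learned rate.
    x, y = torch.cat([new_x, replay_x]), torch.cat([new_y, replay_y])
    inner_loss = F.cross_entropy(base(x), y)
    inner_grads = torch.autograd.grad(inner_loss, params)
    with torch.no_grad():
        for p, a, g in zip(params, alphas, inner_grads):
            p -= a * g                      # theta <- theta - alpha * grad

    # Meta update (first-order): adjust the alphas so the updated learner
    # still fits the replayed past-task data, i.e. resists forgetting.
    meta_loss = F.cross_entropy(base(replay_x), replay_y)
    outer_grads = torch.autograd.grad(meta_loss, params)
    meta_opt.zero_grad()
    for a, g_out, g_in in zip(alphas, outer_grads, inner_grads):
        a.grad = -g_out * g_in              # dL_meta/dalpha for one inner step
    meta_opt.step()
    with torch.no_grad():
        for a in alphas:
            a.clamp_(min=0.0)               # keep learned rates non-negative
```

In this sketch the meta objective evaluates the updated learner only on replayed data, so the per-parameter rates are nudged toward update directions that preserve past-task performance; the paper's full method maintains a separate set of such rates for each past task.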

Authors (6)
  1. Nicholas I-Hsien Kuo (9 papers)
  2. Mehrtash Harandi (107 papers)
  3. Nicolas Fourrier (4 papers)
  4. Christian Walder (30 papers)
  5. Gabriela Ferraro (7 papers)
  6. Hanna Suominen (17 papers)
Citations (4)