Learning to Continually Learn Rapidly from Few and Noisy Data (2103.04066v1)
Abstract: Neural networks suffer from catastrophic forgetting and are unable to learn new tasks sequentially without guaranteed stationarity in the data distribution. Continual learning can be achieved via replay -- by training concurrently on externally stored old data while learning a new task. However, replay becomes less effective when each past task is allocated less memory. To overcome this difficulty, we supplemented the replay mechanism with meta-learning for rapid knowledge acquisition. By employing a meta-learner that *learns a learning rate per parameter per past task*, we found that base learners produced strong results even when less memory was available. Additionally, our approach inherited several meta-learning advantages for continual learning: it demonstrated strong robustness when continually learning in the presence of noise, and it guided base learners to higher accuracy in fewer updates.
- Nicholas I-Hsien Kuo (9 papers)
- Mehrtash Harandi (108 papers)
- Nicolas Fourrier (4 papers)
- Christian Walder (30 papers)
- Gabriela Ferraro (7 papers)
- Hanna Suominen (17 papers)
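
The abstract combines two ingredients: a small per-task replay memory and a meta-learner that assigns a learning rate to each parameter for each past task. The following is a minimal sketch of those two ingredients only, not the authors' implementation; all names (`ReplayMemory`, `make_per_task_lrs`, `inner_step`), the base-learner loss, and the softplus constraint on the rates are assumptions made for illustration, and the outer meta-update of the learning rates is omitted.

```python
"""Illustrative sketch: replay plus meta-learned per-parameter, per-task
learning rates, loosely following the abstract. Hypothetical names only."""
import random
import torch
import torch.nn.functional as F


class ReplayMemory:
    """Tiny per-task memory; replay weakens as capacity_per_task shrinks."""

    def __init__(self, capacity_per_task):
        self.capacity = capacity_per_task
        self.buffers = {}                       # task_id -> list of (x, y)

    def add(self, task_id, xs, ys):
        buf = self.buffers.setdefault(task_id, [])
        for x, y in zip(xs, ys):
            if len(buf) < self.capacity:
                buf.append((x, y))
            else:                               # overwrite a random old slot
                buf[random.randrange(self.capacity)] = (x, y)

    def sample(self, task_id, k):
        batch = random.sample(self.buffers[task_id],
                              min(k, len(self.buffers[task_id])))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)


def make_per_task_lrs(model, task_ids, init=1e-2):
    """One learnable learning-rate tensor per parameter, per past task."""
    return {t: [torch.full_like(p, init, requires_grad=True)
                for p in model.parameters()]
            for t in task_ids}


def inner_step(model, task_lrs, x, y):
    """Base-learner update on replayed data using the meta-learned rates.

    The meta-update that trains task_lrs is not shown here.
    """
    loss = F.cross_entropy(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    with torch.no_grad():
        for p, g, lr in zip(model.parameters(), grads, task_lrs):
            p -= F.softplus(lr) * g             # keep each rate positive
    return loss.item()
```

In this sketch, a training loop would interleave updates on the new task's batches with `inner_step` calls on small batches drawn from `ReplayMemory.sample` for each past task, using that task's entry in the dictionary returned by `make_per_task_lrs`.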