
Regularization Shortcomings for Continual Learning

Published 6 Dec 2019 in cs.LG and stat.ML | arXiv:1912.03049v4

Abstract: Most machine learning algorithms assume that training data are independent and identically distributed (iid). When this assumption does not hold, performance degrades, leading to the well-known phenomenon of catastrophic forgetting. Algorithms that address it form the research field of Continual Learning. In this paper, we study regularization-based approaches to continual learning and show that they cannot learn to discriminate classes from different tasks in an elementary continual benchmark: the class-incremental scenario. We give a theoretical argument for this shortcoming and illustrate it with examples and experiments. Moreover, we show that it has important consequences for continual multi-task reinforcement learning and for pre-trained models used in continual learning. We believe that highlighting and understanding the shortcomings of regularization strategies will help us use them more effectively.
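For context, the regularization-based approaches the abstract refers to (e.g., Elastic Weight Consolidation) add a penalty that anchors parameters deemed important for previous tasks while a new task is learned. The snippet below is a minimal PyTorch-style sketch of such a penalty, not the paper's own code; the function and variable names (`ewc_penalty`, `old_params`, `fisher`, `lam`) are illustrative assumptions.

```python
import torch


def ewc_penalty(model, old_params, fisher, lam=1.0):
    """EWC-style regularization sketch: penalize deviation from parameters
    learned on previous tasks, weighted by an estimate of their importance
    (e.g., the diagonal Fisher information). Names are illustrative only."""
    penalty = 0.0
    for name, param in model.named_parameters():
        if name in old_params:
            penalty = penalty + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return (lam / 2.0) * penalty


# When training on a new task, the total loss combines the task loss with
# this penalty. The penalty discourages forgetting of old parameters but,
# as the paper argues, does not by itself let the model discriminate
# classes from different tasks in the class-incremental scenario.
# total_loss = task_loss + ewc_penalty(model, old_params, fisher, lam=100.0)
```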

Citations (47)
