
Brain-inspired feature exaggeration in generative replay for continual learning

Published 26 Oct 2021 in cs.LG, cs.AI, and cs.NE | arXiv:2110.15056v2

Abstract: The catastrophic forgetting of previously learnt classes is one of the main obstacles to the successful development of a reliable and accurate generative continual learning model. When learning new classes, the internal representation of previously learnt ones can be overwritten, so the model's "memory" of earlier classes is lost over time. Recent developments in neuroscience have uncovered a mechanism by which the brain avoids its own form of memory interference: by applying a targeted exaggeration of the differences between the features of similar, yet competing, memories, the brain can more easily distinguish and recall them. This paper explores the application of such exaggeration via the repulsion of replayed samples belonging to competing classes. Through the development of a 'reconstruction repulsion' loss, it presents a new state-of-the-art performance on the classification of early classes on CIFAR-100 in the class-incremental learning setting.
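
The abstract only names the 'reconstruction repulsion' loss; its precise formulation is given in the paper itself. As an illustration of the general idea, the sketch below shows one plausible way a repulsion term between replayed reconstructions of competing classes could look in PyTorch. The function names, the hinge-with-margin form, and the weighting hyperparameter are assumptions made for illustration, not the authors' definition.

```python
import torch
import torch.nn.functional as F

def reconstruction_repulsion(recon_a: torch.Tensor,
                             recon_b: torch.Tensor,
                             margin: float = 1.0) -> torch.Tensor:
    """Hypothetical repulsion term (not the paper's exact loss).

    recon_a, recon_b: replayed reconstructions of samples from two
    competing (easily confused) classes, shape (batch, ...).
    Pairs closer than `margin` in Euclidean distance are penalized,
    pushing the generator to exaggerate between-class differences.
    """
    # Flatten each sample and measure the distance between paired
    # reconstructions of the two competing classes.
    dist = F.pairwise_distance(recon_a.flatten(1), recon_b.flatten(1))
    # Hinge-style penalty: zero once pairs are at least `margin` apart.
    return F.relu(margin - dist).mean()

# Example usage: add the repulsion term to an ordinary replay
# reconstruction objective. `lam` (assumed hyperparameter) trades off
# reconstruction fidelity against between-class repulsion.
def replay_loss(recon, target, recon_competing, lam: float = 0.1):
    fidelity = F.mse_loss(recon, target)
    repulsion = reconstruction_repulsion(recon, recon_competing)
    return fidelity + lam * repulsion
```

In a sketch like this, the repulsion term would be added to the generative model's training objective during replay, so that reconstructions of classes the classifier tends to confuse are driven apart rather than allowed to drift together.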
