
Rehearsal revealed: The limits and merits of revisiting samples in continual learning (2104.07446v1)

Published 15 Apr 2021 in cs.LG and cs.CV

Abstract: Learning from non-stationary data streams and overcoming catastrophic forgetting still pose a serious challenge for machine learning research. Rather than aiming to improve the state of the art, in this work we provide insight into the limits and merits of rehearsal, one of continual learning's most established methods. We hypothesize that models trained sequentially with rehearsal tend to stay in the same low-loss region after a task has finished, but are at risk of overfitting on its sample memory, hence harming generalization. We provide both conceptual and strong empirical evidence on three benchmarks for both behaviors, bringing novel insights into the dynamics of rehearsal and continual learning in general. Finally, we interpret important continual learning works in the light of our findings, allowing for a deeper understanding of their successes.
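To make the setting concrete: rehearsal (also called experience replay) keeps a small memory of samples from past tasks and mixes them into each training batch on new tasks. The sketch below is a minimal, hypothetical PyTorch implementation of this idea, using a reservoir-sampled buffer; it is not the authors' code, and the class and function names (`RehearsalBuffer`, `train_step`) are illustrative.

```python
import random
import torch
import torch.nn.functional as F

class RehearsalBuffer:
    """Fixed-size episodic memory filled by reservoir sampling,
    so every example seen so far has equal probability of being stored."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []   # list of (x, y) example pairs
        self.seen = 0    # total number of examples observed in the stream

    def add(self, x, y):
        # x, y are batched tensors; insert each example individually.
        for xi, yi in zip(x, y):
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append((xi, yi))
            else:
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.data[j] = (xi, yi)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def train_step(model, optimizer, x, y, buffer, replay_size=32):
    """One update on the current-task batch plus a replayed memory batch."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    if buffer.data:  # replay only once the memory holds something
        xr, yr = buffer.sample(replay_size)
        loss = loss + F.cross_entropy(model(xr), yr)
    loss.backward()
    optimizer.step()
    buffer.add(x.detach(), y.detach())
    return loss.item()
```

Because the memory is tiny relative to the full stream, the same stored samples are revisited many times across later tasks; this repeated reuse is precisely the overfitting risk the abstract highlights, even as replay keeps the model anchored in a low-loss region for old tasks.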

Authors (3)
  1. Eli Verwimp
  2. Matthias De Lange
  3. Tinne Tuytelaars
Citations (90)
