Practical Recommendations for Replay-based Continual Learning Methods (2203.10317v1)

Published 19 Mar 2022 in cs.LG

Abstract: Continual Learning requires a model to learn from a stream of dynamic, non-stationary data without forgetting previous knowledge. Several approaches have been developed in the literature to tackle the Continual Learning challenge. Among them, Replay approaches have empirically proven to be the most effective. Replay operates by saving some samples in memory, which are then used to rehearse knowledge during training on subsequent tasks. However, an extensive comparison and deeper understanding of the subtleties of different replay implementations is still missing in the literature. The aim of this work is to compare and analyze existing replay-based strategies and provide practical recommendations for developing efficient, effective, and generally applicable replay-based strategies. In particular, we investigate the role of memory size, compare different weighting policies, and discuss the impact of data augmentation, which enables better performance at smaller memory sizes.
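The paper's specific replay strategies are not reproduced here, but the core mechanism the abstract describes — keeping a bounded memory of past samples and rehearsing them during later tasks — can be sketched with a reservoir-sampling buffer. This is an illustrative sketch only; the class and method names are assumptions, not from the paper:

```python
import random


class ReplayBuffer:
    """Bounded replay memory using reservoir sampling (illustrative sketch).

    Reservoir sampling keeps a uniform random subset of the stream seen so
    far while never exceeding the fixed memory budget, which matches the
    memory-size constraint studied in replay-based Continual Learning.
    """

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, sample):
        """Offer one stream sample to the memory."""
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            # Replace a stored sample with probability capacity / seen,
            # keeping the memory a uniform sample of the whole stream.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = sample

    def sample_batch(self, k):
        """Draw a rehearsal mini-batch to mix with current-task data."""
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))


# Stream 100 samples (e.g. spanning several tasks) through a small memory.
buf = ReplayBuffer(capacity=10)
for x in range(100):
    buf.add(x)
rehearsal = buf.sample_batch(4)
```

During training on a new task, a rehearsal batch like `rehearsal` would typically be concatenated with the current mini-batch so the loss also covers past data; the weighting policies the paper compares correspond to how these two parts of the batch are balanced.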

Authors (5)
  1. Gabriele Merlin (3 papers)
  2. Vincenzo Lomonaco (58 papers)
  3. Andrea Cossu (25 papers)
  4. Antonio Carta (29 papers)
  5. Davide Bacciu (107 papers)
Citations (22)
