Relational Experience Replay: Continual Learning by Adaptively Tuning Task-wise Relationship (2112.15402v3)

Published 31 Dec 2021 in cs.LG and cs.CV

Abstract: Continual learning is a promising machine learning paradigm to learn new tasks while retaining previously learned knowledge over streaming training data. Till now, rehearsal-based methods, keeping a small part of data from old tasks as a memory buffer, have shown good performance in mitigating catastrophic forgetting for previously learned knowledge. However, most of these methods typically treat each new task equally, which may not adequately consider the relationship or similarity between old and new tasks. Furthermore, these methods commonly neglect sample importance in the continual training process and result in sub-optimal performance on certain tasks. To address this challenging problem, we propose Relational Experience Replay (RER), a bi-level learning framework, to adaptively tune task-wise relationships and sample importance within each task to achieve a better 'stability' and 'plasticity' trade-off. As such, the proposed method is capable of accumulating new knowledge while consolidating previously learned old knowledge during continual learning. Extensive experiments conducted on three publicly available datasets (i.e., CIFAR-10, CIFAR-100, and Tiny ImageNet) show that the proposed method can consistently improve the performance of all baselines and surpass current state-of-the-art methods.
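
The abstract describes a rehearsal step in which losses on new-task and buffered old-task samples are combined with adaptively learned weights. Below is a minimal, hypothetical PyTorch sketch of that idea; the names (`WeightNet`, `rer_step`), the weight-network architecture, and the detached inner step are illustrative assumptions, not the paper's actual implementation. A faithful bi-level setup would additionally update the weight network in an outer loop on a small held-out meta set.

```python
# Minimal sketch (assumptions noted above) of rehearsal with
# adaptively weighted old-task and new-task sample losses.
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeightNet(nn.Module):
    """Hypothetical meta-network mapping per-sample losses to importance weights."""

    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1), nn.Sigmoid()
        )

    def forward(self, losses):
        # losses: shape (batch,) -> weights in (0, 1): shape (batch,)
        return self.net(losses.unsqueeze(1)).squeeze(1)


def rer_step(model, weight_net, new_batch, buffer_batch, opt):
    """One rehearsal step mixing new-task and memory-buffer losses."""
    x_new, y_new = new_batch
    x_old, y_old = buffer_batch

    # Per-sample losses (reduction="none" keeps the batch dimension).
    loss_new = F.cross_entropy(model(x_new), y_new, reduction="none")
    loss_old = F.cross_entropy(model(x_old), y_old, reduction="none")

    # Sample-wise weights from the weight network; detached here for the
    # inner step. In a true bi-level scheme, weight_net would be trained
    # via a second-order (look-ahead) update on a meta set.
    with torch.no_grad():
        w_new = weight_net(loss_new)
        w_old = weight_net(loss_old)

    loss = (w_new * loss_new).mean() + (w_old * loss_old).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The design choice to weight samples rather than whole tasks is what lets such a scheme express task-wise relationships implicitly: samples from an old task that conflict with the new task receive different weights than samples that reinforce shared structure.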

Authors (7)
  1. Quanziang Wang (8 papers)
  2. Renzhen Wang (13 papers)
  3. Yuexiang Li (50 papers)
  4. Dong Wei (70 papers)
  5. Kai Ma (126 papers)
  6. Yefeng Zheng (197 papers)
  7. Deyu Meng (182 papers)
Citations (3)