Looking through the past: better knowledge retention for generative replay in continual learning (2309.10012v1)

Published 18 Sep 2023 in cs.LG, cs.AI, and cs.CV

Abstract: In this work, we improve generative replay in a continual learning setting so that it performs well in challenging scenarios. Current generative rehearsal methods are usually benchmarked on small and simple datasets, as they are not powerful enough to generate more complex data with a greater number of classes. We observe that in VAE-based generative replay, this can be attributed to the fact that the generated features are far from the original ones when mapped to the latent space. Therefore, we propose three modifications that allow the model to learn and generate complex data. More specifically, we incorporate distillation in latent space between the current and previous models to reduce feature drift. Additionally, a latent matching between the reconstructions and the original data is proposed to improve the alignment of generated features. Further, based on the observation that reconstructions are better at preserving knowledge, we add cycling of generations through the previously trained model to bring them closer to the original data. Our method outperforms other generative replay methods in various scenarios. Code available at https://github.com/valeriya-khan/looking-through-the-past.
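The first modification described in the abstract, latent-space distillation between the current and previous models, can be sketched as a penalty on the drift between the latent codes the two encoders produce for the same inputs. The minimal example below uses toy linear encoders and NumPy; the function names and architecture are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def encode(weights, x):
    """Toy linear encoder mapping inputs to the latent space."""
    return x @ weights

def latent_distillation_loss(w_current, w_previous, x):
    """Mean squared distance between current and (frozen) previous latent codes.

    Minimizing this term alongside the task loss discourages the current
    encoder's latent space from drifting away from the previous model's,
    which is the intuition behind the latent distillation described above.
    """
    z_cur = encode(w_current, x)
    z_prev = encode(w_previous, x)  # previous model is kept frozen
    return float(np.mean((z_cur - z_prev) ** 2))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))          # a small batch of inputs
w_prev = rng.normal(size=(4, 2))     # frozen previous encoder
w_cur = w_prev + 0.1 * rng.normal(size=(4, 2))  # slightly drifted current encoder

drift = latent_distillation_loss(w_cur, w_prev, x)
no_drift = latent_distillation_loss(w_prev, w_prev, x)
print(no_drift, drift)
```

When the two encoders are identical the loss is exactly zero, and it grows with the amount of parameter drift, so adding it to the training objective keeps replayed features close to where the previous model expects them.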

Authors (5)
  1. Valeriya Khan (2 papers)
  2. Sebastian Cygert (18 papers)
  3. Kamil Deja (27 papers)
  4. Tomasz Trzciński (116 papers)
  5. Bartłomiej Twardowski (37 papers)
Citations (5)