Generative Models from the perspective of Continual Learning (1812.09111v1)

Published 21 Dec 2018 in cs.LG, cs.AI, and cs.CV

Abstract: Which generative model is the most suitable for Continual Learning? This paper aims to evaluate and compare generative models on disjoint sequential image generation tasks. We investigate how several models learn and forget, considering various strategies: rehearsal, regularization, generative replay and fine-tuning. We use two quantitative metrics to estimate the generation quality and memory ability. We experiment with sequential tasks on three commonly used benchmarks for Continual Learning (MNIST, Fashion MNIST and CIFAR10). We found that among all models, the original GAN performs best, and among Continual Learning strategies, generative replay outperforms all other methods. Although we found satisfactory combinations on MNIST and Fashion MNIST, training generative models sequentially on CIFAR10 is particularly unstable and remains a challenge. Our code is available online at https://github.com/TLESORT/Generative_Continual_Learning.
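The abstract's headline finding is that generative replay beats the other continual learning strategies: before training on a new task, samples drawn from the previous generator are mixed with the new task's real data so that earlier tasks are not overwritten. Below is a minimal sketch of that loop, assuming hypothetical helpers `train_generator` and `sample_from` that stand in for the actual GAN/VAE training and sampling code; the names are illustrative and not taken from the authors' repository.

```python
import random

def generative_replay(tasks, train_generator, sample_from, replay_ratio=1.0):
    """Sequentially train a generative model on a list of tasks.

    Before each new task, samples from the previous generator are mixed with
    the new task's real data, so earlier tasks are "replayed" rather than
    forgotten (a sketch of the strategy, not the authors' implementation).
    """
    generator = None
    for task_data in tasks:
        if generator is None:
            mixed = list(task_data)                # first task: real data only
        else:
            n_replay = int(len(task_data) * replay_ratio)
            mixed = list(task_data) + sample_from(generator, n_replay)
        random.shuffle(mixed)
        generator = train_generator(mixed)         # retrain on real + replayed data
    return generator


# Toy usage with stand-in "generators" that simply memorise their training set.
if __name__ == "__main__":
    tasks = [[("task0", i) for i in range(100)],
             [("task1", i) for i in range(100)]]
    train = lambda data: list(data)                # "generator" = stored data
    sample = lambda gen, n: random.sample(gen, n)  # "sampling" = re-drawing stored data
    final = generative_replay(tasks, train, sample)
    print(sum(1 for label, _ in final if label == "task0"),
          "task0 examples are still represented after training on task1")
```

In the paper's setting the replayed samples come from a frozen copy of the previously trained generator (e.g. a GAN), which is what makes the strategy memory-free compared with rehearsal on stored real data.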

Authors (5)
  1. Timothée Lesort (26 papers)
  2. Hugo Caselles-Dupré (19 papers)
  3. Michael Garcia-Ortiz (8 papers)
  4. Andrei Stoian (9 papers)
  5. David Filliat (37 papers)
Citations (147)
