Generative replay with feedback connections as a general strategy for continual learning (1809.10635v2)

Published 27 Sep 2018 in cs.LG, cs.AI, cs.CV, and stat.ML

Abstract: A major obstacle to developing artificial intelligence applications capable of true lifelong learning is that artificial neural networks quickly or catastrophically forget previously learned tasks when trained on a new one. Numerous methods for alleviating catastrophic forgetting are currently being proposed, but differences in evaluation protocols make it difficult to directly compare their performance. To enable more meaningful comparisons, here we identified three distinct scenarios for continual learning based on whether task identity is known and, if it is not, whether it needs to be inferred. Performing the split and permuted MNIST task protocols according to each of these scenarios, we found that regularization-based approaches (e.g., elastic weight consolidation) failed when task identity needed to be inferred. In contrast, generative replay combined with distillation (i.e., using class probabilities as "soft targets") achieved superior performance in all three scenarios. Addressing the issue of efficiency, we reduced the computational cost of generative replay by integrating the generative model into the main model by equipping it with generative feedback or backward connections. This Replay-through-Feedback approach substantially shortened training time with no or negligible loss in performance. We believe this to be an important first step towards making the powerful technique of generative replay scalable to real-world continual learning applications.

Generative Replay with Feedback Connections for Continual Learning: An Analysis

The paper "Generative replay with feedback connections as a general strategy for continual learning" by Gido M. van de Ven and Andreas S. Tolias explores a strategy for addressing the challenge of catastrophic forgetting in artificial neural networks (ANNs). Continual learning, essential for lifelong AI, suffers from catastrophic forgetting when models lose previously acquired knowledge upon training for new tasks. This paper provides a comprehensive evaluation of generative replay with feedback connections, presents a classification of continual learning scenarios, and introduces an efficient implementation approach termed Replay-through-Feedback (RtF).

Key Contributions

  1. Continual Learning Scenarios: The paper categorizes continual learning into three scenarios: Task-Incremental Learning (Task-IL), Domain-Incremental Learning (Domain-IL), and Class-Incremental Learning (Class-IL). The scenarios differ in whether task identity is provided at test time and, if it is not, whether it must be inferred (see the first sketch after this list). This classification enables more consistent evaluation and comparison of continual learning methods.
  2. Generative Replay with Distillation: This approach combines generative replay, in which a separate generative model synthesizes pseudo-exemplars of past tasks, with distillation, in which the previous model's class probabilities serve as "soft targets" for the replayed data (the second sketch below illustrates the combined loss). It excelled across all three scenarios, most notably the challenging Class-IL setting, where techniques such as Elastic Weight Consolidation (EWC) and Synaptic Intelligence (SI) falter.
  3. Replay-through-Feedback (RtF): A novel integration of generative replay into the main model that reduces computational cost. In RtF, generative feedback (or backward) connections within the model generate pseudo-data of past tasks in situ, eliminating the separate generative network that generative replay traditionally requires (see the third sketch below).
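
To make the scenario distinction concrete, the first sketch shows how predictions for the same split MNIST classifier would be read out under each scenario. This is an illustrative PyTorch-style snippet, not the authors' code; names such as `predict` and `classes_per_task` are assumptions.

```python
import torch

# Split MNIST: 5 tasks of 2 digit classes each (0/1, 2/3, ..., 8/9).
classes_per_task = 2

def predict(logits: torch.Tensor, scenario: str, task_id=None) -> torch.Tensor:
    """Turn raw network outputs into class predictions under each scenario."""
    if scenario == "task-il":
        # Task identity is given at test time: only the two output units
        # belonging to that task compete (a "multi-headed" output layer).
        lo = task_id * classes_per_task
        return logits[:, lo:lo + classes_per_task].argmax(dim=1) + lo
    if scenario == "domain-il":
        # Task identity is unavailable but need not be inferred: the network
        # has a single shared head, so `logits` has only two columns
        # ("first" vs "second" class of whichever task the input came from).
        return logits.argmax(dim=1)
    if scenario == "class-il":
        # Task identity must be inferred: all ten digit classes compete.
        return logits.argmax(dim=1)
    raise ValueError(f"unknown scenario: {scenario}")
```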
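
The replay mechanism in contribution 2 can be summarized in a single training step. The second sketch assumes frozen copies of the model and generator saved after the preceding task; the temperature `T = 2` and the equal weighting of old and new losses are common distillation defaults, not necessarily the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def replay_step(model, prev_model, generator, x_new, y_new, optimizer, T=2.0):
    """One step of generative replay with distillation (illustrative)."""
    # Current task: ordinary hard-target cross-entropy.
    loss_new = F.cross_entropy(model(x_new), y_new)

    # Replay: sample pseudo-inputs for earlier tasks and label them with
    # the previous model's softened class probabilities ("soft targets").
    with torch.no_grad():
        x_replay = generator.sample(x_new.size(0))
        soft_targets = F.softmax(prev_model(x_replay) / T, dim=1)

    log_probs = F.log_softmax(model(x_replay) / T, dim=1)
    # KL divergence to the soft targets; T**2 keeps gradient scale comparable.
    loss_replay = F.kl_div(log_probs, soft_targets, reduction="batchmean") * T**2

    loss = 0.5 * loss_new + 0.5 * loss_replay  # simple 50/50 weighting
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```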
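
Contribution 3, RtF, folds the generator into the classifier itself: a variational-autoencoder-style model whose encoder doubles as the classifier's feature extractor and whose decoder provides the generative feedback connections. The third sketch outlines that shared architecture; layer sizes and names are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class RtFNet(nn.Module):
    """Classifier with generative feedback connections (VAE-style).
    One model both classifies and generates replay, so no separate
    generative network has to be trained alongside it."""

    def __init__(self, in_dim=784, hid=400, z_dim=100, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.to_mu = nn.Linear(hid, z_dim)         # latent mean
        self.to_logvar = nn.Linear(hid, z_dim)     # latent log-variance
        self.classify = nn.Linear(hid, n_classes)  # head on shared features
        self.decoder = nn.Sequential(              # feedback/backward connections
            nn.Linear(z_dim, hid), nn.ReLU(),
            nn.Linear(hid, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.classify(h), self.decoder(z), mu, logvar

    @torch.no_grad()
    def sample(self, n):
        """Generate replay inputs by decoding latent samples from the prior."""
        z = torch.randn(n, self.to_mu.out_features)
        return self.decoder(z)
```

Training such a model would combine the classification loss with the usual VAE reconstruction and KL terms, and replay would come from the model's own `sample` method instead of a separately trained generator.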

Numerical Results

The paper presents a rigorous comparison of methods on the split and permuted MNIST task protocols. Regularization-based approaches such as EWC and SI break down in the Class-IL scenario, where task identity must be inferred. Generative replay strategies, by contrast, remain robust: on split MNIST under Class-IL they achieve accuracies above 90%, far surpassing the sub-20% accuracies of the regularization-based methods.

Implications and Future Work

The robustness of generative replay combined with distillation suggests it is a viable framework for scalable lifelong learning, though its application to more complex datasets and tasks remains an open question. The RtF technique, in turn, offers a promising way to reduce training time while maintaining performance, which matters for real-time and resource-constrained environments.

Moving forward, the paper's findings imply that integrating latent variable models and improving the quality of generated pseudo-data could yield further gains. As the authors note, continued progress in generative modeling should make these methods increasingly feasible in more complex input spaces.

In conclusion, this paper contributes significantly to continual learning research by offering a well-rounded evaluation of current methodologies and suggesting scalable implementations like RtF. The framework of scenarios it proposes will likely serve as a valuable guide for future studies in the domain, pointing towards a consolidated and systematic approach to tackling catastrophic forgetting.

Authors (2)
  1. Gido M. van de Ven (17 papers)
  2. Andreas S. Tolias (20 papers)
Citations (216)