Adapt & Align: Continual Learning with Generative Models Latent Space Alignment (2312.13699v1)

Published 21 Dec 2023 in cs.LG

Abstract: In this work, we introduce Adapt & Align, a method for continual learning of neural networks by aligning latent representations in generative models. Neural networks suffer from an abrupt loss of performance when retrained with additional training data from different distributions. At the same time, training with additional data without access to the previous examples rarely improves the model's performance. In this work, we propose a new method that mitigates those problems by employing generative models and splitting the process of their update into two parts. In the first phase, we train a local generative model using only data from a new task. In the second phase, we consolidate latent representations from the local model with a global one that encodes knowledge of all past experiences. We introduce our approach with Variational Autoencoders and Generative Adversarial Networks. Moreover, we show how we can use those generative models as a general method for continual knowledge consolidation that can be used in downstream tasks such as classification.
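The abstract describes a two-phase update: first fit a local generative model on the new task only, then consolidate its latent representations into a global model that holds all past knowledge. The sketch below illustrates that idea with a toy VAE in PyTorch; the architecture sizes, loss weights, linear alignment map, and replay scheme are illustrative assumptions, not the authors' exact design.

```python
# Hedged sketch of the two-phase continual-learning update, assuming a simple
# VAE, a linear latent-alignment map, and generative replay from the previous
# global model. Not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, z_dim), nn.Linear(256, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    rec = F.mse_loss(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

def train_local(local_vae, task_loader, epochs=1, lr=1e-3):
    """Phase 1: fit a local VAE on the current task's data only."""
    opt = torch.optim.Adam(local_vae.parameters(), lr=lr)
    for _ in range(epochs):
        for x in task_loader:
            recon, mu, logvar = local_vae(x)
            loss = vae_loss(x, recon, mu, logvar)
            opt.zero_grad(); loss.backward(); opt.step()

def consolidate(global_vae, local_vae, task_loader, z_dim=16,
                n_replay=256, epochs=1, lr=1e-3):
    """Phase 2: merge the local model into the global one.

    The global decoder is trained on (a) current-task latents taken from the
    frozen local encoder and passed through a small alignment map, and
    (b) replay samples generated by a snapshot of the previous global model,
    so knowledge of earlier tasks is preserved.
    """
    old_global = VAE(z_dim=z_dim)
    old_global.load_state_dict(global_vae.state_dict())  # frozen snapshot for replay
    align = nn.Linear(z_dim, z_dim)                       # assumed latent alignment map
    params = list(global_vae.parameters()) + list(align.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for x in task_loader:
            # New task: encode with the frozen local model, align, decode globally.
            with torch.no_grad():
                mu, _ = local_vae.encode(x)
            loss_new = F.mse_loss(global_vae.dec(align(mu)), x, reduction="sum")
            # Past tasks: generative replay from the previous global snapshot.
            with torch.no_grad():
                z_old = torch.randn(n_replay, z_dim)
                x_old = old_global.dec(z_old)
            loss_old = F.mse_loss(global_vae.dec(z_old), x_old, reduction="sum")
            loss = loss_new + loss_old
            opt.zero_grad(); loss.backward(); opt.step()
```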

Authors (4)
  1. Kamil Deja (27 papers)
  2. Bartosz Cywiński (5 papers)
  3. Jan Rybarczyk (1 paper)
  4. Tomasz Trzciński (116 papers)
