Continual Pre-Training Mitigates Forgetting in Language and Vision (2205.09357v1)

Published 19 May 2022 in cs.LG and cs.AI

Abstract: Pre-trained models are nowadays a fundamental component of machine learning research. In continual learning, they are commonly used to initialize the model before training on the stream of non-stationary data. However, pre-training is rarely applied during continual learning. We formalize and investigate the characteristics of the continual pre-training scenario in both language and vision environments, where a model is continually pre-trained on a stream of incoming data and only later fine-tuned to different downstream tasks. We show that continually pre-trained models are robust against catastrophic forgetting, and we provide strong empirical evidence that self-supervised pre-training is more effective in retaining previous knowledge than supervised protocols. Code is provided at https://github.com/AndreaCossu/continual-pretraining-nlp-vision.
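
To make the scenario concrete, the following is a minimal sketch of the continual pre-training protocol described in the abstract: an encoder is continually pre-trained with a self-supervised objective on a stream of experiences, and after each experience a copy is fine-tuned on a fixed downstream task to probe forgetting. This is an illustrative toy example with made-up data and a generic masked-reconstruction objective, not the authors' implementation; see the linked repository for the actual code.

```python
# Sketch of the continual pre-training scenario (assumed toy setup, not the paper's code).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical data: each experience is unlabeled data for pre-training;
# the downstream task is a small, fixed labeled classification problem.
stream = [torch.randn(256, 32) for _ in range(3)]                    # pre-training stream
down_x, down_y = torch.randn(128, 32), torch.randint(0, 4, (128,))   # downstream task

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))

def pretrain(encoder, x, epochs=5):
    """Self-supervised step: reconstruct randomly masked inputs (a denoising-style proxy)."""
    decoder = nn.Linear(64, 32)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    for _ in range(epochs):
        mask = (torch.rand_like(x) > 0.3).float()
        loss = F.mse_loss(decoder(encoder(x * mask)), x)
        opt.zero_grad(); loss.backward(); opt.step()

def finetune_and_eval(encoder, x, y, epochs=20):
    """Fine-tune a copy of the encoder plus a linear head on the downstream task."""
    model = nn.Sequential(copy.deepcopy(encoder), nn.Linear(64, 4))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        loss = F.cross_entropy(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return (model(x).argmax(dim=1) == y).float().mean().item()

# Continual pre-training: keep updating the same encoder on each incoming
# experience, then probe downstream performance to check for forgetting.
for i, experience in enumerate(stream):
    pretrain(encoder, experience)
    acc = finetune_and_eval(encoder, down_x, down_y)
    print(f"after experience {i}: downstream accuracy = {acc:.3f}")
```

In the paper's framing, forgetting would be measured by how downstream performance on earlier distributions evolves as the encoder is pre-trained on later experiences; the sketch above only shows the two-phase structure (continual pre-training, then fine-tuning) that the abstract describes.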

Authors (6)
  1. Andrea Cossu (25 papers)
  2. Tinne Tuytelaars (150 papers)
  3. Antonio Carta (29 papers)
  4. Lucia Passaro (8 papers)
  5. Vincenzo Lomonaco (58 papers)
  6. Davide Bacciu (107 papers)
Citations (58)