Self-Supervised Models are Continual Learners (2112.04215v2)

Published 8 Dec 2021 in cs.CV and cs.LG

Abstract: Self-supervised models have been shown to produce comparable or better visual representations than their supervised counterparts when trained offline on unlabeled data at scale. However, their efficacy is catastrophically reduced in a Continual Learning (CL) scenario where data is presented to the model sequentially. In this paper, we show that self-supervised loss functions can be seamlessly converted into distillation mechanisms for CL by adding a predictor network that maps the current state of the representations to their past state. This enables us to devise a framework for Continual self-supervised visual representation Learning that (i) significantly improves the quality of the learned representations, (ii) is compatible with several state-of-the-art self-supervised objectives, and (iii) needs little to no hyperparameter tuning. We demonstrate the effectiveness of our approach empirically by training six popular self-supervised models in various CL settings.
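The core idea in the abstract, turning a self-supervised objective into a distillation term by adding a predictor that maps current representations onto their past state, can be illustrated with a minimal PyTorch-style sketch. This is not the authors' code: the predictor architecture, the cosine-similarity distillation term, and the names (`DistillPredictor`, `train_step`, `frozen_past_encoder`) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistillPredictor(nn.Module):
    """Small MLP that maps current representations to the space of past ones."""
    def __init__(self, dim, hidden_dim=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, dim),
        )

    def forward(self, z):
        return self.net(z)

def distillation_loss(p, z_past):
    # Negative cosine similarity between predicted and frozen past features,
    # reusing a BYOL/SimSiam-style objective as the distillation mechanism (assumed here).
    return -F.cosine_similarity(p, z_past.detach(), dim=-1).mean()

def train_step(current_encoder, frozen_past_encoder, predictor, ssl_loss_fn, x1, x2):
    # Standard self-supervised loss on two augmented views.
    z1, z2 = current_encoder(x1), current_encoder(x2)
    loss_ssl = ssl_loss_fn(z1, z2)

    # Distillation: the predictor bridges current representations to the
    # frozen snapshot of the encoder from the previous task.
    with torch.no_grad():
        z1_past = frozen_past_encoder(x1)
    loss_distill = distillation_loss(predictor(z1), z1_past)

    return loss_ssl + loss_distill
```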

Authors (6)
  1. Enrico Fini (23 papers)
  2. Victor G. Turrisi da Costa (5 papers)
  3. Xavier Alameda-Pineda (69 papers)
  4. Elisa Ricci (137 papers)
  5. Karteek Alahari (48 papers)
  6. Julien Mairal (98 papers)
Citations (137)