On the Effectiveness of Equivariant Regularization for Robust Online Continual Learning (2305.03648v1)

Published 5 May 2023 in cs.LG

Abstract: Humans can learn incrementally, whereas neural networks forget previously acquired information catastrophically. Continual Learning (CL) approaches seek to bridge this gap by facilitating the transfer of knowledge to both previous tasks (backward transfer) and future ones (forward transfer) during training. Recent research has shown that self-supervision can produce versatile models that can generalize well to diverse downstream tasks. However, contrastive self-supervised learning (CSSL), a popular self-supervision technique, has limited effectiveness in online CL (OCL). OCL only permits one iteration of the input dataset, and CSSL's low sample efficiency hinders its use on the input data-stream. In this work, we propose Continual Learning via Equivariant Regularization (CLER), an OCL approach that leverages equivariant tasks for self-supervision, avoiding CSSL's limitations. Our method represents the first attempt at combining equivariant knowledge with CL and can be easily integrated with existing OCL methods. Extensive ablations shed light on how equivariant pretext tasks affect the network's information flow and its impact on CL dynamics.
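To make the idea concrete, below is a minimal sketch of how an equivariant pretext task could be attached to a replay-based online CL update, in the spirit described by the abstract. The choice of 90-degree rotation prediction as the equivariant transform, the auxiliary-head design, and all names (EquivariantRegularizedLearner, online_step, aux_weight) are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch: an equivariant pretext task (here, rotation prediction) used as an
# auxiliary self-supervised loss alongside a replay-based online CL update.
# The backbone is assumed to map images to flat feature vectors of size feat_dim.

class EquivariantRegularizedLearner(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                           # shared feature extractor
        self.cls_head = nn.Linear(feat_dim, num_classes)   # supervised classification head
        self.rot_head = nn.Linear(feat_dim, 4)             # predicts rotation in {0, 90, 180, 270} degrees

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.rot_head(feats)

def rotate_batch(x: torch.Tensor):
    """Rotate each CHW image by a random multiple of 90 degrees; return images and rotation labels."""
    labels = torch.randint(0, 4, (x.size(0),), device=x.device)
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2)) for img, k in zip(x, labels)])
    return rotated, labels

def online_step(model, optimizer, stream_x, stream_y, buf_x, buf_y, aux_weight=0.5):
    """One online update: supervised loss on stream + replay data, plus the equivariant pretext loss."""
    x = torch.cat([stream_x, buf_x])
    y = torch.cat([stream_y, buf_y])

    logits, _ = model(x)
    sup_loss = F.cross_entropy(logits, y)

    rot_x, rot_y = rotate_batch(x)
    _, rot_logits = model(rot_x)
    aux_loss = F.cross_entropy(rot_logits, rot_y)          # equivariant regularization term

    loss = sup_loss + aux_weight * aux_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Because the pretext labels come from the applied transform rather than from data augmentation pairs, each incoming batch yields a usable self-supervised signal in a single pass, which is consistent with the abstract's point that equivariant tasks sidestep the sample-inefficiency of contrastive self-supervision in the online setting.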

Authors (9)
  1. Lorenzo Bonicelli (13 papers)
  2. Matteo Boschini (17 papers)
  3. Emanuele Frascaroli (5 papers)
  4. Angelo Porrello (32 papers)
  5. Matteo Pennisi (11 papers)
  6. Giovanni Bellitto (13 papers)
  7. Simone Palazzo (34 papers)
  8. Concetto Spampinato (48 papers)
  9. Simone Calderara (64 papers)
Citations (3)