Non-Exemplar Online Class-incremental Continual Learning via Dual-prototype Self-augment and Refinement (2303.10891v3)
Abstract: This paper investigates a new, practical, but challenging problem named Non-Exemplar Online Class-incremental Continual Learning (NO-CL), which aims to preserve the discernibility of base classes without buffering data examples and to efficiently learn novel classes continuously from a single-pass (i.e., online) data stream. The challenges of this task are mainly two-fold: (1) Both base and novel classes suffer from severe catastrophic forgetting, as no previous samples are available for replay. (2) Since the online data can only be observed once, there is no way to fully re-train the whole model, e.g., to re-calibrate the decision boundaries via prototype alignment or feature distillation. In this paper, we propose a novel Dual-prototype Self-augment and Refinement method (DSR) for the NO-CL problem, which consists of two strategies: 1) Dual class prototypes: vanilla and high-dimensional prototypes are exploited to leverage pre-trained information and to obtain robust, quasi-orthogonal representations in place of example buffers, benefiting both privacy preservation and memory reduction. 2) Self-augment and refinement: instead of updating the whole network, we optimize the high-dimensional prototypes alternately with an extra projection module, based on self-augmented vanilla prototypes, by solving a bi-level optimization problem. Extensive experiments demonstrate the effectiveness and superiority of the proposed DSR on the NO-CL problem.
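To make the two strategies concrete, below is a minimal PyTorch sketch of the dual-prototype idea as described in the abstract. Everything here is an illustrative assumption rather than the paper's actual method: the names (`ProjectionModule`, `self_augment`, `orthogonality_loss`), the Gaussian perturbation used for self-augmentation, the specific losses, and the simple alternating two-step loop standing in for the bi-level optimization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionModule(nn.Module):
    """Maps d-dim vanilla prototypes into a higher-dimensional space
    where class representations can be kept near-orthogonal
    (hypothetical architecture, not the paper's)."""
    def __init__(self, dim_in: int, dim_hi: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, dim_hi), nn.ReLU(), nn.Linear(dim_hi, dim_hi)
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def self_augment(protos, sigma=0.1, n_aug=4):
    """Perturb each stored class-mean prototype with Gaussian noise,
    standing in for the unavailable replay examples (assumed scheme)."""
    reps = protos.repeat_interleave(n_aug, dim=0)
    return reps + sigma * torch.randn_like(reps)

def orthogonality_loss(hi_protos):
    """Penalize pairwise similarity between high-dimensional prototypes,
    pushing them toward quasi-orthogonality."""
    sim = hi_protos @ hi_protos.t()
    off_diag = sim - torch.diag(torch.diag(sim))
    return off_diag.pow(2).mean()

# Toy setup: frozen class means from a pre-trained backbone.
d, d_hi, n_cls, n_aug = 512, 2048, 10, 4
vanilla = F.normalize(torch.randn(n_cls, d), dim=-1)
hi_protos = nn.Parameter(F.normalize(torch.randn(n_cls, d_hi), dim=-1))
proj = ProjectionModule(d, d_hi)
opt_protos = torch.optim.SGD([hi_protos], lr=0.01)
opt_proj = torch.optim.SGD(proj.parameters(), lr=0.01)
labels = torch.arange(n_cls).repeat_interleave(n_aug)

for step in range(100):
    # Inner step: refine the high-dimensional prototypes against a
    # frozen projection of the self-augmented vanilla prototypes.
    feats = proj(self_augment(vanilla, n_aug=n_aug)).detach()
    loss_p = (F.cross_entropy(feats @ hi_protos.t(), labels)
              + orthogonality_loss(hi_protos))
    opt_protos.zero_grad()
    loss_p.backward()
    opt_protos.step()

    # Outer step: update the projection module with prototypes fixed.
    feats = proj(self_augment(vanilla, n_aug=n_aug))
    loss_q = F.cross_entropy(feats @ hi_protos.detach().t(), labels)
    opt_proj.zero_grad()
    loss_q.backward()
    opt_proj.step()
```

The alternating updates mirror the structure of a bi-level problem: one step refines the high-dimensional prototypes under a fixed projection, the other adapts the projection module with the prototypes held fixed, so the backbone itself never needs re-training and no raw examples need to be buffered.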