REAL: Representation Enhanced Analytic Learning for Exemplar-free Class-incremental Learning (2403.13522v1)

Published 20 Mar 2024 in cs.LG and cs.CV

Abstract: Exemplar-free class-incremental learning (EFCIL) aims to mitigate catastrophic forgetting in class-incremental learning without access to historical data. Compared with its replay-based counterpart, which stores historical samples, EFCIL suffers more severely from forgetting under the exemplar-free constraint. In this paper, inspired by the recently developed analytic learning (AL) based CIL, we propose representation enhanced analytic learning (REAL) for EFCIL. REAL constructs a dual-stream base pretraining (DS-BPT) and a representation enhancing distillation (RED) process to enhance the representation of the extractor. DS-BPT pretrains the model in streams of both supervised learning and self-supervised contrastive learning (SSCL) for base knowledge extraction. The RED process distills the supervised knowledge into the SSCL-pretrained backbone and facilitates a subsequent AL-based CIL that converts the CIL into a recursive least-squares problem. Our method addresses the insufficient discriminability of representations of unseen data caused by the frozen backbone in existing AL-based CIL. Empirical results on various datasets, including CIFAR-100, ImageNet-100, and ImageNet-1k, demonstrate that REAL outperforms the state of the art in EFCIL and achieves comparable or even superior performance relative to replay-based methods.
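The analytic-learning step mentioned in the abstract, converting CIL into a recursive least-squares problem, can be illustrated with a short sketch. The class name, parameter names, and the ridge parameter gamma below are illustrative assumptions rather than the paper's implementation; the sketch only shows the generic recursive least-squares update applied to features from a frozen backbone.

import numpy as np

class RecursiveAnalyticClassifier:
    # Minimal sketch (assumed form, not the paper's exact implementation) of a
    # recursive least-squares classifier as used in analytic-learning-based CIL.
    # Input features are assumed to come from a frozen, pretrained backbone.

    def __init__(self, feat_dim, gamma=1.0):
        # R approximates (X^T X + gamma * I)^{-1} over all phases seen so far.
        self.R = np.eye(feat_dim) / gamma
        # Classifier weights; columns grow as new classes arrive.
        self.W = np.zeros((feat_dim, 0))

    def update(self, X, Y):
        # X: (n, feat_dim) features of the current phase.
        # Y: (n, total_classes) one-hot labels, zero-padded for old classes.
        n_new = Y.shape[1] - self.W.shape[1]
        if n_new > 0:
            # Zero-pad the weight matrix for classes introduced in this phase.
            self.W = np.hstack([self.W, np.zeros((self.W.shape[0], n_new))])
        # Woodbury identity: refresh the inverse autocorrelation matrix using only
        # the current phase's data, so no historical samples are needed (exemplar-free).
        K = np.linalg.inv(np.eye(X.shape[0]) + X @ self.R @ X.T)
        self.R = self.R - self.R @ X.T @ K @ X @ self.R
        # Closed-form weight correction, equivalent to refitting ridge regression
        # on all data observed so far.
        self.W = self.W + self.R @ X.T @ (Y - X @ self.W)

    def predict(self, X):
        return (X @ self.W).argmax(axis=1)

Because each incremental phase only updates R and W from the current batch of features, the classifier never revisits old samples; this is the recursive least-squares property the abstract refers to.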

Authors (6)
  1. Run He (9 papers)
  2. Huiping Zhuang (44 papers)
  3. Di Fang (26 papers)
  4. Yizhu Chen (5 papers)
  5. Kai Tong (7 papers)
  6. Cen Chen (81 papers)
Citations (1)