PlaStIL: Plastic and Stable Memory-Free Class-Incremental Learning (2209.06606v2)

Published 14 Sep 2022 in cs.CV and cs.LG

Abstract: Plasticity and stability are needed in class-incremental learning in order to learn from new data while preserving past knowledge. Due to catastrophic forgetting, finding a compromise between these two properties is particularly challenging when no memory buffer is available. Mainstream methods need to store two deep models since they integrate new classes using fine-tuning with knowledge distillation from the previous incremental state. We propose a method which has a similar number of parameters but distributes them differently in order to find a better balance between plasticity and stability. Following an approach already deployed by transfer-based incremental methods, we freeze the feature extractor after the initial state. Classes in the oldest incremental states are trained with this frozen extractor to ensure stability. Recent classes are predicted using partially fine-tuned models in order to introduce plasticity. Our proposed plasticity layer can be incorporated into any transfer-based method designed for exemplar-free incremental learning, and we apply it to two such methods. Evaluation is done with three large-scale datasets. Results show that performance gains are obtained in all tested configurations compared to existing methods.
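
The abstract describes a split between a frozen feature extractor (stability, used for the oldest classes) and partially fine-tuned model copies (plasticity, used for recent classes). The sketch below illustrates one way this split could look in PyTorch; it is a minimal illustration under stated assumptions, not the authors' implementation. The ResNet-18 backbone, the choice of the last residual block (`layer4`) as the partially fine-tuned part, and the class and method names are all assumptions made for the example.

```python
# Illustrative sketch of a plasticity/stability split (assumed design, not PlaStIL's code).
import copy
import torch
import torch.nn as nn
from torchvision.models import resnet18

class PlasticStableSketch(nn.Module):
    """Frozen backbone for old incremental states, partially fine-tuned copies for recent ones."""

    def __init__(self, num_initial_classes: int):
        super().__init__()
        backbone = resnet18(weights=None)
        feat_dim = backbone.fc.in_features          # 512 for ResNet-18
        backbone.fc = nn.Identity()
        self.frozen_extractor = backbone
        for p in self.frozen_extractor.parameters():
            p.requires_grad = False                 # stability: freeze after the initial state
        # Linear heads trained on frozen features for the oldest incremental states.
        self.stable_heads = nn.ModuleList([nn.Linear(feat_dim, num_initial_classes)])
        # Partially fine-tuned top blocks and heads for recent states (plasticity).
        self.plastic_blocks = nn.ModuleList()
        self.plastic_heads = nn.ModuleList()
        self.feat_dim = feat_dim

    def add_recent_state(self, num_new_classes: int):
        """Add a trainable copy of the last residual block for a new incremental state."""
        block = copy.deepcopy(self.frozen_extractor.layer4)
        for p in block.parameters():
            p.requires_grad = True
        self.plastic_blocks.append(block)
        self.plastic_heads.append(nn.Linear(self.feat_dim, num_new_classes))

    def _mid_features(self, x):
        # Shared frozen trunk up to (but not including) the last residual block.
        m = self.frozen_extractor
        x = m.maxpool(m.relu(m.bn1(m.conv1(x))))
        return m.layer3(m.layer2(m.layer1(x)))

    def forward(self, x):
        m = self.frozen_extractor
        mid = self._mid_features(x)
        frozen_feat = torch.flatten(m.avgpool(m.layer4(mid)), 1)
        logits = [head(frozen_feat) for head in self.stable_heads]      # oldest classes
        for block, head in zip(self.plastic_blocks, self.plastic_heads):
            plastic_feat = torch.flatten(m.avgpool(block(mid)), 1)      # recent classes
            logits.append(head(plastic_feat))
        return torch.cat(logits, dim=1)

# Usage sketch: 50 initial classes, then one incremental state with 10 new classes.
model = PlasticStableSketch(num_initial_classes=50)
model.add_recent_state(num_new_classes=10)
out = model(torch.randn(2, 3, 224, 224))            # shape: (2, 60)
```

Because only the copied last block and its head are trainable for each recent state, the parameter budget stays close to the two-model budget of distillation-based methods while the shared frozen trunk keeps old-class features, and hence old-class predictions, stable.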

Authors (5)
  1. Grégoire Petit (5 papers)
  2. Adrian Popescu (39 papers)
  3. Eden Belouadah (10 papers)
  4. David Picard (44 papers)
  5. Bertrand Delezoide (5 papers)
Citations (1)
