Future-Proofing Class Incremental Learning (2404.03200v1)

Published 4 Apr 2024 in cs.LG and cs.CV

Abstract: Exemplar-Free Class Incremental Learning is a highly challenging setting where replay memory is unavailable. Methods relying on frozen feature extractors have recently drawn attention in this setting due to their impressive performance and lower computational costs. However, these methods depend heavily on the data used to train the feature extractor and may struggle when an insufficient number of classes is available during the first incremental step. To overcome this limitation, we propose to use a pre-trained text-to-image diffusion model to generate synthetic images of future classes and use them to train the feature extractor. Experiments on the standard benchmarks CIFAR100 and ImageNet-Subset demonstrate that our proposed method can improve state-of-the-art methods for exemplar-free class incremental learning, especially in the most difficult settings where the first incremental step contains only a few classes. Moreover, we show that using synthetic samples of future classes achieves higher performance than using real data from different classes, paving the way for better and less costly pre-training methods for incremental learning.
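
The pipeline described in the abstract lends itself to a short sketch. Below is a minimal illustration, assuming the Hugging Face diffusers Stable Diffusion pipeline and a torchvision ResNet backbone; the model id, prompt template, class names, and training hyperparameters are placeholder assumptions for illustration, not the paper's actual configuration.

    # Minimal sketch (not the authors' released code): synthesize images of
    # "future" classes with a pre-trained text-to-image diffusion model,
    # then pre-train a feature extractor on them. Class names, prompts, and
    # hyperparameters are illustrative placeholders.
    import torch
    from diffusers import StableDiffusionPipeline
    from torch.utils.data import DataLoader, TensorDataset
    from torchvision import models, transforms

    # 1) Generate synthetic images for class names expected in later steps.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    future_classes = ["lion", "bicycle", "castle"]  # placeholder names
    to_tensor = transforms.Compose(
        [transforms.Resize((224, 224)), transforms.ToTensor()]
    )

    images, labels = [], []
    for label, name in enumerate(future_classes):
        out = pipe(f"a photo of a {name}", num_images_per_prompt=4)
        for img in out.images:  # PIL images returned by the pipeline
            images.append(to_tensor(img))
            labels.append(label)

    loader = DataLoader(
        TensorDataset(torch.stack(images), torch.tensor(labels)),
        batch_size=8,
        shuffle=True,
    )

    # 2) Pre-train a backbone on the synthetic data with plain cross-entropy.
    net = models.resnet18(num_classes=len(future_classes)).cuda()
    opt = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()

    net.train()
    for epoch in range(5):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(net(x.cuda()), y.cuda())
            loss.backward()
            opt.step()

    # 3) Discard the classifier head and freeze the backbone, which is the
    #    artifact reused unchanged across all incremental steps in the
    #    exemplar-free setting the paper targets.
    net.fc = torch.nn.Identity()
    for p in net.parameters():
        p.requires_grad = False

In the paper's setting, a frozen extractor pre-trained this way would then back an exemplar-free incremental learner; the abstract's claim is that pre-training on synthetic images of future classes outperforms pre-training on real images of unrelated classes.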
