Resurrecting Old Classes with New Data for Exemplar-Free Continual Learning (2405.19074v1)

Published 29 May 2024 in cs.CV and cs.AI

Abstract: Continual learning methods are known to suffer from catastrophic forgetting, a phenomenon that is particularly hard to counter for methods that do not store exemplars of previous tasks. Therefore, to reduce potential drift in the feature extractor, existing exemplar-free methods are typically evaluated in settings where the first task is significantly larger than subsequent tasks. Their performance drops drastically in more challenging settings starting with a smaller first task. To address this problem of feature drift estimation for exemplar-free methods, we propose to adversarially perturb the current samples such that their embeddings are close to the old class prototypes in the old model embedding space. We then estimate the drift in the embedding space from the old to the new model using the perturbed images and compensate the prototypes accordingly. We exploit the fact that adversarial samples are transferable from the old to the new feature space in a continual learning setting. The generation of these images is simple and computationally cheap. We demonstrate in our experiments that the proposed approach better tracks the movement of prototypes in embedding space and outperforms existing methods on several standard continual learning benchmarks as well as on fine-grained datasets. Code is available at https://github.com/dipamgoswami/ADC.
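
The abstract describes the mechanism only at a high level. Below is a minimal PyTorch sketch of that idea, assuming `old_net` and `new_net` are the previous and current feature extractors and `proto_old` is a stored class-mean prototype in the old embedding space. The helper name `compensate_prototype`, the PGD-style inner loop, and the hyperparameters (`eps`, `step_size`, `n_steps`) are illustrative assumptions, not the authors' exact implementation; see the linked repository for that.

```python
# Sketch of adversarial drift compensation: perturb current-task images so
# their OLD-model embeddings approach an old class prototype, then use the
# perturbed images to estimate how that prototype drifted in the NEW model.
import torch
import torch.nn.functional as F

def compensate_prototype(old_net, new_net, proto_old, images,
                         eps=0.1, step_size=0.01, n_steps=10):
    """proto_old: (d,) class prototype in the old model's embedding space.
    images:    (B, C, H, W) current-task images used as raw material.
    Returns the prototype translated into the new embedding space."""
    old_net.eval(); new_net.eval()
    for p in old_net.parameters():          # only the perturbation is updated
        p.requires_grad_(False)

    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(n_steps):
        z_old = old_net(images + delta)                      # (B, d) embeddings
        # Targeted feature-space attack: pull embeddings toward the prototype.
        loss = F.mse_loss(z_old, proto_old.expand_as(z_old))
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()           # PGD-style step
            delta.clamp_(-eps, eps)                          # stay in eps-ball
        delta.grad.zero_()

    with torch.no_grad():
        x_adv = images + delta
        # Drift = average displacement of the same inputs between spaces.
        drift = (new_net(x_adv) - old_net(x_adv)).mean(dim=0)
    return proto_old + drift                                 # compensated prototype
```

The property being exploited is the transferability claimed in the abstract: a perturbation crafted against the old feature extractor still lands near the corresponding region of the new one, so embedding the same `x_adv` through both networks yields a usable per-class drift estimate.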
