Contrastive Augmented Graph2Graph Memory Interaction for Few Shot Continual Learning (2403.04140v1)

Published 7 Mar 2024 in cs.AI

Abstract: Few-Shot Class-Incremental Learning (FSCIL) has gained considerable attention in recent years for its pivotal role in addressing continuously arriving classes. However, it encounters additional challenges. The scarcity of samples in new sessions intensifies overfitting, causing incompatibility between the output features of new and old classes and thereby escalating catastrophic forgetting. A prevalent strategy mitigates catastrophic forgetting through an Explicit Memory (EM) composed of class prototypes. However, current EM-based methods retrieve memory globally by performing Vector-to-Vector (V2V) interaction between the input's features and the prototypes stored in the EM, neglecting the geometric structure of local features and thus hindering accurate modeling of their positional relationships. To incorporate local geometric structure, we extend the V2V interaction to a Graph-to-Graph (G2G) interaction. To strengthen local structures for better G2G alignment and to prevent local feature collapse, we propose a Local Graph Preservation (LGP) mechanism. Additionally, to address the sample scarcity of classes in new sessions, we introduce Contrast-Augmented G2G (CAG2G), which promotes the aggregation of same-class features and thus aids few-shot learning. Extensive comparisons on CIFAR100, CUB200, and the challenging ImageNet-R dataset demonstrate the superiority of our method over existing approaches.
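The abstract contrasts global V2V retrieval with structure-aware G2G retrieval but does not spell out the formulation. The PyTorch sketch below only illustrates that conceptual difference: V2V pools an image into a single vector and matches it against one prototype per class, while a G2G-style score matches a set of local features (graph nodes) against per-class prototype graphs via soft node-to-node alignment. All names here (`v2v_retrieval`, `g2g_retrieval`, the max-alignment score, the 7x7 grid of local features) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def v2v_retrieval(feature: torch.Tensor, prototypes: torch.Tensor) -> int:
    """Vector-to-Vector retrieval: one globally pooled feature vs. class prototypes.
    feature: (D,), prototypes: (C, D). Spatial layout is pooled away before matching."""
    sims = F.cosine_similarity(feature.unsqueeze(0), prototypes, dim=-1)  # (C,)
    return sims.argmax().item()

def g2g_retrieval(local_feats: torch.Tensor, proto_graphs: torch.Tensor) -> int:
    """Graph-to-Graph-style retrieval (hypothetical sketch, not the paper's method).
    local_feats: (N, D) node features of the query graph (e.g. spatial patches).
    proto_graphs: (C, M, D) node features of each class's prototype graph.
    Scores each class by soft node-to-node alignment, so local structure
    contributes to the match instead of being averaged away."""
    q = F.normalize(local_feats, dim=-1)        # (N, D)
    p = F.normalize(proto_graphs, dim=-1)       # (C, M, D)
    sim = torch.einsum('nd,cmd->cnm', q, p)     # pairwise node similarities (C, N, M)
    # each query node attends to its best-matching prototype node, then average
    score = sim.max(dim=-1).values.mean(dim=-1)  # (C,)
    return score.argmax().item()

# toy usage: 100 classes, 49 local features (a 7x7 feature map), 64-dim embeddings
feat_map = torch.randn(49, 64)
protos = torch.randn(100, 49, 64)
pred_v2v = v2v_retrieval(feat_map.mean(dim=0), protos.mean(dim=1))
pred_g2g = g2g_retrieval(feat_map, protos)
```

The design point this sketch makes is the one the abstract identifies: in `v2v_retrieval` the mean-pooling step discards where each local feature sits, whereas `g2g_retrieval` keeps per-node matches, so positional relationships among local features can influence the class score.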
