Towards Robust Graph Incremental Learning on Evolving Graphs (2402.12987v1)

Published 20 Feb 2024 in cs.LG

Abstract: Incremental learning is a machine learning approach that involves training a model on a sequence of tasks, rather than all tasks at once. This ability to learn incrementally from a stream of tasks is crucial for many real-world applications. However, incremental learning is a challenging problem on graph-structured data, as many graph-related problems involve prediction tasks for each individual node, known as Node-wise Graph Incremental Learning (NGIL). This introduces non-independent and non-identically distributed characteristics into the sample generation process, making it difficult to maintain the performance of the model as new tasks are added. In this paper, we focus on the inductive NGIL problem, which accounts for the evolution of graph structure (structural shift) induced by emerging tasks. We provide a formal formulation and analysis of the problem, and propose a novel regularization-based technique called Structural-Shift-Risk-Mitigation (SSRM) to mitigate the impact of the structural shift on catastrophic forgetting in the inductive NGIL problem. We show that the structural shift can lead to a shift in the input distribution for the existing tasks, which in turn increases the risk of catastrophic forgetting. Through comprehensive empirical studies on several benchmark datasets, we demonstrate that SSRM is flexible and easily adapted to improve the performance of state-of-the-art GNN incremental learning frameworks in the inductive setting.
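
The abstract describes SSRM only at a high level: a regularization-based technique that counters the input-distribution shift that graph evolution induces for previously learned tasks. The sketch below is one plausible instantiation under that description, not the paper's actual method. It adds an RBF-kernel MMD penalty between GNN embeddings of previously seen nodes computed on the old graph snapshot and on the evolved graph; `model.embed`, the PyG-style batch attributes (`.x`, `.edge_index`, `.train_mask`, `.y`), and the weight `lam` are illustrative assumptions.

```python
# Illustrative sketch only: one way a regularization-based mitigation of
# structural shift could be wired into an incremental-learning loss.
import torch


def rbf_mmd2(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased (V-statistic) estimate of squared MMD with an RBF kernel."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()


def shift_mitigated_loss(model, new_batch, old_nodes_before, old_nodes_after,
                         lam: float = 0.1) -> torch.Tensor:
    # Standard supervised loss on the newly arrived task.
    logits = model(new_batch.x, new_batch.edge_index)
    task_loss = torch.nn.functional.cross_entropy(
        logits[new_batch.train_mask], new_batch.y[new_batch.train_mask])

    # Regularizer: keep the representations of previously seen nodes stable
    # across the structural shift (old graph snapshot vs. evolved graph).
    with torch.no_grad():
        z_before = model.embed(old_nodes_before.x, old_nodes_before.edge_index)
    z_after = model.embed(old_nodes_after.x, old_nodes_after.edge_index)
    return task_loss + lam * rbf_mmd2(z_before, z_after)
```

In this reading, the regularizer penalizes exactly the effect the abstract identifies: the embedding distribution of existing-task nodes drifting as new tasks reshape the graph. Because it is a single additive loss term, it can be layered onto existing GNN incremental learning frameworks, which is consistent with the flexibility the abstract claims for SSRM.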
