
Graph Continual Learning with Debiased Lossless Memory Replay (2404.10984v2)

Published 17 Apr 2024 in cs.LG

Abstract: Real-life graph data often expands continually, rendering the learning of graph neural networks (GNNs) on static graph data impractical. Graph continual learning (GCL) tackles this problem by continually adapting GNNs to the expanded graph of the current task while maintaining performance on the graphs of previous tasks. Memory replay-based methods, which replay data from previous tasks when learning new ones, have been explored as a principled approach to mitigating the forgetting of previously learned knowledge. In this paper, we extend this methodology with a novel framework, called Debiased Lossless Memory replay (DeLoMe). Unlike existing methods that sample nodes/edges of previous graphs to construct the memory, DeLoMe learns small lossless synthetic node representations as the memory. The learned memory can not only preserve graph data privacy but also capture holistic graph information, neither of which is achievable with sampling-based methods. Further, prior methods suffer from bias toward the current task due to the data imbalance between the classes in the memory data and those in the current data. DeLoMe devises a debiased GCL loss function to effectively alleviate this bias. Extensive experiments on four graph datasets show the effectiveness of DeLoMe under both class- and task-incremental learning settings.
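The debiasing idea described in the abstract lends itself to a short illustration. Below is a minimal PyTorch-style sketch of one plausible reading of a debiased replay loss: a logit-adjustment-style correction that shifts logits by the log class priors so that the few replayed memory nodes are not drowned out by the larger current-task classes. The function name, the `class_counts` bookkeeping, and the temperature `tau` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def debiased_replay_loss(cur_logits, cur_labels, mem_logits, mem_labels,
                         class_counts, tau=1.0):
    """Hypothetical debiased replay loss (logit-adjustment style).

    Cross-entropy over current-task nodes and replayed memory nodes,
    with every logit shifted by tau * log(class prior) so the loss does
    not favor the larger current-task classes over the small memory
    buffer. The paper's exact debiasing term may differ.
    """
    prior = class_counts.float() / class_counts.sum()
    adjustment = tau * torch.log(prior + 1e-12)   # shape: (num_classes,)

    logits = torch.cat([cur_logits, mem_logits], dim=0) + adjustment
    labels = torch.cat([cur_labels, mem_labels], dim=0)
    return F.cross_entropy(logits, labels)

# Toy usage: 3 classes, with a current-task class dominating the counts.
num_classes = 3
cur_logits = torch.randn(8, num_classes, requires_grad=True)
cur_labels = torch.randint(0, num_classes, (8,))
mem_logits = torch.randn(2, num_classes, requires_grad=True)
mem_labels = torch.randint(0, num_classes, (2,))
counts = torch.tensor([50, 5, 5])  # class 0 comes from the current task

loss = debiased_replay_loss(cur_logits, cur_labels, mem_logits, mem_labels, counts)
loss.backward()
print(loss.item())
```

In a full GCL training loop, `cur_logits` would come from the GNN applied to the current task's graph and `mem_logits` from the learned synthetic memory representations; the adjustment term mirrors the long-tail logit-adjustment trick rather than reweighting individual samples.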
