BeGin: Extensive Benchmark Scenarios and An Easy-to-use Framework for Graph Continual Learning (2211.14568v5)

Published 26 Nov 2022 in cs.LG and cs.AI

Abstract: Continual Learning (CL) is the process of ceaselessly learning a sequence of tasks. Most existing CL methods deal with independent data (e.g., images and text), for which many benchmark frameworks and results under standard experimental settings are available. In contrast, CL methods for graph data (graph CL) remain relatively underexplored because of (a) the lack of standard experimental settings, especially regarding how to deal with the dependency between instances, (b) the lack of benchmark datasets and scenarios, and (c) the high complexity of implementation and evaluation caused by that dependency. In this paper, regarding (a), we define four standard incremental settings (task-, class-, domain-, and time-incremental) for node-, link-, and graph-level problems, extending the previously explored scope. Regarding (b), we provide 35 benchmark scenarios based on 24 real-world graphs. Regarding (c), we develop BeGin, an easy and fool-proof framework for graph CL. BeGin is easily extended since it is modularized with reusable modules for data processing, algorithm design, and evaluation. In particular, the evaluation module is completely separated from user code to eliminate potential mistakes. Regarding benchmark results, we cover 3x more combinations of incremental settings and levels of problems than the latest benchmark. All assets for the benchmark framework are publicly available at https://github.com/ShinhwanKang/BeGin.
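The abstract's central framework claim is architectural: tasks arrive as a sequence over dependent data (nodes, links, or whole graphs sharing one underlying structure), and the evaluation module is kept entirely out of user code. Below is a minimal, hypothetical Python sketch of that split for a class-incremental, node-level scenario; the names (`Task`, `ScenarioLoader`, `Evaluator`, `run`, `train_one_task`) are invented for illustration and are not BeGin's actual API, which is documented in the linked repository.

```python
# Illustrative sketch only -- not BeGin's real API.
# See https://github.com/ShinhwanKang/BeGin for the framework itself.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Task:
    """One incremental task: the nodes and labels revealed at this step."""
    task_id: int
    train_nodes: List[int]
    test_nodes: List[int]
    labels: Dict[int, int]


class ScenarioLoader:
    """Splits node labels into a sequence of class-incremental tasks."""

    def __init__(self, labels: Dict[int, int], classes_per_task: int):
        self.labels = labels
        self.classes_per_task = classes_per_task

    def tasks(self) -> List[Task]:
        classes = sorted(set(self.labels.values()))
        out = []
        for t, start in enumerate(range(0, len(classes), self.classes_per_task)):
            current = set(classes[start:start + self.classes_per_task])
            nodes = [n for n, y in self.labels.items() if y in current]
            split = max(1, int(0.8 * len(nodes)))  # simple 80/20 train/test split
            out.append(Task(t, nodes[:split], nodes[split:],
                            {n: self.labels[n] for n in nodes}))
        return out


class Evaluator:
    """Framework-owned: re-scores every task seen so far after each step."""

    def __init__(self):
        # history[t][i] = accuracy on task i after finishing task t
        self.history: List[List[float]] = []

    def evaluate(self, predict: Callable[[int], int], seen: List[Task]) -> None:
        row = []
        for task in seen:
            correct = sum(predict(n) == task.labels[n] for n in task.test_nodes)
            row.append(correct / max(1, len(task.test_nodes)))
        self.history.append(row)


def run(loader: ScenarioLoader,
        train_one_task: Callable[[Task, Dict[int, int]], None],
        predict: Callable[[int], int]) -> Evaluator:
    """User code supplies train_one_task and predict; evaluation stays framework-side."""
    evaluator, memory, seen = Evaluator(), {}, []
    for task in loader.tasks():
        train_one_task(task, memory)      # user-defined CL algorithm
        memory.update({n: task.labels[n] for n in task.train_nodes})
        seen.append(task)
        evaluator.evaluate(predict, seen)  # re-measures earlier tasks to expose forgetting
    return evaluator
```

In this sketch the user writes only `train_one_task` and `predict`; because `Evaluator.evaluate` is invoked inside `run`, user code cannot skip re-evaluating earlier tasks or touch test labels directly, which is the kind of implementation mistake the abstract says BeGin's separated evaluation module is designed to rule out.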
