AutoGL: A Library for Automated Graph Learning (2104.04987v4)

Published 11 Apr 2021 in cs.LG and cs.AI

Abstract: Recent years have witnessed an upsurge in research interests and applications of machine learning on graphs. However, manually designing the optimal machine learning algorithms for different graph datasets and tasks is inflexible, labor-intensive, and requires expert knowledge, limiting its adaptivity and applicability. Automated machine learning (AutoML) on graphs, aiming to automatically design the optimal machine learning algorithm for a given graph dataset and task, has received considerable attention. However, none of the existing libraries can fully support AutoML on graphs. To fill this gap, we present Automated Graph Learning (AutoGL), the first dedicated library for automated machine learning on graphs. AutoGL is open-source, easy to use, and flexible to be extended. Specifically, we propose a three-layer architecture, consisting of backends to interface with devices, a complete automated graph learning pipeline, and supported graph applications. The automated machine learning pipeline further contains five functional modules: auto feature engineering, neural architecture search, hyper-parameter optimization, model training, and auto ensemble, covering the majority of existing AutoML methods on graphs. For each module, we provide numerous state-of-the-art methods and flexible base classes and APIs, which allow easy usage and customization. We further provide experimental results to showcase the usage of our AutoGL library. We also present AutoGL-light, a lightweight version of AutoGL to facilitate customizing pipelines and enriching applications, as well as benchmarks for graph neural architecture search. The codes of AutoGL are publicly available at https://github.com/THUMNLab/AutoGL.

Summary

  • The paper introduces AutoGL, the first dedicated library for automated machine learning on graphs, built around a modular pipeline covering auto feature engineering, neural architecture search, hyper-parameter optimization, model training, and auto ensemble.
  • It shows that automatically ensembling the best trained models further improves performance across various graph datasets.
  • The system’s modular architecture, compatible with libraries like PyG and DGL, supports diverse applications such as node classification, link prediction, and graph classification.

Automated Graph Learning with AutoGL

The paper presents AutoGL, a comprehensive library for automated machine learning (AutoML) on graphs. AutoGL addresses the fact that designing effective graph machine learning algorithms traditionally demands significant expert knowledge and manual, per-task effort.

Problem Statement and Contribution

The growing diversity of graph datasets and tasks makes it impractical to hand-design a tailored algorithm for each application. AutoGL addresses this by combining AutoML techniques with graph machine learning in a three-layer architecture: backends, an automated learning pipeline, and application support for diverse graph tasks.

System Architecture

AutoGL is structured in a modular fashion:

  • Backends: Interfaces with existing graph learning libraries like PyTorch Geometric (PyG) and Deep Graph Library (DGL), ensuring compatibility and leveraging existing graph processing capabilities.
  • Automated Learning Pipeline: Composed of five functional modules (a usage sketch follows this list):
    • Auto Feature Engineering: Automatically generates, selects, and modifies features to optimize graph learning.
    • Neural Architecture Search (NAS): Searches for well-performing architectures using strategies such as reinforcement learning and differentiable (gradient-based) search.
    • Hyper-Parameter Optimization (HPO): Optimizes hyper-parameters for graph models via algorithms like Random Search and TPE.
    • Model Training: Provides modular components for building and training graph models like GCN, GAT, and GraphSAGE.
    • Auto Ensemble: Combines multiple models to enhance prediction performance through methods like voting and stacking.
  • Application Support: Facilitates tasks such as node classification, link prediction, and graph classification, extending support to self-supervised and robust learning.
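
To make the pipeline concrete, the following is a minimal sketch of how these modules are wired together, based on the usage pattern in the AutoGL documentation. The specific module identifiers ("deepgl", "anneal", "voting") and argument names are taken from the project's examples and may differ between AutoGL versions, so treat this as an illustration rather than a version-exact recipe.

```python
# Minimal end-to-end sketch following the AutoGL documentation; module names
# ("deepgl", "anneal", "voting") and arguments may differ across versions.
import torch

from autogl.datasets import build_dataset_from_name
from autogl.solver import AutoNodeClassifier

# Load a built-in citation dataset (Cora) through the active backend (PyG or DGL).
cora = build_dataset_from_name("cora")

# The solver composes the functional modules: auto feature engineering,
# candidate GNN models, hyper-parameter optimization, and auto ensemble.
solver = AutoNodeClassifier(
    feature_module="deepgl",       # auto feature engineering
    graph_models=["gcn", "gat"],   # candidate architectures / trainers
    hpo_module="anneal",           # hyper-parameter optimization strategy
    ensemble_module="voting",      # combine the best trained models
    device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)

# Run the automated pipeline under a time budget (in seconds), then inspect
# the per-model and ensemble results.
solver.fit(cora, time_limit=3600)
solver.get_leaderboard().show()
```

Solvers for the other supported tasks, such as link prediction and graph classification, follow the same fit/predict pattern.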

Key Results and Use Cases

The paper provides an evaluation of AutoGL:

  • Performance: AutoGL demonstrates improved performance across multiple graph datasets, outperforming baseline models due to its automated optimization processes.
  • Graph NAS: The NAS component supports multiple search strategies and discovers architectures that achieve competitive results on the evaluated datasets.
  • Ensemble Effectiveness: Combining the predictions of multiple trained models yields additional gains in robustness and accuracy (a voting sketch follows this list).
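
To illustrate the mechanism behind these ensemble gains, the snippet below shows plain soft voting over the class-probability outputs of several trained models. It is a generic NumPy illustration of the voting strategy named above, not AutoGL's internal implementation.

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Average class-probability matrices from several models.

    prob_list: list of arrays, each of shape (num_nodes, num_classes),
               e.g. the predicted probabilities of different trained GNNs.
    weights:   optional per-model weights (e.g. validation accuracies).
    """
    probs = np.stack(prob_list)                       # (num_models, num_nodes, num_classes)
    avg = np.average(probs, axis=0, weights=weights)  # weighted mean over models
    return avg.argmax(axis=1)                         # final predicted class per node

# Toy usage: the two models disagree on the second node; voting resolves it.
p_gcn = np.array([[0.9, 0.1], [0.4, 0.6]])
p_gat = np.array([[0.8, 0.2], [0.7, 0.3]])
print(soft_vote([p_gcn, p_gat], weights=[0.5, 0.5]))  # -> [0 0]
```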

Implications and Future Directions

AutoGL represents a significant step towards democratizing graph machine learning by reducing the required manual effort and expert knowledge. The implications include:

  • Practical Utility: Facilitates applications across various domains without needing deep expertise in graph ML algorithm design.
  • Algorithmic Advances: Encourages exploration of automated graph learning methodologies, potentially leading to innovations in architecture search and ensemble learning.
  • Research Opportunities: Provides a testbed for developing and comparing new methods in graph AutoML, potentially inspiring future research collaborations.

AutoGL-light, a lightweight companion to AutoGL, emphasizes flexibility and ease of use for researchers already familiar with PyG, while NAS-Bench-Graph provides a benchmark that standardizes the evaluation of graph neural architecture search (a query sketch follows).
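
As a sketch of how such a tabular NAS benchmark is used, the snippet below queries precomputed results for a candidate architecture instead of training it from scratch. The interface shown (light_read, Arch, valid_hash, and the "perf"/"latency"/"para" keys) is an assumption based on the usage documented in the NAS-Bench-Graph repository; names may differ across releases, and the benchmark data files must be downloaded separately.

```python
# Assumed query interface of NAS-Bench-Graph (per its repository README);
# the precomputed benchmark files must be downloaded before running this.
from nas_bench_graph import light_read, Arch

bench = light_read("cora")  # precomputed results for one dataset

# An architecture is a small DAG: the first list gives each cell's input,
# the second the GNN operator placed in that cell.
arch = Arch([0, 1, 2, 1], ["gcn", "gin", "fc", "cheb"])

# Look up training outcomes by architecture hash instead of retraining.
info = bench[arch.valid_hash()]
print(info["perf"], info["latency"], info["para"])
```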

Conclusion

AutoGL and its derivatives mark a notable advancement in automated graph learning, offering strong numerical results and flexibility for further customization and development. Future expansions might involve more sophisticated NAS techniques, broader dataset support, and enhanced integration with emerging graph learning frameworks.
