NAS-Bench-Graph: Benchmarking Graph Neural Architecture Search (2206.09166v2)

Published 18 Jun 2022 in cs.LG

Abstract: Graph neural architecture search (GraphNAS) has recently attracted considerable attention in both academia and industry. However, two key challenges hinder further research on GraphNAS. First, because there is no consensus on the experimental setting, empirical results in different papers are often not comparable or even reproducible, leading to unfair comparisons. Second, GraphNAS typically requires extensive computation, which makes it highly inefficient and inaccessible to researchers without large-scale compute resources. To address these challenges, we propose NAS-Bench-Graph, a tailored benchmark that supports unified, reproducible, and efficient evaluations for GraphNAS. Specifically, we construct a unified, expressive yet compact search space covering 26,206 unique graph neural network (GNN) architectures and propose a principled evaluation protocol. To avoid unnecessary repetitive training, we have trained and evaluated all of these architectures on nine representative graph datasets, recording detailed metrics including train, validation, and test performance at each epoch, latency, the number of parameters, and more. Based on the proposed benchmark, the performance of a GNN architecture can be obtained directly from a look-up table without any further computation, enabling fair, fully reproducible, and efficient comparisons. To demonstrate its usage, we conduct in-depth analyses of NAS-Bench-Graph, revealing several interesting findings for GraphNAS. We also showcase how the benchmark integrates easily with GraphNAS open libraries such as AutoGL and NNI. To the best of our knowledge, this is the first benchmark for graph neural architecture search.
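The look-up-table design means a search algorithm queries an architecture's pre-recorded metrics instead of training it. The sketch below illustrates that query pattern in Python; it is a minimal sketch under stated assumptions, and the file name, record fields, and helper names are hypothetical for illustration rather than the benchmark's actual API.

```python
import pickle
import random

# Hypothetical loader for one dataset's pre-computed results (the real
# NAS-Bench-Graph release ships its own reader; names here are illustrative).
def load_bench(path="nas_bench_graph_cora.pkl"):
    with open(path, "rb") as f:
        return pickle.load(f)  # assumed: dict keyed by an architecture encoding

def query(bench, arch_key):
    """Return recorded metrics for one architecture without any training.

    `arch_key` is assumed to encode the macro topology and per-layer
    operations (e.g. GCN / GAT / SAGE) as a hashable tuple.
    """
    record = bench[arch_key]
    return {
        "valid_acc": record["valid_perf"],  # assumed field names
        "test_acc": record["test_perf"],
        "latency": record["latency"],
        "params": record["params"],
    }

if __name__ == "__main__":
    # Example: a random-search baseline becomes a sequence of table look-ups.
    bench = load_bench()
    keys = list(bench.keys())
    sampled = random.sample(keys, 100)
    best = max(sampled, key=lambda k: bench[k]["valid_perf"])
    print("best sampled architecture:", best, query(bench, best))
```

Under this protocol, evaluating a candidate costs one table access, so different search strategies can be compared on equal footing and re-run exactly.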
