Do Not Train It: A Linear Neural Architecture Search of Graph Neural Networks (2305.14065v3)

Published 23 May 2023 in cs.LG and cs.AI

Abstract: Neural architecture search (NAS) for graph neural networks (GNNs), known as NAS-GNNs, has outperformed manually designed GNN architectures. However, these methods inherit issues from conventional NAS, such as high computational cost and optimization difficulty. More importantly, previous NAS methods have overlooked a property unique to GNNs: they possess expressive power even without training. Starting from randomly initialized weights, we can therefore seek the optimal architecture parameters via a sparse coding objective, yielding a novel NAS-GNN method called neural architecture coding (NAC). Consequently, NAC requires no weight updates on the GNN and runs efficiently in linear time. Empirical evaluations on multiple GNN benchmark datasets demonstrate that our approach achieves state-of-the-art performance, up to $200\times$ faster and $18.8\%$ more accurate than strong baselines.
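
To make the abstract's core idea concrete, here is a minimal sketch in PyTorch: the GNN weights stay at their random initialization ("do not train it"), and only the architecture coefficients are optimized under an L1-penalized, sparse-coding-style objective. The toy graph, the three candidate operations, and all hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of NAC-style search: frozen random GNN weights, trainable
# architecture coefficients with a sparsity (L1) penalty.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy graph: N nodes, D input features, H hidden units, C classes.
N, D, H, C = 100, 16, 32, 4
X = torch.randn(N, D)
y = torch.randint(0, C, (N,))

# Random symmetric adjacency with self-loops, symmetrically normalized.
A = (torch.rand(N, N) < 0.05).float()
A = ((A + A.T + torch.eye(N)) > 0).float()
d = A.sum(1).pow(-0.5)
A_hat = d[:, None] * A * d[None, :]  # D^{-1/2} (A + I) D^{-1/2}

# Randomly initialized, frozen weights: one per candidate op, plus a
# shared output head. None of these are ever updated.
W = [torch.randn(D, H) for _ in range(3)]
W_out = torch.randn(H, C)

def candidate_ops(X):
    """Three illustrative candidate aggregators built on frozen weights."""
    return [
        F.relu(X @ W[0]),                      # MLP-style: no propagation
        F.relu(A_hat @ (X @ W[1])),            # 1-hop GCN-style propagation
        F.relu(A_hat @ (A_hat @ (X @ W[2]))),  # 2-hop SGC-style propagation
    ]

# Architecture coefficients: the only trainable parameters.
alpha = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([alpha], lr=0.1)
lam = 1e-2  # strength of the L1 (sparsity) penalty

for step in range(200):
    opt.zero_grad()
    mix = sum(a * op for a, op in zip(alpha, candidate_ops(X)))
    loss = F.cross_entropy(mix @ W_out, y) + lam * alpha.abs().sum()
    loss.backward()
    opt.step()

# The dominant coefficient indicates the selected operation.
print("selected op:", alpha.abs().argmax().item())
```

Because the weights are never updated, each search step costs only a forward and backward pass over the fixed candidate features, which is consistent with the no-update, linear-time behavior the abstract claims.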
