Simple GNNs with Low Rank Non-parametric Aggregators (2310.05250v2)
Abstract: We revisit recent spectral GNN approaches to semi-supervised node classification (SSNC). We posit that state-of-the-art (SOTA) GNN architectures may be over-engineered for common SSNC benchmark datasets (citation networks, page-page networks, etc.). By replacing feature aggregation with a non-parametric learner, we are able to streamline the GNN design process and avoid many of the engineering complexities associated with SOTA hyperparameter selection (GNN depth, non-linearity choice, feature dropout probability, etc.). Our empirical experiments suggest that conventional methods such as non-parametric regression are well suited to semi-supervised learning on sparse, directed networks and on a variety of other graph types commonly found in SSNC benchmarks. Additionally, we draw attention to recent changes in evaluation conventions for SSNC benchmarking and how these may have partially contributed to rising reported performance over time.
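The core idea in the abstract, propagating node features over the graph and then fitting a non-parametric learner (rather than a deep parametric GNN) on the labeled nodes, can be sketched on a toy graph. This is an illustrative approximation, not the paper's exact method: the two-community graph, the single GCN-style propagation step, and the RBF kernel ridge regression with its `gamma` and `lam` values are all assumptions chosen for the sketch.

```python
import numpy as np

# Hypothetical toy graph: 6 nodes, two communities {0,1,2} and {3,4,5},
# joined by a single bridge edge (2,3). Not from the paper.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
X = np.array([[1.0], [0.9], [0.8], [0.2], [0.1], [0.0]])  # 1-d node features

# Non-parametric-in-the-aggregator spirit: one fixed, GCN-style propagation
# A_hat = D^{-1/2} (A + I) D^{-1/2}, with no learned weights.
A_tilde = A + np.eye(6)
d = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))
Z = A_hat @ X  # smoothed node representations

# Non-parametric learner: kernel ridge regression (RBF kernel), fit only on
# the two labeled nodes, then evaluated on every node.
labeled = np.array([0, 4])       # indices with known labels (assumed split)
y = np.array([1.0, -1.0])        # +1 / -1 class codes
gamma, lam = 1.0, 1e-3           # kernel width and ridge strength (assumed)

def rbf(a, b):
    # Pairwise RBF kernel between two sets of 1-d representations.
    return np.exp(-gamma * (a[:, None, 0] - b[None, :, 0]) ** 2)

K = rbf(Z[labeled], Z[labeled])
alpha = np.linalg.solve(K + lam * np.eye(len(labeled)), y)
scores = rbf(Z, Z[labeled]) @ alpha   # kernel predictions for all 6 nodes
pred = np.where(scores > 0, 1, -1)
print(pred)  # community structure is recovered from just two labels
```

The point of the sketch is that once neighborhood aggregation has smoothed the features, a classical learner with essentially two hyperparameters can separate the classes, with no depth, dropout, or non-linearity choices to tune.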