You do not have to train Graph Neural Networks at all on text-attributed graphs (2404.11019v1)
Abstract: Graph-structured data, specifically text-attributed graphs (TAGs), effectively represent relationships among varied entities. Such graphs are essential for semi-supervised node classification tasks. Graph Neural Networks (GNNs) have emerged as a powerful tool for handling this graph-structured data. Although gradient descent is the standard way to train GNNs for node classification, this study explores alternative methods that eliminate iterative optimization entirely. We introduce TrainlessGNN, a linear GNN model that capitalizes on the observation that text encodings from the same class often cluster together in a linear subspace. The model constructs a weight matrix representing each class's node-attribute subspace, offering an efficient approach to semi-supervised node classification on TAGs. Extensive experiments reveal that our trainless models can match or even surpass their conventionally trained counterparts, demonstrating that gradient descent can be dispensed with in certain configurations.
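To make the core idea concrete, here is a minimal sketch of a trainless linear classifier in the spirit described above: each row of the weight matrix is built directly from the labeled features of one class (here, by averaging and normalizing), and prediction is a single matrix product. The averaging construction and function names are assumptions for illustration; the paper's exact weight-matrix recipe (and its graph-propagation step) may differ.

```python
import numpy as np

def trainless_weight_matrix(features, train_idx, train_labels, num_classes):
    """Build a weight matrix without gradient descent: row c is the
    (L2-normalized) mean of the labeled node features of class c.
    This mean-based construction is an illustrative assumption, not
    necessarily the paper's exact formulation."""
    dim = features.shape[1]
    W = np.zeros((num_classes, dim))
    for c in range(num_classes):
        members = features[train_idx[train_labels == c]]
        if len(members) > 0:
            W[c] = members.mean(axis=0)
    # Normalize rows so classification reduces to cosine-style similarity.
    W /= np.linalg.norm(W, axis=1, keepdims=True) + 1e-12
    return W

def predict(features, W):
    """Assign each node to the class whose subspace row it aligns with best."""
    return (features @ W.T).argmax(axis=1)
```

In a full TAG pipeline, `features` would be text encodings of the nodes, optionally smoothed by neighborhood aggregation (e.g. multiplying by a normalized adjacency matrix) before the weight matrix is built, since TrainlessGNN is a linear GNN.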