On the Power of Graph Neural Networks and Feature Augmentation Strategies to Classify Social Networks (2401.06048v1)
Abstract: This paper studies four Graph Neural Network (GNN) architectures for a graph classification task on a synthetic dataset created with classic generative models of Network Science. Since the synthetic networks contain no node or edge features, five different augmentation strategies (artificial feature types) are applied to the nodes. All combinations of the 4 GNNs (GCN with Hierarchical and Global aggregation, GIN and GATv2) and the 5 feature types (constant 1, noise, degree, normalized degree and ID -- a vector of the number of cycles of various lengths) are studied, and their performances are compared as a function of the hidden dimension of the artificial neural networks used in the GNNs. The generalisation ability of these models is also analysed on a second synthetic network dataset containing networks of different sizes. Our results point towards the balanced importance of the computational power of the GNN architecture and the information level provided by the artificial features. GNN architectures with higher computational power, like GIN and GATv2, perform well for most augmentation strategies. On the other hand, artificial features with higher information content, like ID or degree, not only consistently outperform the other augmentation strategies, but can also help GNN architectures with lower computational power achieve good performance.
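To make the five augmentation strategies concrete, the following is a minimal sketch (not the authors' code) of how such artificial node features could be computed for a featureless graph using NetworkX and NumPy. The function name `augment_features` is illustrative, and diag(A^k) (the number of closed walks of length k through each node) is used only as a simple stand-in for the paper's cycle-count ID vectors, whose exact definition may differ.

```python
import networkx as nx
import numpy as np

def augment_features(G: nx.Graph, strategy: str, max_cycle_len: int = 6) -> np.ndarray:
    """Return an (n_nodes, d) artificial feature matrix for a featureless graph."""
    nodes = list(G.nodes())
    n = len(nodes)
    degrees = np.array([G.degree(v) for v in nodes], dtype=float)

    if strategy == "constant":            # constant 1 for every node
        return np.ones((n, 1))
    if strategy == "noise":               # i.i.d. Gaussian noise
        return np.random.randn(n, 1)
    if strategy == "degree":              # raw node degree
        return degrees.reshape(-1, 1)
    if strategy == "normalized_degree":   # degree divided by the maximum degree
        return (degrees / max(degrees.max(), 1.0)).reshape(-1, 1)
    if strategy == "id":
        # Per-node counts of closed walks of length 3..max_cycle_len, read off
        # diag(A^k); a simple proxy for the cycle-count ID vector of the paper.
        A = nx.to_numpy_array(G, nodelist=nodes)
        Ak = A @ A                        # A^2
        feats = []
        for _ in range(3, max_cycle_len + 1):
            Ak = Ak @ A                   # A^3, A^4, ..., A^max_cycle_len
            feats.append(np.diag(Ak))
        return np.stack(feats, axis=1)
    raise ValueError(f"unknown strategy: {strategy}")

# Example: an Erdős–Rényi graph augmented with degree features.
G = nx.erdos_renyi_graph(n=50, p=0.1, seed=0)
X = augment_features(G, "degree")         # X.shape == (50, 1)
```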
- Réka Albert and Albert-László Barabási “Statistical mechanics of complex networks” In Reviews of Modern Physics 74.1 APS, 2002, pp. 47–97
- Shaked Brody, Uri Alon and Eran Yahav “How Attentive are Graph Attention Networks?” In International Conference on Learning Representations, 2022
- David Duvenaud et al. “Convolutional networks on graphs for learning molecular fingerprints” In Advances in Neural Information Processing Systems 28, 2015
- Paul Erdős and Alfréd Rényi “On random graphs I” In Publicationes Mathematicae Debrecen 6, 1959, pp. 290–297
- Federico Errica, Marco Podda, Davide Bacciu and Alessio Micheli “A Fair Comparison of Graph Neural Networks for Graph Classification” In International Conference on Learning Representations, 2020
- Hongyang Gao and Shuiwang Ji “Graph U-Nets” In International Conference on Machine Learning, 2019, pp. 2083–2092 PMLR
- “Benchmarks for Graph Embedding Evaluation” In arXiv preprint arXiv:1908.06543, 2019
- “AHP: Learning to negative sample for hyperedge prediction” In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2022, pp. 2237–2242
- Thomas N. Kipf and Max Welling “Semi-Supervised Classification with Graph Convolutional Networks” In International Conference on Learning Representations, 2017
- Junhyun Lee, Inyeop Lee and Jaewoo Kang “Self-attention graph pooling” In International Conference on Machine Learning, 2019, pp. 3734–3743 PMLR
- Yao Ma, Suhang Wang, Charu C. Aggarwal and Jiliang Tang “Graph convolutional networks with eigenpooling” In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019, pp. 723–731
- Mark E. J. Newman and Duncan J. Watts “Scaling and percolation in the small-world network model” In Physical Review E 60.6 APS, 1999, pp. 7332
- Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò and Yoshua Bengio “Graph Attention Networks” In International Conference on Learning Representations, 2018
- Keyulu Xu, Weihua Hu, Jure Leskovec and Stefanie Jegelka “How Powerful are Graph Neural Networks?” In International Conference on Learning Representations, 2019
- Jiaxuan You, Jonathan Gomes-Selman, Rex Ying and Jure Leskovec “Identity-aware graph neural networks” In Proceedings of the AAAI Conference on Artificial Intelligence 35.12, 2021, pp. 10737–10745
- Muhan Zhang, Zhicheng Cui, Marion Neumann and Yixin Chen “An end-to-end deep learning architecture for graph classification” In Proceedings of the AAAI Conference on Artificial Intelligence 32.1, 2018
- Yulun Zhang et al. “Image super-resolution using very deep residual channel attention networks” In Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 286–301