Self-Explainable Graph Transformer for Link Sign Prediction (2408.08754v2)
Abstract: Signed Graph Neural Networks (SGNNs) have been shown to be effective in analyzing complex patterns in real-world situations where positive and negative links coexist. However, SGNN models suffer from poor explainability, which limits their adoption in critical scenarios that require understanding the rationale behind predictions. To the best of our knowledge, there is currently no research on the explainability of SGNN models. Our goal is to make the decision-making of signed graph neural networks explainable for the downstream task of link sign prediction. Since post-hoc explanations are not derived directly from the models, they may be biased and misrepresent the true explanations. Therefore, in this paper we introduce a Self-Explainable Signed Graph transformer (SE-SGformer) framework, which outputs explainable information while ensuring high prediction accuracy. Specifically, we propose a new Transformer architecture for signed graphs and theoretically demonstrate that positional encoding based on signed random walks has greater expressive power than current SGNN methods and other positional-encoding graph Transformer approaches. We construct a novel explainable decision process by discovering the $K$-nearest (farthest) positive (negative) neighbors of a node to replace the neural network-based decoder for predicting edge signs. These $K$ positive (negative) neighbors represent crucial information about the formation of positive (negative) edges between nodes and thus serve as important explanatory information in the decision-making process. We conducted experiments on several real-world datasets to validate the effectiveness of SE-SGformer, which outperforms state-of-the-art methods, improving prediction accuracy by 2.2\% and explainability accuracy by 73.1\% in the best-case scenario.
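The abstract's neighbor-based decoder can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes node embeddings are already learned (here, plain NumPy vectors), and it classifies a candidate edge $(u, v)$ by comparing $v$'s mean distance to $u$'s $K$ nearest positively-linked neighbors against its mean distance to $u$'s negatively-linked neighbors. The function name `predict_sign` and the exact distance/aggregation choices are hypothetical.

```python
import numpy as np

def predict_sign(emb, pos_nbrs, neg_nbrs, u, v, k=3):
    """Hypothetical nearest-neighbor decoder for link sign prediction.

    emb:      dict mapping node id -> embedding vector (np.ndarray)
    pos_nbrs: dict mapping node id -> list of positively-linked neighbors
    neg_nbrs: dict mapping node id -> list of negatively-linked neighbors
    Returns +1 (positive edge) or -1 (negative edge) for candidate (u, v).
    """
    def mean_topk_dist(nbrs):
        # Mean distance from v to the k nearest neighbors in the given set;
        # an empty set contributes no evidence, modeled as infinite distance.
        if not nbrs:
            return np.inf
        dists = np.sort([np.linalg.norm(emb[v] - emb[n]) for n in nbrs])
        return dists[:k].mean()

    d_pos = mean_topk_dist(pos_nbrs.get(u, []))
    d_neg = mean_topk_dist(neg_nbrs.get(u, []))
    # If v sits closer to u's positive neighborhood, predict a positive sign;
    # the k neighbors used here double as the explanation for the decision.
    return 1 if d_pos <= d_neg else -1
```

The appeal of such a decoder, as the abstract argues, is that the $K$ neighbors driving the decision are themselves the explanation, so no post-hoc attribution step is needed.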