Conditional Shift-Robust Conformal Prediction for Graph Neural Network (2405.11968v2)
Abstract: Graph Neural Networks (GNNs) have emerged as potent tools for predicting outcomes in graph-structured data. Despite their efficacy, a significant drawback of GNNs lies in their limited ability to provide robust uncertainty estimates, which undermines their reliability in contexts where errors carry significant consequences. Moreover, GNNs typically excel in in-distribution settings, assuming that training and test data follow identical distributions, a condition often unmet in real-world graph data. In this article, we leverage conformal prediction, a widely recognized statistical technique for quantifying uncertainty by transforming predictive model outputs into prediction sets, to address uncertainty quantification in GNN predictions amid conditional shift\footnote{Representing the change in the conditional probability distribution $P(\text{label} \mid \text{input})$ from the source domain to the target domain.} in graph-based semi-supervised learning (SSL). Additionally, we propose a novel loss function that refines model predictions by minimizing conditional shift in the latent representations. Termed Conditional Shift-Robust (CondSR) conformal prediction for GNNs, our approach is model-agnostic and adaptable to various classification models. We validate its effectiveness on standard graph benchmark datasets, integrating it with state-of-the-art GNNs in node classification tasks. Comprehensive evaluations demonstrate that our approach consistently attains any predefined target marginal coverage, improves the accuracy of state-of-the-art GNN models by up to 12\% under conditional shift, and reduces the prediction set size by up to 48\%. The code implementation is publicly available for further exploration and experimentation.
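To make the prediction-set construction concrete, the sketch below shows plain split conformal prediction for node classification; it illustrates the generic technique the abstract builds on, not the CondSR procedure itself. The function name, the calibration/test arrays, and the choice of nonconformity score (one minus the softmax probability of the true class) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction: turn softmax outputs into prediction sets.

    cal_probs:  (n, K) softmax scores on a held-out calibration set of nodes
    cal_labels: (n,)   true labels for the calibration nodes
    test_probs: (m, K) softmax scores on test nodes
    alpha:      miscoverage level; target marginal coverage is 1 - alpha
    """
    n = len(cal_labels)
    # Nonconformity score: 1 - softmax probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of the calibration scores.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q_hat = np.quantile(scores, q_level, method="higher")
    # A class y enters the set when its score 1 - p_y is at most q_hat.
    return test_probs >= 1.0 - q_hat  # boolean (m, K) set-membership matrix
```

When calibration and test nodes are exchangeable, sets built this way contain the true label with probability at least 1 - alpha marginally; the abstract's claim is that CondSR preserves such target coverage while shrinking the sets even when $P(\text{label} \mid \text{input})$ shifts between source and target domains.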