Robust Stochastic Graph Generator for Counterfactual Explanations (2312.11747v2)
Abstract: Counterfactual Explanation (CE) techniques have garnered attention as a means to provide insights to users engaging with AI systems. While extensively researched in domains such as medical imaging and autonomous vehicles, Graph Counterfactual Explanation (GCE) methods have been comparatively under-explored. GCEs generate a new graph similar to the original one, with a different outcome grounded in the underlying predictive model. Among these GCE techniques, those rooted in generative mechanisms have received relatively limited investigation despite demonstrating impressive accomplishments in other domains, such as artistic styles and natural language modelling. The preference for generative explainers stems from their capacity to generate counterfactual instances during inference, leveraging autonomously acquired perturbations of the input graph. Motivated by the rationales above, our study introduces RSGG-CE, a novel Robust Stochastic Graph Generator for Counterfactual Explanations able to produce counterfactual examples from the learned latent space considering a partially ordered generation sequence. Furthermore, we undertake quantitative and qualitative analyses to compare RSGG-CE's performance against SoA generative explainers, highlighting its increased ability to engender plausible counterfactual candidates.
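To make the general mechanism concrete, the sketch below illustrates one plausible reading of a generative graph counterfactual explainer: an autoencoder learns edge probabilities in a latent space, and candidate counterfactuals are built by flipping edges in an order derived from the generator's confidence until a black-box classifier changes its prediction. This is a minimal, hypothetical sketch of the idea, not the RSGG-CE implementation; the names `GraphAutoEncoder`, `explain`, and `oracle` are assumptions introduced here for illustration.

```python
# Illustrative sketch (NOT the authors' method): a generative explainer that
# ranks edge edits by the generator's confidence and greedily searches for a
# graph that flips the black-box classifier's prediction.
import torch
import torch.nn as nn


class GraphAutoEncoder(nn.Module):
    """Toy dense autoencoder over adjacency matrices (hypothetical)."""

    def __init__(self, num_nodes: int, latent_dim: int = 16):
        super().__init__()
        self.num_nodes = num_nodes
        self.encoder = nn.Linear(num_nodes * num_nodes, latent_dim)
        self.decoder = nn.Linear(latent_dim, num_nodes * num_nodes)

    def forward(self, adj: torch.Tensor) -> torch.Tensor:
        z = torch.relu(self.encoder(adj.flatten()))          # latent code
        logits = self.decoder(z).view(self.num_nodes, self.num_nodes)
        return torch.sigmoid(logits)                          # edge probabilities


def explain(adj: torch.Tensor, oracle, generator: GraphAutoEncoder,
            max_flips: int = 10) -> torch.Tensor:
    """Greedy counterfactual search over edges ranked by generator confidence."""
    original_label = oracle(adj)
    with torch.no_grad():
        probs = generator(adj)
    # Edges where the generator disagrees most with the input come first;
    # this ordering plays the role of a (partial) order over edits.
    disagreement = (probs - adj).abs().flatten()
    order = torch.argsort(disagreement, descending=True)
    candidate = adj.clone()
    for idx in order[:max_flips]:
        i, j = divmod(idx.item(), generator.num_nodes)
        candidate[i, j] = 1.0 - candidate[i, j]               # flip the edge
        candidate[j, i] = candidate[i, j]                     # keep it undirected
        if oracle(candidate) != original_label:               # valid counterfactual
            return candidate
    return candidate                                          # best effort otherwise


if __name__ == "__main__":
    # Toy demo with an untrained generator and a trivial black-box oracle.
    n = 6
    adj = (torch.rand(n, n) > 0.5).float().triu(1)
    adj = adj + adj.T
    oracle = lambda a: int(a.sum() > n)                       # hypothetical classifier
    cf = explain(adj, oracle, GraphAutoEncoder(n))
    print("edges changed:", int((cf != adj).sum().item()) // 2)
```

Ranking candidate edits by the generator's confidence is one simple way to obtain a partially ordered generation sequence; a trained generator would concentrate the early flips on edits it considers most plausible, which is what makes the resulting counterfactuals more realistic than random perturbations.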