A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability (2204.08570v2)
Abstract: Graph Neural Networks (GNNs) have developed rapidly in recent years. Owing to their strong ability to model graph-structured data, GNNs are widely used in various applications, including high-stakes scenarios such as financial analysis, traffic prediction, and drug discovery. Despite their great potential to benefit humans in the real world, recent studies show that GNNs can leak private information, are vulnerable to adversarial attacks, can inherit and magnify societal bias from training data, and lack interpretability, all of which risk causing unintentional harm to users and society. For example, existing works demonstrate that attackers can fool a GNN into producing the outcome they desire through unnoticeable perturbations of the training graph. GNNs trained on social networks may embed discrimination in their decision process, reinforcing undesirable societal bias. Consequently, trustworthy GNNs are emerging along various dimensions to prevent harm from GNN models and increase users' trust in GNNs. In this paper, we present a comprehensive survey of GNNs from the computational perspectives of privacy, robustness, fairness, and explainability. For each aspect, we provide a taxonomy of the related methods and formulate general frameworks for the multiple categories of trustworthy GNNs. We also discuss future research directions for each aspect and the connections between these aspects that help achieve trustworthiness.
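To make the poisoning threat in the abstract concrete, the following minimal sketch (illustrative only, not code from any surveyed method; the graph, features, and weights are made up) shows how flipping a single edge, a perturbation that is nearly unnoticeable at the graph level, shifts a target node's output after one GCN-style propagation step while leaving distant nodes untouched:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN propagation step: D^{-1/2} (A + I) D^{-1/2} X W."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W

rng = np.random.default_rng(0)
n, f = 6, 4
A = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (3, 4), (4, 5)]:  # two small components
    A[i, j] = A[j, i] = 1.0
X = rng.standard_normal((n, f))   # hypothetical node features
W = rng.standard_normal((f, 2))   # hypothetical layer weights

clean = gcn_layer(A, X, W)

# Poisoning perturbation: flip a single edge between nodes 2 and 3.
A_poison = A.copy()
A_poison[2, 3] = A_poison[3, 2] = 1.0
poisoned = gcn_layer(A_poison, X, W)

# The target node's representation shifts (possibly across a decision
# boundary), while nodes far from the flipped edge are unaffected.
shift = np.abs(clean[2] - poisoned[2]).max()
```

The same locality is what makes such attacks hard to spot: only the perturbed node's neighborhood changes, so global graph statistics remain almost identical.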
- Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security (2016), pp. 308–318.
- The political blogosphere and the 2004 us election: divided they blog. In Proceedings of the 3rd international workshop on Link discovery (2005), pp. 36–43.
- Sanity checks for saliency maps. Advances in neural information processing systems 31 (2018).
- Towards a unified framework for fair and stable graph representation learning. arXiv preprint arXiv:2102.13186 (2021).
- Graph-based deep learning for medical diagnosis and analysis: past, present and future. Sensors 21, 14 (2021), 4758.
- Privacy-preserving machine learning: Threats and solutions. IEEE Security & Privacy 17, 2 (2019), 49–58.
- On the robustness of interpretability methods. arXiv preprint arXiv:1806.08049 (2018).
- Local differential privacy for deep learning. IEEE Internet of Things Journal 7, 7 (2019), 5827–5842.
- Wasserstein generative adversarial networks. In International conference on machine learning (2017), PMLR, pp. 214–223.
- Arora, S. A survey on graph neural networks for knowledge graph completion. arXiv preprint arXiv:2007.12374 (2020).
- Uci machine learning repository, 2007.
- A diagnostic study of explainability techniques for text classification. arXiv preprint arXiv:2009.13295 (2020).
- On the efficiency of the information networks in social media. In Proceedings of the Ninth ACM International Conference on Web Search and Data Mining (2016), pp. 83–92.
- On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one 10, 7 (2015), e0130140.
- How to explain individual classification decisions. The Journal of Machine Learning Research 11 (2010), 1803–1831.
- Robust Counterfactual Explanations on Graph Neural Networks. In Conference on Neural Information Processing Systems (NeurIPS) (2021), vol. 34, pp. 5644–5655.
- Explainability techniques for graph convolutional networks. arXiv preprint arXiv:1905.13686 (2019).
- dalex: Responsible machine learning with interactive explainability and fairness in python. arXiv preprint arXiv:2012.14406 (2020).
- Data decisions and theoretical implications when adversarially learning fair representations. arXiv preprint arXiv:1707.00075 (2017).
- Adversarial attacks on node embeddings via graph poisoning. In International Conference on Machine Learning (2019), pp. 695–704.
- Certifiable robustness to graph perturbations. Advances in Neural Information Processing Systems 32 (2019).
- Compositional fairness constraints for graph embeddings. In International Conference on Machine Learning (2019), PMLR, pp. 715–724.
- Machine unlearning. In 2021 IEEE Symposium on Security and Privacy (SP) (2021), IEEE, pp. 141–159.
- Debayes: a bayesian method for debiasing network embeddings. In International Conference on Machine Learning (2020), PMLR, pp. 1220–1229.
- Measuring user influence in twitter: The million follower fallacy. In Proceedings of the international AAAI conference on web and social media (2010), vol. 4.
- A restricted black-box adversarial framework towards attacking graph embedding models. In Proceedings of the AAAI Conference on Artificial Intelligence (2020), vol. 34, pp. 3389–3396.
- Fastgcn: fast learning with graph convolutional networks via importance sampling. arXiv preprint arXiv:1801.10247 (2018).
- Link prediction adversarial attack. arXiv preprint arXiv:1810.01110 (2018).
- Can adversarial network attack be defended? arXiv preprint arXiv:1903.05994 (2019).
- Fast gradient attack on network embedding. arXiv preprint arXiv:1809.02797 (2018).
- A survey of adversarial learning on graphs. arXiv preprint arXiv:2003.05730 (2020).
- Understanding structural vulnerability in graph convolutional networks. arXiv preprint arXiv:2108.06280 (2021).
- Graph unlearning. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security (2022), pp. 499–513.
- Iterative deep graph learning for graph neural networks: Better and robust node embeddings. Advances in Neural Information Processing Systems 33 (2020), 19314–19326.
- Understanding and improving graph injection attack by promoting unnoticeability. arXiv preprint arXiv:2202.08057 (2022).
- Learning Causally Invariant Representations for Out-of-Distribution Generalization on Graphs. In Conference on Neural Information Processing Systems (NeurIPS) (2022), vol. 35, pp. 22131–22148.
- Supervised community detection with line graph neural networks. arXiv preprint arXiv:1705.08415 (2017).
- Risk assessment for networked-guarantee loans using high-order graph attention representation. In IJCAI (2019), pp. 5822–5828.
- Efficient model updates for approximate unlearning of graph-structured data. In The Eleventh International Conference on Learning Representations (2022).
- Nrgnn: Learning a label noise resistant graph neural network on sparsely and noisily labeled graphs. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (2021), pp. 227–236.
- A unified framework of graph information bottleneck for robustness and membership privacy. arXiv preprint arXiv:2306.08604 (2023).
- Towards robust graph neural networks for noisy graphs with sparse labels. arXiv preprint arXiv:2201.00232 (2022).
- Unnoticeable backdoor attacks on graph neural networks. In Proceedings of the ACM Web Conference 2023 (2023), pp. 2263–2273.
- Say no to the discrimination: Learning fair graph neural networks with limited sensitive attribute information. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining (2021), pp. 680–688.
- Towards self-explainable graph neural network. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management (2021), pp. 302–311.
- Adversarial attack on graph structured data. arXiv preprint arXiv:1806.02371 (2018).
- Adversarial training methods for network embedding. In The World Wide Web Conference (2019), pp. 329–339.
- A survey of the state of explainable ai for natural language processing. arXiv preprint arXiv:2010.00711 (2020).
- Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. correlation with molecular orbital energies and hydrophobicity. Journal of medicinal chemistry 34, 2 (1991), 786–797.
- Batch virtual adversarial training for graph convolutional networks. arXiv preprint arXiv:1902.09192 (2019).
- Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).
- Individual fairness for graph neural networks: A ranking based approach. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (2021), pp. 300–310.
- Edits: Modeling and mitigating data bias for graph neural networks. arXiv preprint arXiv:2108.05233 (2021).
- Do the young live in a “smaller world” than the old? age-specific degrees of separation in a large-scale mobile communication network. arXiv preprint arXiv:1606.07556 (2016).
- Fairness in graph mining: A survey. IEEE Transactions on Knowledge and Data Engineering (2023).
- Interpreting unfairness in graph neural networks via training node attribution. In Proceedings of the AAAI Conference on Artificial Intelligence (2023), vol. 37, pp. 7441–7449.
- On structural explanation of bias in graph neural networks. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (2022), pp. 316–326.
- Quantifying privacy leakage in graph embedding. arXiv preprint arXiv:2010.00906 (2020).
- Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference (2012), pp. 214–226.
- Calibrating noise to sensitivity in private data analysis. In Theory of cryptography conference (2006), Springer, pp. 265–284.
- The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci. 9, 3-4 (2014), 211–407.
- Censoring representations with an adversary. arXiv preprint arXiv:1511.05897 (2015).
- All you need is low (rank) defending against adversarial attacks on graphs. In Proceedings of the 13th International Conference on Web Search and Data Mining (2020), pp. 169–177.
- When comparing to ground truth is wrong: On evaluating gnn explanation methods. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (2021), pp. 332–341.
- Debiasing Graph Neural Networks via Learning Disentangled Causal Substructure. In Conference on Neural Information Processing Systems (NeurIPS) (2022), vol. 35, pp. 24934–24946.
- Jointly attacking graph neural network and its explanations. arXiv preprint arXiv:2108.03388 (2021).
- Graph neural networks for social recommendation. In The World Wide Web Conference (2019), pp. 417–426.
- Graph adversarial training: Dynamically regularizing based on graph structure. arXiv preprint arXiv:1902.08226 (2019).
- Adversarial graph contrastive learning with information regularization. In Proceedings of the ACM Web Conference 2022 (2022), pp. 1362–1371.
- Hard masking for explaining graph neural networks.
- Hard masking for explaining graph neural networks, 2021.
- Zorro: Valid, sparse, and stable explanations in graph neural networks. arXiv preprint arXiv:2105.08621 (2021).
- Gnes: Learning to explain graph neural networks. In 2021 IEEE International Conference on Data Mining (ICDM) (2021), pp. 131–140.
- Applications of community detection techniques to brain graphs: Algorithmic considerations and implications for neural function. Proceedings of the IEEE 106, 5 (2018), 846–867.
- Robustness of graph neural networks at scale. Advances in Neural Information Processing Systems 34 (2021).
- Neural message passing for quantum chemistry. In International conference on machine learning (2017), PMLR, pp. 1263–1272.
- Community structure in social and biological networks. Proceedings of the national academy of sciences 99, 12 (2002), 7821–7826.
- Generative adversarial nets. Advances in neural information processing systems 27 (2014).
- Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014).
- Certified data removal from machine learning models. arXiv preprint arXiv:1911.03030 (2019).
- Learning robust representation through graph adversarial contrastive learning. In International Conference on Database Systems for Advanced Applications (2022), Springer, pp. 682–697.
- Towards fair graph neural networks via graph counterfactual. arXiv preprint arXiv:2307.04937 (2023).
- Counterfactual learning on graphs: A survey. arXiv preprint arXiv:2304.01391 (2023).
- Inductive representation learning on large graphs. In Proceedings of the 31st International Conference on Neural Information Processing Systems (2017), pp. 1025–1035.
- Equality of opportunity in supervised learning. Advances in neural information processing systems 29 (2016), 3315–3323.
- Explainable predictive business process monitoring using gated graph neural networks. Journal of Decision Systems 29, sup1 (2020), 312–327.
- The movielens datasets: History and context. Acm transactions on interactive intelligent systems (tiis) 5, 4 (2015), 1–19.
- Fairness without demographics in repeated loss minimization. In International Conference on Machine Learning (2018), PMLR, pp. 1929–1938.
- Spreadgnn: Serverless multi-task federated learning for graph neural networks. arXiv preprint arXiv:2106.02743 (2021).
- Stealing links from graph neural networks. In 30th {normal-{\{{USENIX}normal-}\}} Security Symposium ({normal-{\{{USENIX}normal-}\}} Security 21) (2021).
- Node-level membership inference attacks against graph neural networks. arXiv preprint arXiv:2102.05429 (2021).
- Strategies for pre-training graph neural networks. arXiv preprint arXiv:1905.12265 (2019).
- Gpt-gnn: Generative pre-training of graph neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (2020), pp. 1857–1867.
- Heterogeneous graph transformer. In Proceedings of The Web Conference 2020 (2020), pp. 2704–2710.
- Graphlime: Local interpretable model explanations for graph neural networks. arXiv preprint arXiv:2001.06216 (2020).
- Global Counterfactual Explainer for Graph Neural Networks. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining (2023), ACM.
- An empirical evaluation of the comprehensibility of decision table, tree and rule based predictive models. Decision Support Systems 51, 1 (2011), 141–154.
- Evaluating differentially private machine learning in practice. In 28th {normal-{\{{USENIX}normal-}\}} Security Symposium ({normal-{\{{USENIX}normal-}\}} Security 19) (2019), pp. 1895–1912.
- Differential privacy and machine learning: a survey and review. arXiv preprint arXiv:1412.7584 (2014).
- Could graph neural networks learn better molecular representation for drug discovery? a comparison study of descriptor-based and graph-based models. Journal of cheminformatics 13, 1 (2021), 1–23.
- Certified robustness of graph convolution networks for graph classification under topological attacks. Advances in Neural Information Processing Systems 33 (2020), 8463–8474.
- Power up! robust graph convolutional network against evasion attacks based on graph powering. arXiv preprint arXiv:1905.10029 (2019).
- Node similarity preserving graph convolutional networks. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining (2021), pp. 148–156.
- Adversarial attacks and defenses on graphs: A review, a tool and empirical studies. arXiv preprint arXiv:2003.00653 (2020).
- Graph structure learning for robust graph neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (2020), pp. 66–74.
- The effect of race/ethnicity on sentencing: Examining sentence type, jail length, and prison length. Journal of Ethnicity in Criminal Justice 13, 3 (2015), 179–196.
- Advances and open problems in federated learning. arXiv preprint arXiv:1912.04977 (2019).
- Inform: Individual fairness on graph mining. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (2020), pp. 379–389.
- What can we learn privately? SIAM Journal on Computing 40, 3 (2011), 793–826.
- Brainnetcnn: Convolutional neural networks for brain networks; towards predicting neurodevelopment. NeuroImage 146 (2017), 1038–1049.
- Crosswalk: Fairness-enhanced node representation learning. arXiv preprint arXiv:2105.02725 (2021).
- How to find your friendly neighborhood: Graph attention design with self-supervision. In International Conference on Learning Representations (2020).
- Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016).
- Variational graph auto-encoders. arXiv abs/1611.07308 (2016).
- Predict then propagate: Graph neural networks meet personalized pagerank. arXiv preprint arXiv:1810.05997 (2018).
- Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492 (2016).
- Fairness-aware node representation learning. arXiv preprint arXiv:2106.05391 (2021).
- Demystifying and mitigating bias for node representation learning. IEEE Transactions on Neural Networks and Learning Systems (2023).
- Too much, too little, or just right? ways explanations impact end users’ mental models. In 2013 IEEE Symposium on visual languages and human centric computing (2013), IEEE, pp. 3–10.
- Counterfactual fairness. arXiv preprint arXiv:1703.06856 (2017).
- Fairness without demographics through adversarially reweighted learning. arXiv preprint arXiv:2006.13114 (2020).
- Learning to discover social circles in ego networks. Advances in neural information processing systems 25 (2012).
- Visualizing the loss landscape of neural nets. Advances in neural information processing systems 31 (2018).
- Adversarial attack on large scale graph. IEEE Transactions on Knowledge and Data Engineering (2021).
- Adversarial privacy-preserving graph embedding against inference attack. IEEE Internet of Things Journal 8, 8 (2020), 6904–6915.
- On dyadic fairness: Exploring and mitigating bias in graph connections. In International Conference on Learning Representations (2020).
- Braingnn: Interpretable brain graph neural network for fmri analysis. Medical Image Analysis 74 (2021), 102233.
- Graph neural network-based diagnosis prediction. Big Data 8, 5 (2020), 379–390.
- Unified robust training for graph neural networks against label noise. In Pacific-Asia Conference on Knowledge Discovery and Data Mining (2021), Springer, pp. 528–540.
- Credit risk and limits forecasting in e-commerce consumer lending service via multi-view-aware mixture-of-experts nets. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining (2021), pp. 229–237.
- Learning generative adversarial representations (gap) under fairness and censoring constraints. arXiv preprint arXiv:1910.00411 (2019).
- Information obfuscation of graph neural networks. In International Conference on Machine Learning (2021), PMLR, pp. 6600–6610.
- Spectral augmentation for self-supervised learning on graphs. ICLR (2023).
- Generative causal explanations for graph neural networks. In International Conference on Machine Learning (2021), PMLR, pp. 6666–6679.
- Learning fair graph representations via automated data augmentations. In The Eleventh International Conference on Learning Representations (2022).
- Trustworthy ai: A computational perspective. arXiv preprint arXiv:2107.06641 (2021).
- Federated social recommendation with graph neural network. arXiv preprint arXiv:2111.10778 (2021).
- On the fairness of disentangled representations. arXiv preprint arXiv:1905.13662 (2019).
- Pre-training graph neural networks for link prediction in biomedical networks.
- Cf-gnnexplainer: Counterfactual explanations for graph neural networks. In International Conference on Artificial Intelligence and Statistics (2022), PMLR, pp. 4499–4511.
- Parameterized explainer for graph neural network. arXiv preprint arXiv:2011.04573 (2020).
- Learning to drop: Robust graph neural network via topological denoising. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining (2021), pp. 779–787.
- Auto-encoder based graph convolutional networks for online financial anti-fraud. In 2019 IEEE Conference on Computational Intelligence for Financial Engineering & Economics (CIFEr) (2019), IEEE, pp. 1–6.
- Towards more practical adversarial attacks on graph neural networks. Advances in neural information processing systems 33 (2020), 4756–4766.
- Learning fair node representations with graph counterfactual fairness. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining (2022), pp. 695–703.
- Graph adversarial attack via rewiring. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (2021), pp. 1161–1169.
- Learning adversarially fair and transferable representations. arXiv preprint arXiv:1802.06309 (2018).
- Bursting the filter bubble: Fairness-aware network link prediction. In Proceedings of the AAAI Conference on Artificial Intelligence (2020), vol. 34, pp. 841–848.
- Image labeling on a network: using social-network metadata for image classification. In European conference on computer vision (2012), Springer, pp. 828–841.
- Learning to discover social circles in ego networks. In NIPS (2012), vol. 2012, Citeseer, pp. 548–56.
- Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics (2017), PMLR, pp. 1273–1282.
- A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR) 54, 6 (2021), 1–35.
- Exacerbating algorithmic bias through fairness attacks. arXiv preprint arXiv:2012.08723 (2020).
- Miller, T. Explanation in artificial intelligence: Insights from the social sciences. Artificial intelligence 267 (2019), 1–38.
- Explaining explanations in ai. In Proceedings of the conference on fairness, accountability, and transparency (2019), pp. 279–288.
- Tudataset: A collection of benchmark datasets for learning with graphs. arXiv preprint arXiv:2007.08663 (2020).
- Definitions, methods, and applications in interpretable machine learning. Proceedings of the National Academy of Sciences 116, 44 (2019), 22071–22080.
- From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable ai. arXiv preprint arXiv:2201.08164 (2022).
- A dual heterogeneous graph attention network to improve long-tail performance for shop search in e-commerce. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (2020), pp. 3405–3415.
- Releasing graph neural networks with differential privacy guarantees. arXiv preprint arXiv:2109.08907 (2021).
- Membership inference attack on graph neural networks. arXiv preprint arXiv:2101.06570 (2021).
- Social data: Biases, methodological pitfalls, and ethical boundaries. Frontiers in Big Data 2 (2019), 13.
- Monet: Debiasing graph embeddings via the metadata-orthogonal training unit. arXiv preprint arXiv:1909.11793 (2019).
- Tri-party deep network representation. Network 11, 9 (2016), 12.
- Semi-supervised knowledge transfer for deep learning from private training data. arXiv preprint arXiv:1610.05755 (2016).
- Decentralized federated graph neural networks. In International Workshop on Federated and Transfer Learning for Data Sparsity and Confidentiality in Conjunction with IJCAI (2021).
- Explainability methods for graph convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2019), pp. 10772–10781.
- Gcc: Graph contrastive coding for graph neural network pre-training. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (2020), pp. 1150–1160.
- Membership inference attack against differentially private deep learning model. Trans. Data Priv. 11, 1 (2018), 61–79.
- Fairwalk: Towards fair graph embedding.
- xfraud: Explainable fraud transaction detection on heterogeneous graphs. arXiv preprint arXiv:2011.12193 (2020).
- ” why should i trust you?” explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (2016), pp. 1135–1144.
- A survey of privacy attacks in machine learning. arXiv preprint arXiv:2007.07646 (2020).
- Perturbation-based explanations of prediction models. In Human and machine learning. Springer, 2018, pp. 159–175.
- Self-supervised graph transformer on large-scale molecular data. arXiv preprint arXiv:2007.02835 (2020).
- Locally private graph neural networks. arXiv preprint arXiv:2006.05535 (2020).
- Graph neural networks for friend ranking in large-scale social platforms. In Proceedings of the Web Conference 2021 (2021), pp. 2535–2546.
- Modeling relational data with graph convolutional networks. In European semantic web conference (2018), Springer, pp. 593–607.
- Interpreting graph neural networks for nlp with differentiable edge masking. arXiv preprint arXiv:2010.00577 (2020).
- Higher-order explanations of graph neural networks via relevant walks. arXiv preprint arXiv:2006.03589 (2020).
- Layerwise relevance visualization in convolutional text graph classifiers. arXiv preprint arXiv:1909.10911 (2019).
- Collective classification in network data. AI magazine 29, 3 (2008), 93–93.
- Certifai: A common framework to provide explanations and analyse the fairness and robustness of black-box models. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (2020), pp. 166–172.
- Overlapping community detection with graph neural networks. arXiv preprint arXiv:1909.12201 (2019).
- Pitfalls of graph neural network evaluation. arXiv preprint arXiv:1811.05868 (2018).
- Model stealing attacks against inductive graph neural networks. arXiv preprint arXiv:2112.08331 (2021).
- Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP) (2017), IEEE, pp. 3–18.
- Fairness in algorithmic decision-making: Applications in multi-winner voting, machine learning, and recommender systems. Algorithms 12, 9 (2019), 199.
- Smuha, N. Ethics guidelines for trustworthy ai. In AI & Ethics, Date: 2019/05/28-2019/05/28, Location: Brussels (Digityser), Belgium (2019).
- Introduction to stochastic actor-based models for network dynamics. Social networks 32, 1 (2010), 44–60.
- Poisoning attacks on algorithmic fairness. arXiv preprint arXiv:2004.07401 (2020).
- Biased edge dropout for enhancing fairness in graph representation learning. arXiv preprint arXiv:2104.14210 (2021).
- Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806 (2014).
- Algorithmic glass ceiling in social networks: The effects of social recommendations on network diversity. In Proceedings of the 2018 World Wide Web Conference (2018), pp. 923–932.
- Adversarial attack and defense on graph data: A survey. arXiv preprint arXiv:1812.10528 (2018).
- Deep intellectual property: A survey. arXiv preprint arXiv:2304.14613 (2023).
- Adversarial attacks on graph neural networks via node injections: A hierarchical reinforcement learning approach. In Proceedings of the Web Conference 2020 (2020), pp. 673–683.
- Axiomatic attribution for deep networks. In International Conference on Machine Learning (2017), PMLR, pp. 3319–3328.
- A framework for understanding unintended consequences of machine learning.
- Deep representation learning for social network analysis. Frontiers in big Data 2 (2019), 2.
- Adversarial attack on hierarchical graph pooling neural networks. arXiv preprint arXiv:2005.11560 (2020).
- Transferring robustness for graph neural network against poisoning attacks. In Proceedings of the 13th International Conference on Web Search and Data Mining (2020), pp. 600–608.
- Investigating and mitigating degree-related biases in graph convoltuional networks. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management (2020), pp. 1435–1444.
- Single node injection attack against graph neural networks. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management (2021), pp. 1794–1803.
- Poisoning attacks on fair machine learning. arXiv preprint arXiv:2110.08932 (2021).
- Graph attention networks. arXiv preprint arXiv:1710.10903 (2017).
- Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review. arXiv (2020).
- Pgm-explainer: Probabilistic graphical model explanations for graph neural networks. arXiv preprint arXiv:2010.05788 (2020).
- Grove: Ownership verification of graph neural networks using embeddings. arXiv preprint arXiv:2304.08566 (2023).
- Privacy-preserving representation learning on graphs: A mutual information perspective. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (2021), pp. 1667–1676.
- Certified robustness of graph neural networks against adversarial structural perturbation. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (2021), pp. 1645–1653.
- Graphfl: A federated learning framework for semi-supervised node classification on graphs. arXiv preprint arXiv:2012.04187 (2020).
- Dkn: Deep knowledge-aware network for news recommendation. In Proceedings of the 2018 world wide web conference (2018), pp. 1835–1844.
- Robust unsupervised graph representation learning via mutual information maximization. arXiv preprint arXiv:2201.08557 (2022).
- Scalable attack on graph data by injecting vicious nodes. Data Mining and Knowledge Discovery 34, 5 (2020), 1363–1389.
- A review on graph neural network methods in financial applications. arXiv preprint arXiv:2111.15367 (2021).
- Learning robust representations with graph denoising policy network. In 2019 IEEE International Conference on Data Mining (ICDM) (2019), IEEE, pp. 1378–1383.
- Unbiased graph embedding with biased graph observations. arXiv preprint arXiv:2110.13957 (2021).
- Heterogeneous graph attention network. In The World Wide Web Conference (2019), pp. 2022–2032.
- Unsupervised learning for community detection in attributed networks based on graph convolutional network. Neurocomputing 456 (2021), 147–155.
- Graphdefense: Towards robust graph convolutional networks. arXiv preprint arXiv:1911.04429 (2019).
- Causal screening to interpret graph neural networks.
- Towards multi-grained explainability for graph neural networks. Advances in Neural Information Processing Systems 34 (2021).
- Improving fairness in graph neural networks via mitigating sensitive attribute leakage. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (2022), pp. 1938–1948.
- Recent advances in reliable deep graph learning: Adversarial attack, inherent noise, and distribution shift. arXiv preprint arXiv:2202.07114 (2022).
- Model extraction attacks on graph neural networks: Taxonomy and realization. arXiv preprint arXiv:2010.12751 (2020).
- Adapting membership inference attacks to gnn for graph classification: Approaches and implications. arXiv preprint arXiv:2110.08760 (2021).
- Fedgnn: Federated graph neural network for privacy-preserving recommendation. arXiv preprint arXiv:2102.04925 (2021).
- Adversarial examples for graph data: deep insights into attack and defense. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (2019), AAAI Press, pp. 4816–4823.
- Gif: A general graph unlearning strategy via influence function. In Proceedings of the ACM Web Conference 2023 (2023), pp. 651–661.
- Certified edge unlearning for graph neural networks. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (2023), pp. 2606–2617.
- Graph information bottleneck. Advances in Neural Information Processing Systems 33 (2020), 20437–20448.
- Discovering Invariant Rationales for Graph Neural Networks. In International Conference on Learning Representations (ICLR) (2022).
- Moleculenet: a benchmark for molecular machine learning. Chemical science 9, 2 (2018), 513–530.
- Graph backdoor. In 30th USENIX Security Symposium (USENIX Security 21) (2021), pp. 1523–1540.
- Learning how to propagate messages in graph neural networks. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (2021), pp. 1894–1903.
- Federated graph classification over non-iid graphs. Advances in Neural Information Processing Systems 34 (2021).
- Looking deeper into deep learning model: Attribution-based explanations of textcnn. arXiv preprint arXiv:1811.03970 (2018).
- Towards consumer loan fraud detection: Graph neural networks with role-constrained conditional random field. In Proceedings of the AAAI Conference on Artificial Intelligence (2021), vol. 35, pp. 4537–4545.
- Dpne: Differentially private network embedding. In Pacific-Asia Conference on Knowledge Discovery and Data Mining (2018), Springer, pp. 235–246.
- Watermarking graph neural networks based on backdoor attacks. In 2023 IEEE 8th European Symposium on Security and Privacy (EuroS&P) (2023), IEEE, pp. 1179–1197.
- More is better (mostly): On the backdoor attacks in federated graph neural networks. arXiv preprint arXiv:2202.03195 (2022).
- Topology attack and defense for graph neural networks: An optimization perspective. arXiv preprint arXiv:1906.04214 (2019).
- Graph-based prediction of protein-protein interactions with attributed signed graph embedding. BMC bioinformatics 21, 1 (2020), 1–16.
- Local differential privacy and its applications: A comprehensive survey. arXiv preprint arXiv:2008.03686 (2020).
- Federated machine learning: Concept and applications. ACM Transactions on Intelligent Systems and Technology (TIST) 10, 2 (2019), 1–19.
- Learn to explain efficiently via neural logic inductive learning. In International Conference on Learning Representations (2019).
- Gnnexplainer: Generating explanations for graph neural networks. Advances in Neural Information Processing Systems 32 (2019), 9240.
- Graph convolutional neural networks for web-scale recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (2018), pp. 974–983.
- Graph contrastive learning with augmentations. Advances in Neural Information Processing Systems 33 (2020), 5812–5823.
- Xgnn: Towards model-level explanations of graph neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (2020), pp. 430–438.
- Explainability in graph neural networks: A taxonomic survey. arXiv preprint arXiv:2012.15445 (2020).
- On explainability of graph neural networks via subgraph explorations. arXiv preprint arXiv:2102.05152 (2021).
- Fair representation learning for heterogeneous information networks. arXiv preprint arXiv:2104.08769 (2021).
- Trustworthy graph neural networks: Aspects, methods and trends. arXiv preprint arXiv:2205.07424 (2022).
- Data poisoning attack against knowledge graph embedding. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (2019), AAAI Press, pp. 4853–4859.
- Adversarial label-flipping attack and defense for graph neural networks. In 2020 IEEE International Conference on Data Mining (ICDM) (2020), IEEE, pp. 791–800.
- Robust heterogeneous graph neural networks against adversarial attacks.
- Visual interpretability for deep learning: a survey. Frontiers of Information Technology & Electronic Engineering 19, 1 (2018), 27–39.
- Graph embedding matrix sharing with differential privacy. IEEE Access 7 (2019), 89390–89399.
- Graph embedding for recommendation against attribute inference attacks. In Proceedings of the Web Conference 2021 (2021), pp. 3002–3014.
- Attributed graph clustering via adaptive graph convolution. arXiv preprint arXiv:1906.01210 (2019).
- Gnnguard: Defending graph neural networks against adversarial attacks. Advances in Neural Information Processing Systems 33 (2020), 9263–9275.
- Relex: A model-agnostic relational model explainer. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (2021), pp. 1042–1049.
- Inference attacks against graph neural networks. arXiv preprint arXiv:2110.02631 (2021).
- Backdoor attacks to graph neural networks. In Proceedings of the 26th ACM Symposium on Access Control Models and Technologies (2021), pp. 15–26.
- Graphmi: Extracting private graph data from graph neural networks. arXiv preprint arXiv:2106.02820 (2021).
- Protgnn: Towards self-explaining graph neural networks (2021).
- You can still achieve fairness without sensitive attributes: Exploring biases in non-sensitive features. arXiv preprint arXiv:2104.14537 (2021).
- Graphsmote: Imbalanced node classification on graphs with graph neural networks. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining (2021), pp. 833–841.
- Exploring edge disentanglement for node classification. In Proceedings of the ACM Web Conference 2022 (2022).
- Watermarking graph neural networks by random graphs. In 2021 9th International Symposium on Digital Forensics and Security (ISDFS) (2021), IEEE, pp. 1–6.
- Asfgnn: Automated separated-federated graph neural network. Peer-to-Peer Networking and Applications 14, 3 (2021), 1692–1704.
- Vertically federated graph neural network for privacy-preserving node classification. arXiv preprint arXiv:2005.11903 (2020).
- Graph neural networks: A review of methods and applications. AI Open 1 (2020), 57–81.
- Robust graph convolutional networks against adversarial attacks.
- Fairness-aware message passing for graph neural networks. arXiv preprint arXiv:2306.11132 (2023).
- Deep graph structure learning for robust representations: A survey. arXiv preprint arXiv:2103.03036 (2021).
- Tdgia: Effective injection attacks on graph neural networks. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (2021), pp. 2461–2471.
- Adversarial attacks on neural networks for graph data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (2018), ACM, pp. 2847–2856.
- Adversarial attacks on graph neural networks via meta learning. arXiv preprint arXiv:1902.08412 (2019).
- Certifiable robustness and robust training for graph convolutional networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (2019), pp. 246–256.
- Certifiable robustness of graph convolutional networks under structure perturbations. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (2020), pp. 1656–1665.