A Unified Framework of Graph Information Bottleneck for Robustness and Membership Privacy (2306.08604v1)
Abstract: Graph Neural Networks (GNNs) have achieved great success in modeling graph-structured data. However, recent works show that GNNs are vulnerable to adversarial attacks, which can fool a GNN model into making the predictions the attacker desires. In addition, the training data of GNNs can be leaked under membership inference attacks. This largely hinders the adoption of GNNs in high-stakes domains such as e-commerce, finance, and bioinformatics. Though efforts have been made toward robust predictions and membership privacy protection, they generally fail to consider robustness and membership privacy simultaneously. Therefore, in this work, we study the novel problem of developing robust and membership-privacy-preserving GNNs. Our analysis shows that the Information Bottleneck (IB) can help filter out noisy information and regularize the predictions on labeled samples, which benefits both robustness and membership privacy. However, structural noise and the lack of labels in node classification challenge the deployment of IB on graph-structured data. To mitigate these issues, we propose a novel graph information bottleneck framework that alleviates structural noise with a neighbor bottleneck. Pseudo labels are also incorporated in the optimization to minimize the gap between the predictions on the labeled and unlabeled sets, which benefits membership privacy. Extensive experiments on real-world datasets demonstrate that our method gives robust predictions while simultaneously preserving membership privacy.
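To make the objective concrete, below is a minimal training-loss sketch (not the authors' released code), assuming a variational bottleneck on node representations with Gaussian posterior parameters `mu`/`logvar`, a Gumbel-Softmax edge mask as one plausible way to instantiate the neighbor bottleneck, and a pseudo-label cross-entropy on unlabeled nodes. The function names and the trade-off weights `beta`/`gamma` are hypothetical.

```python
import torch
import torch.nn.functional as F

def neighbor_bottleneck(edge_scores, tau=0.5):
    """Sketch of a differentiable neighbor bottleneck.

    Samples a binary keep/drop mask per edge via Gumbel-Softmax so that
    noisy (e.g., adversarially inserted) neighbors can be pruned while
    gradients still flow to the edge-scoring network.
    """
    # Two logits per edge: keep vs. drop.
    logits = torch.stack([edge_scores, -edge_scores], dim=-1)
    mask = F.gumbel_softmax(logits, tau=tau, hard=True)[..., 0]
    return mask

def gib_loss(logits_labeled, y_labeled,
             logits_unlabeled, pseudo_labels,
             mu, logvar,
             beta=0.01, gamma=0.5):
    """Sketch of a robustness- and privacy-oriented IB objective.

    - Classification term: cross-entropy on labeled nodes.
    - Bottleneck term: KL(q(z|x) || N(0, I)) compresses node
      representations, filtering out noisy information.
    - Pseudo-label term: cross-entropy on unlabeled nodes against
      pseudo labels, shrinking the labeled/unlabeled prediction gap
      that membership inference attacks exploit.
    """
    ce_labeled = F.cross_entropy(logits_labeled, y_labeled)
    # KL divergence of a diagonal Gaussian posterior to a standard
    # Gaussian prior (the variational bottleneck term).
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    ce_pseudo = F.cross_entropy(logits_unlabeled, pseudo_labels)
    return ce_labeled + beta * kl + gamma * ce_pseudo
```

The design point this sketch illustrates: the KL term compresses representations so attack-injected signal is filtered, while the pseudo-label term pulls predictions on unlabeled nodes toward the same regime as labeled nodes, reducing the confidence gap that membership inference attacks rely on.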
Authors: Enyan Dai, Limeng Cui, Zhengyang Wang, Xianfeng Tang, Yinghan Wang, Monica Cheng, Bing Yin, Suhang Wang