
Link Stealing Attacks Against Inductive Graph Neural Networks (2405.05784v1)

Published 9 May 2024 in cs.CR and cs.LG

Abstract: A graph neural network (GNN) is a type of neural network specifically designed to process graph-structured data. GNNs are typically deployed in one of two settings: transductive or inductive. In the transductive setting, the trained model can only predict the labels of nodes observed at training time. In the inductive setting, the trained model generalizes to new nodes and graphs, and this flexibility has made it the most popular GNN setting at the moment. Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks, but a comprehensive privacy analysis of inductive GNN models is still missing. This paper fills the gap by conducting a systematic privacy analysis of inductive GNNs through the lens of link stealing attacks, one of the most popular attacks specifically designed for GNNs. We propose two types of link stealing attacks: posterior-only attacks and combined attacks. We define threat models of the posterior-only attacks with respect to node topology, and of the combined attacks by considering combinations of posteriors, node attributes, and graph features. Extensive evaluation on six real-world datasets demonstrates that inductive GNNs leak rich information that enables link stealing attacks with advantageous properties. Even attacks with no knowledge of the graph structure can be effective. We also show that our attacks are robust to different node similarities and different graph features. Finally, we investigate two possible defenses and find that they are ineffective against our attacks, which calls for more effective defenses.
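To make the posterior-only threat model concrete, below is a minimal sketch of how such an attack can operate: the adversary queries the target GNN for the posteriors of two nodes, builds order-invariant similarity features from them, and feeds those features to a binary attack classifier trained on shadow data with known edges. This is an illustrative assumption of the general attack pattern, not the authors' implementation; the synthetic shadow posteriors and the logistic-regression classifier are placeholders for whatever shadow graph and attack model are actually used.

```python
# Hedged sketch of a posterior-only link stealing attack against a GNN.
# All data here is synthetic; in a real attack the posteriors would come
# from black-box queries to the target model and a shadow GNN.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(p_u, p_v):
    """Build symmetric features from the posteriors of two nodes.

    The attack model should be invariant to node order, so we use
    element-wise distances rather than a raw concatenation.
    """
    return np.concatenate([
        np.abs(p_u - p_v),        # per-class absolute difference
        (p_u - p_v) ** 2,         # per-class squared difference
        [np.dot(p_u, p_v)],       # inner-product similarity term
    ])

# Hypothetical shadow data: posteriors from a shadow GNN the attacker
# trained on a graph whose edges are known, used to supervise the attack.
rng = np.random.default_rng(0)
n_pairs, n_classes = 1000, 6
shadow_pos = rng.dirichlet(np.ones(n_classes), size=(n_pairs, 2))  # connected pairs
shadow_neg = rng.dirichlet(np.ones(n_classes), size=(n_pairs, 2))  # unconnected pairs
# Connected nodes tend to receive similar posteriors; we mix the positive
# pairs here only so the toy example has that signal to learn from.
shadow_pos[:, 1] = 0.5 * shadow_pos[:, 1] + 0.5 * shadow_pos[:, 0]

X = np.array([pair_features(u, v)
              for u, v in np.concatenate([shadow_pos, shadow_neg])])
y = np.array([1] * n_pairs + [0] * n_pairs)
attack_model = LogisticRegression(max_iter=1000).fit(X, y)

# Attack time: query the target GNN for two nodes' posteriors and predict
# whether an edge connects them.
p_u, p_v = rng.dirichlet(np.ones(n_classes), size=2)
edge_prob = attack_model.predict_proba(pair_features(p_u, p_v)[None, :])[0, 1]
print(f"Predicted probability that (u, v) is an edge: {edge_prob:.3f}")
```

The combined attacks described in the abstract extend the same pipeline by concatenating node-attribute similarities and graph features onto the posterior-derived features before training the attack classifier.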

Authors (7)
  1. Yixin Wu (18 papers)
  2. Xinlei He (58 papers)
  3. Pascal Berrang (10 papers)
  4. Mathias Humbert (19 papers)
  5. Michael Backes (157 papers)
  6. Neil Zhenqiang Gong (117 papers)
  7. Yang Zhang (1129 papers)
Citations (1)