A backdoor attack against link prediction tasks with graph neural networks (2401.02663v1)

Published 5 Jan 2024 in cs.LG, cs.AI, and cs.CR

Abstract: Graph Neural Networks (GNNs) are a class of deep learning models capable of processing graph-structured data, and they have demonstrated significant performance in a variety of real-world applications. Recent studies have found that GNN models are vulnerable to backdoor attacks. When specific patterns (called backdoor triggers, e.g., subgraphs, nodes, etc.) appear in the input data, the backdoor embedded in the GNN model is activated and the input is misclassified into the target class label specified by the attacker; when no backdoor trigger is present, the backdoor remains dormant and the model works normally. Backdoor attacks are highly stealthy and expose GNN models to serious security risks. Current research on backdoor attacks against GNNs mainly focuses on tasks such as graph classification and node classification, while backdoor attacks against link prediction tasks are rarely studied. In this paper, we propose a backdoor attack against GNN-based link prediction tasks and reveal the existence of this security vulnerability in GNN models: the backdoored GNN models incorrectly predict two unlinked nodes as having a link when a trigger appears. The method uses a single node as the trigger and poisons selected node pairs in the training graph, so that the backdoor is embedded in the GNN model through the training process. In the inference stage, the backdoor can be activated by simply linking the trigger node to the two end nodes of an unlinked node pair in the input data, causing the GNN model to produce an incorrect link prediction for the target node pair.
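The poisoning step described in the abstract can be illustrated with a short sketch. This is a minimal, hypothetical example assuming an adjacency-matrix representation of the graph; the function name `poison_graph` and its arguments are illustrative and do not reflect the authors' actual implementation.

```python
import numpy as np

def poison_graph(adj, features, target_pairs, trigger_feat):
    """Illustrative sketch of single-node trigger poisoning (hypothetical API).

    adj:          (n, n) adjacency matrix of the clean training graph
    features:     (n, d) node feature matrix
    target_pairs: unlinked node pairs (u, v) selected for poisoning
    trigger_feat: (d,) feature vector of the injected trigger node
    """
    n = adj.shape[0]
    # Append the single trigger node to the graph (initially isolated).
    adj = np.pad(adj, ((0, 1), (0, 1)), mode="constant")
    features = np.vstack([features, trigger_feat])
    trigger = n  # index of the injected trigger node

    poisoned_labels = {}
    for u, v in target_pairs:
        # Connect the trigger node to both end nodes of the unlinked pair.
        adj[trigger, u] = adj[u, trigger] = 1
        adj[trigger, v] = adj[v, trigger] = 1
        # Poisoned supervision: train the model to predict (u, v) as linked.
        poisoned_labels[(u, v)] = 1
    return adj, features, poisoned_labels
```

At inference time, the same pattern activates the backdoor: the attacker links the trigger node to the two end nodes of a target unlinked pair before querying the backdoored model, which then predicts the pair as linked.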

Authors (2)
  1. Jiazhu Dai (11 papers)
  2. Haoyu Sun (15 papers)
Citations (1)
