A Clean-graph Backdoor Attack against Graph Convolutional Networks with Poisoned Label Only (2404.12704v1)
Abstract: Graph Convolutional Networks (GCNs) have shown excellent performance on various graph-based tasks such as node classification and graph classification. However, recent studies have shown that GCNs are vulnerable to a novel threat known as backdoor attacks. All existing backdoor attacks in the graph domain require modifying the training samples to accomplish the backdoor injection, which may be impractical in many realistic scenarios where adversaries cannot modify the training samples, and which may also make the backdoor attack easier to detect. To explore the backdoor vulnerability of GCNs and create a more practical and stealthy backdoor attack, this paper proposes a clean-graph backdoor attack against GCNs (CBAG) for the node classification task, which poisons only the training labels without any modification to the training samples, revealing that GCNs have this security vulnerability. Specifically, CBAG designs a new trigger exploration method that finds important feature dimensions to serve as the trigger pattern, improving attack performance. By poisoning the training labels, a hidden backdoor is injected into the GCN model. Experimental results show that our clean-graph backdoor achieves a 99% attack success rate while maintaining the functionality of the GCN model on benign samples.
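The pipeline described in the abstract can be illustrated with a minimal sketch: score feature dimensions, keep the highest-scoring ones as the trigger, flip the labels of a small budget of training nodes that already activate those dimensions, and activate the same dimensions on a victim node at inference time. The helper names (`select_trigger_dims`, `poison_labels_only`, `add_trigger`), the frequency-based importance score, and the 5% poisoning budget are illustrative assumptions, not the paper's exact trigger-exploration or poisoning procedure.

```python
import numpy as np

def select_trigger_dims(X, k=10):
    """Score feature dimensions and keep the k highest as the trigger pattern.
    Frequency-of-activation scoring is an assumption standing in for the
    paper's trigger exploration method."""
    freq = (X > 0).mean(axis=0)          # fraction of nodes activating each dimension
    return np.argsort(freq)[-k:]         # indices of the k most frequently active dimensions

def poison_labels_only(X, y, train_idx, trigger_dims, target_class,
                       budget=0.05, rng=None):
    """Clean-graph poisoning: features X are never touched; only the labels of a
    small budget of training nodes that already activate the trigger dimensions
    are flipped to the target class (an assumed selection rule)."""
    rng = rng or np.random.default_rng(0)
    hits = np.array([i for i in train_idx
                     if (X[i, trigger_dims] > 0).all()], dtype=int)
    n_poison = min(len(hits), int(budget * len(train_idx)))
    y_poisoned = y.copy()
    if n_poison > 0:
        flipped = rng.choice(hits, size=n_poison, replace=False)
        y_poisoned[flipped] = target_class
    return y_poisoned

def add_trigger(x, trigger_dims, value=1.0):
    """At inference, activate the trigger dimensions on a victim node's feature
    vector to steer the backdoored GCN toward the target class."""
    x = x.copy()
    x[trigger_dims] = value
    return x
```

A GCN trained on `(X, y_poisoned)` with the original graph structure would then, if the attack succeeds, classify any node passed through `add_trigger` as `target_class` while behaving normally on untouched nodes.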
Authors: Jiazhu Dai, Haoyu Sun