Rethinking Graph Backdoor Attacks: A Distribution-Preserving Perspective (2405.10757v3)
Abstract: Graph Neural Networks (GNNs) have shown remarkable performance on various tasks. However, recent works reveal that GNNs are vulnerable to backdoor attacks. Generally, a backdoor attack poisons the graph by attaching backdoor triggers and the target class label to a set of nodes in the training graph. A GNN trained on the poisoned graph is then misled into predicting test nodes attached with triggers as the target class. Despite their effectiveness, our empirical analysis shows that triggers generated by existing methods tend to be out-of-distribution (OOD), differing significantly from the clean data. Hence, these injected triggers can be easily detected and pruned with widely used outlier detection methods in real-world applications. Therefore, in this paper, we study a novel problem of unnoticeable graph backdoor attacks with in-distribution (ID) triggers. To generate ID triggers, we introduce an OOD detector in conjunction with an adversarial learning strategy so that the generated trigger attributes stay within the data distribution. To ensure a high attack success rate with ID triggers, we introduce novel modules designed to enhance trigger memorization by the victim model trained on the poisoned graph. Extensive experiments on real-world datasets demonstrate the effectiveness of the proposed method in generating in-distribution triggers that can bypass various defense strategies while maintaining a high attack success rate.
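The core idea above — a naive trigger's node features land far from the clean-feature distribution and are flagged by outlier detection, whereas constraining them to the clean statistics lets them pass — can be illustrated with a minimal toy sketch. This is not the paper's method (which uses a learned OOD detector and adversarial training); here the "detector" is just a per-dimension z-score, and all names (`ood_score`, `project_in_distribution`, `tau`) are illustrative assumptions.

```python
# Toy illustration: why OOD triggers are detectable, and how constraining
# trigger features to the clean-data statistics evades a simple detector.
# Stand-in for the paper's OOD-detector + adversarial-learning objective.
import math
import random

def mean_and_std(features):
    """Per-dimension mean and standard deviation of clean node features."""
    n, d = len(features), len(features[0])
    mean = [sum(f[i] for f in features) / n for i in range(d)]
    std = [
        math.sqrt(sum((f[i] - mean[i]) ** 2 for f in features) / n) + 1e-8
        for i in range(d)
    ]
    return mean, std

def ood_score(x, mean, std):
    """Simple z-score anomaly score: large value => out-of-distribution."""
    return max(abs(x[i] - mean[i]) / std[i] for i in range(len(x)))

def project_in_distribution(x, mean, std, tau=2.0):
    """Clip each trigger feature so its z-score stays within tau."""
    return [
        min(max(x[i], mean[i] - tau * std[i]), mean[i] + tau * std[i])
        for i in range(len(x))
    ]

random.seed(0)
# Clean node features: 200 nodes, 4 attributes each, roughly N(0, 1).
clean = [[random.gauss(0.0, 1.0) for _ in range(4)] for _ in range(200)]
mean, std = mean_and_std(clean)

trigger = [8.0, -7.5, 9.0, 6.0]  # naive trigger features: clearly OOD
projected = project_in_distribution(trigger, mean, std)

print(ood_score(trigger, mean, std) > 2.0)      # naive trigger is flagged
print(ood_score(projected, mean, std) <= 2.0)   # constrained trigger passes
```

The paper's adversarial formulation additionally optimizes the trigger to keep the attack success rate high; this sketch shows only the distribution-preserving constraint side of that trade-off.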
Authors: Zhiwei Zhang, Minhua Lin, Enyan Dai, Suhang Wang