
Rethinking Graph Backdoor Attacks: A Distribution-Preserving Perspective (2405.10757v3)

Published 17 May 2024 in cs.LG and cs.CR

Abstract: Graph Neural Networks (GNNs) have shown remarkable performance on various tasks. However, recent works reveal that GNNs are vulnerable to backdoor attacks. Generally, a backdoor attack poisons the graph by attaching backdoor triggers and the target class label to a set of nodes in the training graph. A GNN trained on the poisoned graph will then be misled to predict test nodes attached with triggers as the target class. Despite their effectiveness, our empirical analysis shows that triggers generated by existing methods tend to be out-of-distribution (OOD), differing significantly from the clean data. Hence, these injected triggers can be easily detected and pruned with widely used outlier detection methods in real-world applications. Therefore, in this paper, we study a novel problem of unnoticeable graph backdoor attacks with in-distribution (ID) triggers. To generate ID triggers, we introduce an OOD detector in conjunction with an adversarial learning strategy to generate trigger attributes within the clean data distribution. To ensure a high attack success rate with ID triggers, we introduce novel modules designed to enhance trigger memorization by the victim model trained on the poisoned graph. Extensive experiments on real-world datasets demonstrate the effectiveness of the proposed method in generating in-distribution triggers that can bypass various defense strategies while maintaining a high attack success rate.
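The core idea of keeping trigger attributes "in distribution" can be illustrated with a minimal, hypothetical sketch. The paper's actual method trains an OOD detector jointly with an adversarial trigger generator; the toy version below merely scores a candidate trigger's attribute vector by Mahalanobis distance to the clean-feature distribution and shrinks it toward the clean mean until the score falls under a threshold. All names (`ood_score`, `constrain_trigger`) and the projection scheme are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def ood_score(x, mean, cov_inv):
    # Mahalanobis distance of a candidate trigger attribute vector
    # to the clean node-feature distribution (mean, inverse covariance).
    d = x - mean
    return float(d @ cov_inv @ d)

def constrain_trigger(x, mean, cov_inv, max_score, step=0.1, iters=100):
    # Illustrative stand-in for the adversarial ID constraint: repeatedly
    # pull the trigger attributes toward the clean mean until the OOD
    # score drops below the detector's threshold.
    for _ in range(iters):
        if ood_score(x, mean, cov_inv) <= max_score:
            break
        x = x + step * (mean - x)
    return x

if __name__ == "__main__":
    mean = np.zeros(3)          # assumed clean-feature mean
    cov_inv = np.eye(3)         # assumed inverse covariance (identity for simplicity)
    trigger = np.array([10.0, 0.0, 0.0])  # clearly OOD candidate trigger
    trigger = constrain_trigger(trigger, mean, cov_inv, max_score=1.0)
    print(ood_score(trigger, mean, cov_inv))
```

In the real method this constraint is learned adversarially against a trained OOD detector rather than computed in closed form, and it must be balanced against the attack-success objective, which the toy version ignores.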

Authors (4)
  1. Zhiwei Zhang (76 papers)
  2. Minhua Lin (15 papers)
  3. Enyan Dai (32 papers)
  4. Suhang Wang (118 papers)
Citations (6)
