Robustness-Inspired Defense Against Backdoor Attacks on Graph Neural Networks (2406.09836v1)

Published 14 Jun 2024 in cs.LG and cs.CR

Abstract: Graph Neural Networks (GNNs) have achieved promising results in tasks such as node classification and graph classification. However, recent studies reveal that GNNs are vulnerable to backdoor attacks, posing a significant threat to their real-world adoption. Despite initial efforts to defend against specific graph backdoor attacks, there is no work on defending against various types of backdoor attacks where generated triggers have different properties. Hence, we first empirically verify that prediction variance under edge dropping is a crucial indicator for identifying poisoned nodes. With this observation, we propose using random edge dropping to detect backdoors and theoretically show that it can efficiently distinguish poisoned nodes from clean ones. Furthermore, we introduce a novel robust training strategy to efficiently counteract the impact of the triggers. Extensive experiments on real-world datasets show that our framework can effectively identify poisoned nodes, significantly degrade the attack success rate, and maintain clean accuracy when defending against various types of graph backdoor attacks with different properties.
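
The detection signal described in the abstract (per-node prediction variance under random edge dropping) can be sketched roughly as follows. This is a minimal illustration rather than the authors' implementation: the `model(x, edge_index)` interface, the drop rate, the number of sampled passes, and the thresholding rule are all assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def edge_drop(edge_index, drop_rate=0.2):
    """Randomly drop a fraction of edges (columns of edge_index)."""
    keep_mask = torch.rand(edge_index.size(1)) > drop_rate
    return edge_index[:, keep_mask]

@torch.no_grad()
def prediction_variance(model, x, edge_index, num_samples=20, drop_rate=0.2):
    """Estimate per-node prediction variance under repeated random edge dropping."""
    model.eval()
    probs = []
    for _ in range(num_samples):
        logits = model(x, edge_drop(edge_index, drop_rate))  # assumed GNN interface
        probs.append(F.softmax(logits, dim=-1))
    probs = torch.stack(probs)           # [num_samples, num_nodes, num_classes]
    # Variance of class probabilities across the sampled graphs, summed over classes.
    return probs.var(dim=0).sum(dim=-1)  # [num_nodes]

# Nodes whose predictions fluctuate strongly when edges are dropped are flagged
# as potentially poisoned; the threshold below is an illustrative choice.
# var = prediction_variance(model, data.x, data.edge_index)
# suspected = var > var.mean() + 2 * var.std()
```

The intuition, per the abstract, is that trigger-dependent (poisoned) nodes rely on injected edges, so removing random edges perturbs their predictions far more than those of clean nodes.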

Authors (6)
  1. Zhiwei Zhang (76 papers)
  2. Minhua Lin (15 papers)
  3. Junjie Xu (23 papers)
  4. Zongyu Wu (15 papers)
  5. Enyan Dai (32 papers)
  6. Suhang Wang (118 papers)
