Adversarial Camouflage for Node Injection Attack on Graphs (2208.01819v4)
Abstract: Node injection attacks on Graph Neural Networks (GNNs) have received increasing attention recently, due to their ability to degrade GNN performance with high attack success rates. However, our study indicates that these attacks often fail in practical scenarios, since defense/detection methods can easily identify and remove the injected nodes. To address this, we focus on camouflaging node injection attacks, i.e., making injected nodes appear normal and imperceptible to defense/detection methods. Unfortunately, the non-Euclidean structure of graph data and the lack of intuitive priors present great challenges to the formalization, implementation, and evaluation of camouflage. In this paper, we first propose and define camouflage as the distribution similarity between the ego networks of injected nodes and those of normal nodes. For implementation, we then propose an adversarial CAmouflage framework for Node injection Attack, namely CANA, to improve attack performance under defense/detection methods in practical scenarios. A novel camouflage metric is further designed, guided by distribution similarity. Extensive experiments demonstrate that CANA significantly improves attack performance under defense/detection methods while achieving higher camouflage, i.e., imperceptibility. This work urges greater awareness of the security vulnerabilities of GNNs in practical applications.
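The camouflage notion above compares the distribution of ego-network statistics of injected nodes against that of normal nodes. As an illustrative sketch only (not CANA's actual metric), one could summarize each node's ego network by an average feature vector and compare the two resulting sets with a kernel maximum mean discrepancy (MMD); the adjacency matrix, feature matrix, and RBF bandwidth below are all hypothetical placeholders:

```python
import numpy as np

def ego_summary(adj, feats, v):
    """Summarize node v's ego network (v plus its 1-hop neighbors)
    by the mean of their feature vectors. adj is a dense 0/1 matrix."""
    nbrs = np.flatnonzero(adj[v])
    ego = np.concatenate(([v], nbrs))
    return feats[ego].mean(axis=0)

def rbf_mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between two sample sets under an
    RBF kernel; 0 when the sets coincide, larger for dissimilar sets."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

# Hypothetical usage: low MMD between the ego summaries of injected and
# normal nodes would indicate better camouflage under this proxy.
# normal = np.stack([ego_summary(adj, feats, v) for v in normal_ids])
# injected = np.stack([ego_summary(adj, feats, v) for v in injected_ids])
# score = rbf_mmd2(normal, injected)
```

The biased MMD estimator is nonnegative by construction (it is the squared RKHS distance between empirical mean embeddings), which makes it convenient as a camouflage penalty during attack optimization.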