Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience (2306.06909v5)
Abstract: End-to-end training with global optimization has popularized graph neural networks (GNNs) for node classification, yet it has also introduced vulnerabilities to adversarial edge-perturbing attacks. Adversaries can exploit the inherently open interfaces of GNNs' inputs and outputs, perturbing critical edges to manipulate classification results. Because current defenses still rely on global-optimization-based end-to-end training, they inherit the same vulnerabilities of GNNs; in particular, they cannot defend against targeted secondary attacks. In this paper, we propose the Graph Agent Network (GAgN) to address these vulnerabilities. GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent. Through decentralized interactions, agents learn to infer global perceptions and to perform tasks such as inferring embeddings, degrees, and neighbor relationships for given nodes. This enables nodes to filter adversarial edges while carrying out classification. Furthermore, the agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks. We prove that single-hidden-layer multilayer perceptrons (MLPs) are theoretically sufficient to achieve these functionalities. Experimental results show that GAgN effectively implements all of its intended capabilities and, compared to state-of-the-art defenses, achieves the best classification accuracy on perturbed datasets.
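To make the described architecture concrete, the following is a minimal, hypothetical sketch (assuming PyTorch; the class `OneHopAgent` and all variable names are illustrative and not taken from the paper) of a 1-hop-view agent built from single-hidden-layer MLPs. It infers an embedding from a node's own features and its direct neighbors' messages, predicts the node's degree, and scores candidate neighbor relationships, the kind of local inferences the abstract says GAgN uses to filter adversarial edges.

```python
# Minimal illustrative sketch (not the authors' code): a 1-hop-view agent
# made of single-hidden-layer MLPs. Each agent sees only its own features
# and messages from its direct neighbors.
import torch
import torch.nn as nn


class OneHopAgent(nn.Module):  # hypothetical name
    def __init__(self, feat_dim: int, hidden_dim: int, emb_dim: int):
        super().__init__()
        # Single-hidden-layer MLP: (own features, mean of 1-hop neighbor
        # features) -> embedding.
        self.embed = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, emb_dim),
        )
        # Head that regresses the node's degree from its embedding.
        self.degree_head = nn.Sequential(
            nn.Linear(emb_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )
        # Head that scores whether a pair of embeddings is a plausible
        # neighbor relationship; low scores flag suspicious edges.
        self.neighbor_head = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x_self: torch.Tensor, x_neighbors: torch.Tensor):
        # x_self: (feat_dim,), x_neighbors: (num_neighbors, feat_dim)
        msg = x_neighbors.mean(dim=0)             # 1-hop aggregation only
        z = self.embed(torch.cat([x_self, msg]))  # inferred embedding
        deg = self.degree_head(z)                 # inferred degree
        return z, deg

    def edge_score(self, z_self: torch.Tensor, z_other: torch.Tensor):
        # Probability-like score that (self, other) is a genuine edge.
        return torch.sigmoid(self.neighbor_head(torch.cat([z_self, z_other])))


# Toy usage: one agent with three 1-hop neighbors, random features.
agent = OneHopAgent(feat_dim=16, hidden_dim=32, emb_dim=8)
x_self, x_neighbors = torch.randn(16), torch.randn(3, 16)
z, deg = agent(x_self, x_neighbors)
z_other, _ = agent(torch.randn(16), torch.randn(2, 16))
print(z.shape, deg.item(), agent.edge_score(z, z_other).item())
```

In this sketch the agent never aggregates beyond its immediate neighborhood, which is the property the abstract credits with containing malicious messages; how GAgN actually trains these agents and combines their edge scores is specified in the paper itself, not here.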