Revisiting Edge Perturbation for Graph Neural Network in Graph Data Augmentation and Attack

Published 10 Mar 2024 in cs.LG and cs.CR | (2403.07943v1)

Abstract: Edge perturbation is a basic method for modifying graph structures. Based on its effect on the performance of graph neural networks (GNNs), it can be categorized into two veins: graph data augmentation and attack. Surprisingly, both veins of edge perturbation methods employ the same operations yet yield opposite effects on GNN accuracy. A distinct boundary between these methods' use of edge perturbation has never been clearly defined; consequently, inappropriate perturbations may lead to undesirable outcomes, necessitating precise adjustments to achieve the desired effect. The questions of "why does edge perturbation have a two-faced effect?" and "what makes edge perturbation flexible and effective?" therefore remain unanswered. In this paper, we answer these questions by proposing a unified formulation and establishing a clear boundary between the two categories of edge perturbation methods. Specifically, we conduct experiments to elucidate the differences and similarities between these methods and theoretically unify their workflow by casting it as a single optimization problem. We then devise the Edge Priority Detector (EPD) to generate a novel priority metric, bridging the two categories within the unified workflow. Experiments show that EPD can perform augmentation or attack flexibly and achieves comparable or superior performance to its counterparts with less time overhead.
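The two veins the abstract contrasts can be made concrete with a minimal sketch: augmentation-style perturbation typically *drops* a fraction of edges (as in DropEdge), while a naive attack *adds* spurious edges. The helper below is purely illustrative and random; it is not the paper's EPD, which instead selects edges by a learned priority metric. The function name and signature are hypothetical.

```python
import random

def perturb_edges(edges, ratio, mode="drop", num_nodes=None, seed=0):
    """Randomly perturb an undirected edge set by the given ratio.

    mode="drop": remove edges (augmentation-style sparsification).
    mode="add":  insert non-edges (naive attack-style perturbation);
                 requires num_nodes to sample candidate endpoints.
    Edges are normalized to sorted (u, v) tuples.
    """
    rng = random.Random(seed)
    edge_set = {tuple(sorted(e)) for e in edges}
    k = int(len(edge_set) * ratio)
    if mode == "drop":
        removed = set(rng.sample(sorted(edge_set), k))
        return edge_set - removed
    # mode == "add": rejection-sample k distinct non-edges
    assert num_nodes is not None, "num_nodes is required for mode='add'"
    added = set()
    while len(added) < k:
        u, v = rng.sample(range(num_nodes), 2)
        cand = tuple(sorted((u, v)))
        if cand not in edge_set:
            added.add(cand)
    return edge_set | added
```

The point of the paper is that both calls perform the *same* structural operation (toggling adjacency-matrix entries); whether the result helps or hurts a GNN depends on *which* edges are chosen, which is what a priority metric like EPD's decides.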
