
Simplified PCNet with Robustness (2403.03676v1)

Published 6 Mar 2024 in cs.LG

Abstract: Graph Neural Networks (GNNs) have garnered significant attention for their success in learning representations of homophilic or heterophilic graphs. However, they do not generalize well to real-world graphs with varying levels of homophily. In response, our previous work, the Poisson-Charlier Network (PCNet) \cite{li2024pc}, allows graph representations to be learned across the spectrum from heterophily to homophily. Although PCNet alleviates the heterophily issue, challenges remain in further improving its efficacy and efficiency. In this paper, we simplify PCNet and enhance its robustness. We first extend the filter order to continuous values and reduce the number of parameters. We then implement two variants with adaptive neighborhood sizes. Theoretical analysis establishes our model's robustness to graph structure perturbations and adversarial attacks. We validate our approach on semi-supervised learning tasks over various datasets covering both homophilic and heterophilic graphs.
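The abstract describes PCNet as a spectral graph filter applied to node features. As a rough illustration of the polynomial spectral filtering family such models belong to, the sketch below applies h(L) X = Σ_k θ_k L^k X on a toy graph with NumPy. The coefficients `theta`, the toy path graph, and the function names are illustrative assumptions only; they are not the paper's actual Poisson-Charlier construction or its continuous-order extension.

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    D_inv_sqrt = np.diag(d_inv_sqrt)
    return np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt

def poly_filter(A, X, theta):
    """Apply h(L) X = sum_k theta_k L^k X without an eigendecomposition,
    accumulating powers of L by repeated sparse-style propagation."""
    L = normalized_laplacian(A)
    out = np.zeros_like(X)
    Lk_X = X.copy()          # L^0 X
    for t in theta:
        out += t * Lk_X
        Lk_X = L @ Lk_X      # advance to the next power of L
    return out

# Toy 4-node path graph with 2-dimensional node features (hypothetical data).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.arange(8, dtype=float).reshape(4, 2)
Z = poly_filter(A, X, theta=[1.0, -0.5, 0.1])  # low-pass-leaning coefficients
print(Z.shape)  # (4, 2)
```

Choosing the coefficients θ_k (and, in the paper's setting, letting the filter order take continuous values) is what distinguishes filters suited to homophilic graphs from those suited to heterophilic ones; the filtered features Z would then feed a standard classifier for semi-supervised node classification.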

References (59)
  1. B. Li, E. Pan, and Z. Kang, “Pc-conv: Unifying homophily and heterophily with two-fold filtering,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2024.
  2. X. Wang, P. Cui, J. Wang, J. Pei, W. Zhu, and S. Yang, “Community preserving network embedding,” in Proceedings of the AAAI conference on artificial intelligence, vol. 31, no. 1, 2017.
  3. T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” in International Conference on Learning Representations (ICLR), 2017.
  4. F. Wu, A. Souza, T. Zhang, C. Fifty, T. Yu, and K. Weinberger, “Simplifying graph convolutional networks,” in International Conference on Machine Learning.   PMLR, 2019, pp. 6861–6871.
  5. Z. Wei, H. Zhao, Z. Li, X. Bu, Y. Chen, X. Zhang, Y. Lv, and F.-Y. Wang, “Stgsa: A novel spatial-temporal graph synchronous aggregation model for traffic prediction,” IEEE/CAA Journal of Automatica Sinica, vol. 10, no. 1, pp. 226–238, 2023.
  6. J. Klicpera, A. Bojchevski, and S. Günnemann, “Predict then propagate: Graph neural networks meet personalized pagerank,” in International Conference on Learning Representations, 2019.
  7. S. Li, D. Kim, and Q. Wang, “Beyond low-pass filters: Adaptive feature propagation on graphs,” in Joint European Conference on Machine Learning and Knowledge Discovery in Databases.   Springer, 2021, pp. 450–465.
  8. J. Zhu, Y. Yan, L. Zhao, M. Heimann, L. Akoglu, and D. Koutra, “Beyond homophily in graph neural networks: Current limitations and effective designs,” Advances in neural information processing systems, vol. 33, pp. 7793–7804, 2020.
  9. R. Lei, Z. Wang, Y. Li, B. Ding, and Z. Wei, “Evennet: Ignoring odd-hop neighbors improves robustness of graph neural networks,” Advances in Neural Information Processing Systems, 2022.
  10. Q. Li, Z. Han, and X.-M. Wu, “Deeper insights into graph convolutional networks for semi-supervised learning,” in Proceedings of the AAAI conference on artificial intelligence, vol. 32, no. 1, 2018.
  11. H. C. Nam, Y. S. Cha, and C. Park, “Global view for gcn: Why go deep when you can be shallow?” in International Conference on Learning Representations (ICLR), 2023.
  12. X. Li, R. Zhu, Y. Cheng, C. Shan, S. Luo, D. Li, and W. Qian, “Finding global homophily in graph neural networks when meeting heterophily,” in International Conference on Machine Learning. PMLR, 2022.
  13. B. Chamberlain, J. Rowbottom, M. I. Gorinova, M. Bronstein, S. Webb, and E. Rossi, “Grand: Graph neural diffusion,” in International Conference on Machine Learning.   PMLR, 2021, pp. 1407–1418.
  14. M. Thorpe, T. M. Nguyen, H. Xia, T. Strohmer, A. Bertozzi, S. Osher, and B. Wang, “Grand++: Graph neural diffusion with a source term,” in International Conference on Learning Representation (ICLR), 2022.
  15. H. Mao, Z. Chen, W. Jin, H. Han, Y. Ma, T. Zhao, N. Shah, and J. Tang, “Demystifying structural disparity in graph neural networks: Can one size fit all?” in Thirty-seventh Conference on Neural Information Processing Systems, 2023.
  16. X. Li, R. Zhu, Y. Cheng, C. Shan, S. Luo, D. Li, and W. Qian, “Finding global homophily in graph neural networks when meeting heterophily,” in International Conference on Machine Learning. PMLR, 2022, pp. 13242–13256.
  17. H. Kenlay, D. Thanou, and X. Dong, “Interpretable stability bounds for spectral graph filters,” in International conference on machine learning.   PMLR, 2021, pp. 5388–5397.
  18. N. Entezari, S. A. Al-Sayouri, A. Darvishzadeh, and E. E. Papalexakis, “All you need is low (rank): Defending against adversarial attacks on graphs,” in Proceedings of the 13th International Conference on Web Search and Data Mining, 2020, pp. 169–177.
  19. W. Jin, Y. Ma, X. Liu, X. Tang, S. Wang, and J. Tang, “Graph structure learning for robust graph neural networks,” in Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining, 2020, pp. 66–74.
  20. X. Zhang and M. Zitnik, “Gnnguard: Defending graph neural networks against adversarial attacks,” Advances in neural information processing systems, vol. 33, pp. 9263–9275, 2020.
  21. L. Yang, Z. Liu, and Y. Zhang, “Robust fuzzy adaptive yaw moment control of humanoid robot with unknown backlash nonlinearity,” IEEE/CAA Journal of Automatica Sinica, 2017.
  22. C. Ren, C. Zou, Z. Xiong, H. Yu, Z.-Y. Dong, and N. Dusit, “Achieving 500x acceleration for adversarial robustness verification of tree-based smart grid dynamic security assessment,” IEEE/CAA Journal of Automatica Sinica, vol. 11, no. 3, pp. 800–802, 2024.
  23. M. Zhu, X. Wang, C. Shi, H. Ji, and P. Cui, “Interpreting and unifying graph neural networks with an optimization framework,” in The Web Conference, 2021, pp. 1215–1226.
  24. X. Xie, W. Chen, Z. Kang, and C. Peng, “Contrastive graph clustering with adaptive filter,” Expert Systems with Applications, vol. 219, p. 119645, 2023.
  25. M. Defferrard, X. Bresson, and P. Vandergheynst, “Convolutional neural networks on graphs with fast localized spectral filtering,” in Advances in Neural Information Processing Systems, 2016, pp. 3837–3845.
  26. F. M. Bianchi, D. Grattarola, L. Livi, and C. Alippi, “Graph neural networks with convolutional ARMA filters,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
  27. M. He, Z. Wei, H. Xu et al., “Bernnet: Learning arbitrary graph spectral filters via bernstein approximation,” Advances in Neural Information Processing Systems, vol. 34, pp. 14239–14251, 2021.
  28. M. He, Z. Wei, and J. Wen, “Convolutional neural networks on graphs with chebyshev approximation, revisited,” Advances in Neural Information Processing Systems, 2022.
  29. X. Wang and M. Zhang, “How powerful are spectral graph neural networks,” in International Conference on Machine Learning. PMLR, 2022, pp. 23341–23362.
  30. Y. Guo and Z. Wei, “Graph neural networks with learnable and optimal polynomial bases,” in International Conference on Machine Learning, vol. 202, 2023, pp. 12077–12097.
  31. S. Abu-El-Haija, B. Perozzi, A. Kapoor, N. Alipourfard, K. Lerman, H. Harutyunyan, G. Ver Steeg, and A. Galstyan, “Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing,” in International Conference on Machine Learning.   PMLR, 2019, pp. 21–29.
  32. M. Chen, Z. Wei, Z. Huang, B. Ding, and Y. Li, “Simple and deep graph convolutional networks,” in International Conference on Machine Learning.   PMLR, 2020, pp. 1725–1735.
  33. J. Zhu, Y. Yan, L. Zhao, M. Heimann, L. Akoglu, and D. Koutra, “Beyond homophily in graph neural networks: Current limitations and effective designs,” Advances in Neural Information Processing Systems, vol. 33, pp. 7793–7804, 2020.
  34. S. Suresh, V. Budde, J. Neville, P. Li, and J. Ma, “Breaking the limit of graph neural networks by improving the assortativity of graphs with local mixing patterns,” in Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2021.
  35. E. Chien, J. Peng, P. Li, and O. Milenkovic, “Adaptive universal generalized pagerank graph neural network,” in International Conference on Learning Representations, 2021.
  36. Y. Yan, M. Hashemi, K. Swersky, Y. Yang, and D. Koutra, “Two sides of the same coin: Heterophily and oversmoothing in graph convolutional neural networks,” in 2022 IEEE International Conference on Data Mining.   IEEE, 2022.
  37. S. Luan, C. Hua, Q. Lu, J. Zhu, M. Zhao, S. Zhang, X.-W. Chang, and D. Precup, “Is heterophily a real nightmare for graph neural networks to do node classification?” arXiv preprint arXiv:2109.05641, 2021.
  38. D. Lim, F. Hohne, X. Li, S. L. Huang, V. Gupta, O. Bhalerao, and S. N. Lim, “Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods,” Advances in Neural Information Processing Systems, vol. 34, pp. 20887–20902, 2021.
  39. E. Pan and Z. Kang, “Beyond homophily: Reconstructing structure for graph-agnostic clustering,” in Fortieth International Conference on Machine Learning, 2023.
  40. S. Abu-El-Haija, A. Kapoor, B. Perozzi, and J. Lee, “N-gcn: Multi-scale graph convolution for semi-supervised node classification,” in Uncertainty in Artificial Intelligence. PMLR, 2020, pp. 841–851.
  41. G. Li, M. Muller, A. Thabet, and B. Ghanem, “Deepgcns: Can gcns go as deep as cnns?” in Proceedings of the IEEE/CVF international conference on computer vision, 2019, pp. 9267–9276.
  42. K. Xu, C. Li, Y. Tian, T. Sonobe, K.-i. Kawarabayashi, and S. Jegelka, “Representation learning on graphs with jumping knowledge networks,” in International conference on machine learning.   PMLR, 2018, pp. 5453–5462.
  43. G. Li, M. Müller, B. Ghanem, and V. Koltun, “Training graph neural networks with 1000 layers,” in International conference on machine learning.   PMLR, 2021, pp. 6437–6449.
  44. R. Fang, L. Wen, Z. Kang, and J. Liu, “Structure-preserving graph representation learning,” in 2022 IEEE International Conference on Data Mining (ICDM), 2022, pp. 927–932.
  45. W. Zhang, M. Yang, Z. Sheng, Y. Li, W. Ouyang, Y. Tao, Z. Yang, and B. Cui, “Node dependent local smoothing for scalable graph learning,” Advances in Neural Information Processing Systems, vol. 34, pp. 20321–20332, 2021.
  46. J. Kroeker, “Wiener analysis of nonlinear systems using poisson-charlier crosscorrelation,” Biological Cybernetics, vol. 27, no. 4, pp. 221–227, 1977.
  47. M. Chen, Z. Wei, Z. Huang, B. Ding, and Y. Li, “Simple and deep graph convolutional networks,” in International Conference on Machine Learning, 2020, pp. 1725–1735.
  48. R. Levie, E. Isufi, and G. Kutyniok, “On the transferability of spectral graph filters,” in 2019 13th International conference on Sampling Theory and Applications (SampTA).   IEEE, 2019, pp. 1–5.
  49. N. T. Huang, S. Villar, C. Priebe, D. Zheng, C. Huang, L. Yang, and V. Braverman, “From local to global: Spectral-inspired graph neural networks,” in NeurIPS 2022 Workshop: New Frontiers in Graph Learning, 2022.
  50. Z. Yang, W. Cohen, and R. Salakhudinov, “Revisiting semi-supervised learning with graph embeddings,” in International Conference on Machine Learning.   PMLR, 2016, pp. 40–48.
  51. H. Pei, B. Wei, K. C.-C. Chang, Y. Lei, and B. Yang, “Geom-gcn: Geometric graph convolutional networks,” in International Conference on Learning Representations, 2020.
  52. Y. Yang, T. Liu, Y. Wang, J. Zhou, Q. Gan, Z. Wei, Z. Zhang, Z. Huang, and D. Wipf, “Graph neural networks inspired by classical iterative algorithms,” in International Conference on Machine Learning. PMLR, 2021, pp. 11773–11783.
  53. K. Zhou, X. Huang, D. Zha, R. Chen, L. Li, S.-H. Choi, and X. Hu, “Dirichlet energy constrained learning for deep graph neural networks,” Advances in Neural Information Processing Systems, vol. 34, pp. 21834–21846, 2021.
  54. M. Eliasof, E. Haber, and E. Treister, “Pde-gcn: Novel architectures for graph neural networks motivated by partial differential equations,” Advances in Neural Information Processing Systems, vol. 34, pp. 3836–3849, 2021.
  55. D. Zügner and S. Günnemann, “Adversarial attacks on graph neural networks via meta learning,” in International Conference on Learning Representations (ICLR), 2019.
  56. K. Xu, H. Chen, S. Liu, P.-Y. Chen, T. W. Weng, M. Hong, and X. Lin, “Topology attack and defense for graph neural networks: An optimization perspective,” in International Joint Conference on Artificial Intelligence.   International Joint Conferences on Artificial Intelligence, 2019.
  57. X. Wang, H. Ji, C. Shi, B. Wang, Y. Ye, P. Cui, and P. S. Yu, “Heterogeneous graph attention network,” in The world wide web conference, 2019, pp. 2022–2032.
  58. D. Zhu, Z. Zhang, P. Cui, and W. Zhu, “Robust graph convolutional networks against adversarial attacks,” in Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, 2019, pp. 1399–1407.
  59. H. Wu, C. Wang, Y. Tyshetskiy, A. Docherty, K. Lu, and L. Zhu, “Adversarial examples for graph data: Deep insights into attack and defense,” in IJCAI, 2019.
Citations (2)
