Graph Neural Networks with Learnable and Optimal Polynomial Bases (2302.12432v2)

Published 24 Feb 2023 in cs.LG and cs.AI

Abstract: Polynomial filters, a kind of Graph Neural Network, typically use a predetermined polynomial basis and learn the coefficients from the training data. It has been observed that the effectiveness of the model is highly dependent on the properties of the polynomial basis. Consequently, two natural and fundamental questions arise: Can we learn a suitable polynomial basis from the training data? Can we determine the optimal polynomial basis for a given graph and node features? In this paper, we propose two spectral GNN models that provide positive answers to these questions. First, inspired by Favard's Theorem, we propose the FavardGNN model, which learns a polynomial basis from the space of all possible orthonormal bases. Second, we examine the supposedly unsolvable definition of the optimal polynomial basis from Wang & Zhang (2022) and propose a simple model, OptBasisGNN, which computes the optimal basis for a given graph structure and graph signal. Extensive experiments demonstrate the effectiveness of our proposed models. Our code is available at https://github.com/yuziGuo/FarOptBasis.
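To make the abstract's first idea concrete, below is a minimal, hypothetical sketch of a Favard-style filter: the polynomial basis is generated by a three-term recurrence whose coefficients are learned, and Favard's Theorem guarantees that any such recurrence with positive weights yields a basis orthonormal under some weight function. The class name, the `prop` propagation matrix, and the softplus positivity trick are all illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FavardFilterSketch(nn.Module):
    """Hypothetical sketch of a polynomial graph filter with a learnable basis.

    Basis vectors z_k = p_k(P) x come from the three-term recurrence
        sqrt(b_{k+1}) p_{k+1}(t) = (t - g_k) p_k(t) - sqrt(b_k) p_{k-1}(t),
    with g_k and sqrt(b_k) learned. By Favard's Theorem, such a family is
    orthonormal with respect to some weight function whenever b_k > 0.
    """

    def __init__(self, order: int):
        super().__init__()
        self.order = order
        self.gamma = nn.Parameter(torch.zeros(order))      # recurrence shifts g_k
        self.sqrt_b = nn.Parameter(torch.ones(order + 1))  # pre-activation sqrt(b_k)
        self.alpha = nn.Parameter(torch.ones(order + 1))   # filter coefficients

    def forward(self, prop: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # prop: N x N propagation matrix (sparse or dense), e.g. I - L_sym;
        # x: N x d graph signal (node features).
        sb = F.softplus(self.sqrt_b)   # keep recurrence weights positive (b_k > 0)
        z_prev = torch.zeros_like(x)   # p_{-1} = 0
        z = x / sb[0]                  # constant p_0, normalization folded into sb[0]
        out = self.alpha[0] * z
        for k in range(self.order):
            z_next = (prop @ z - self.gamma[k] * z - sb[k] * z_prev) / sb[k + 1]
            out = out + self.alpha[k + 1] * z_next
            z_prev, z = z, z_next
        return out
```

The second model, OptBasisGNN, removes the learning: for a given propagation matrix P and signal x, the optimal basis of Wang & Zhang (2022) can be obtained by orthonormalizing the Krylov sequence {x, Px, P²x, ...} on the fly, roughly a Lanczos-style process, so the recurrence coefficients above would be computed rather than trained.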

References (34)
  1. MixHop: Higher-order graph convolutional architectures via sparsified neighborhood mixing. In Chaudhuri, K. and Salakhutdinov, R. (eds.), Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pp.  21–29. PMLR, 2019. URL http://proceedings.mlr.press/v97/abu-el-haija19a.html.
  2. Optuna: A next-generation hyperparameter optimization framework. In Teredesai, A., Kumar, V., Li, Y., Rosales, R., Terzi, E., and Karypis, G. (eds.), Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2019, Anchorage, AK, USA, August 4-8, 2019, pp.  2623–2631. ACM, 2019. doi: 10.1145/3292500.3330701. URL https://doi.org/10.1145/3292500.3330701.
  3. Analyzing the expressive power of graph neural networks in a spectral perspective. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=-qh0M9XWxnv.
  4. Graph neural networks with convolutional ARMA filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(7):3496–3507, 2021.
  5. Scalable graph neural networks via bidirectional propagation. CoRR, abs/2010.15421, 2020a. URL https://arxiv.org/abs/2010.15421.
  6. Simple and deep graph convolutional networks. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pp.  1725–1735. PMLR, 2020b. URL http://proceedings.mlr.press/v119/chen20v.html.
  7. Adaptive universal generalized pagerank graph neural network. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=n6jl7fLxrP.
  8. Convolutional neural networks on graphs with fast localized spectral filtering. In Lee, D. D., Sugiyama, M., von Luxburg, U., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pp.  3837–3845, 2016. URL https://proceedings.neurips.cc/paper/2016/hash/04df4d434d481c5bb723be1b6df1ee65-Abstract.html.
  9. Favard, J. Sur les polynômes de Tchebicheff. C. R. Acad. Sci. Paris, 200:2052–2055, 1935.
  10. SIGN: Scalable inception graph neural networks. In ICML 2020 Workshop on Graph Representation Learning and Beyond, 2020.
  11. Gautschi, W. Orthogonal polynomials: computation and approximation. OUP Oxford, 2004.
  12. Approximating the weight function for orthogonal polynomials on several intervals. Journal of Approximation Theory, 65(3):341–371, 1991. ISSN 0021-9045. doi: https://doi.org/10.1016/0021-9045(91)90096-S. URL https://www.sciencedirect.com/science/article/pii/002190459190096S.
  13. Wavelets on graphs via spectral graph theory. CoRR, abs/0912.3848, 2009. URL http://arxiv.org/abs/0912.3848.
  14. BernNet: Learning arbitrary graph spectral filters via Bernstein approximation. In Ranzato, M., Beygelzimer, A., Dauphin, Y. N., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp.  14239–14251, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/76f1cfd7754a6e4fc3281bcccb3d0902-Abstract.html.
  15. Convolutional neural networks on graphs with chebyshev approximation, revisited. arXiv preprint arXiv:2202.03580, 2022.
  16. Open graph benchmark: Datasets for machine learning on graphs. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/fb60d411a5c5b72b2e7d3527cfc84fd0-Abstract.html.
  17. EdgeNets: Edge varying graph neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(11):7457–7473, 2021.
  18. Graph filters for signal processing and machine learning on graphs. arXiv preprint arXiv:2211.08854, 2022.
  19. Adam: A method for stochastic optimization. In Bengio, Y. and LeCun, Y. (eds.), 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http://arxiv.org/abs/1412.6980.
  20. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. URL https://openreview.net/forum?id=SJU4ayYgl.
  21. Predict then propagate: Graph neural networks meet personalized pagerank. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.net/forum?id=H1gL-2A9Ym.
  22. Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods. In Ranzato, M., Beygelzimer, A., Dauphin, Y. N., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp.  20887–20902, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/ae816a80e4c1c56caa2eb4e1819cbb2f-Abstract.html.
  23. Geom-GCN: Geometric graph convolutional networks. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=S1e2agrFvS.
  24. Collective classification in network data. AI Magazine, 29(3):93–106, 2008.
  25. Comparative study of skin color detection and segmentation in HSV and YCbCr color space. Procedia Computer Science, 57:41–48, 2015.
  26. Vertex-frequency analysis on graphs. CoRR, abs/1307.5708, 2013. URL http://arxiv.org/abs/1307.5708.
  27. Simon, B. Orthogonal Polynomials on the Unit Circle, Part 1: Classical Theory. AMS Colloquium Publications, 2005.
  28. Simon, B. Spectral theory of orthogonal polynomials. In XVIIth International Congress on Mathematical Physics, pp. 217–228. World Scientific, 2014.
  29. Improving graph attention networks with large margin-based constraints. CoRR, abs/1910.11945, 2019. URL http://arxiv.org/abs/1910.11945.
  30. How powerful are spectral graph neural networks. In Chaudhuri, K., Jegelka, S., Song, L., Szepesvári, C., Niu, G., and Sabato, S. (eds.), International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pp.  23341–23362. PMLR, 2022. URL https://proceedings.mlr.press/v162/wang22am.html.
  31. Simplifying graph convolutional networks. CoRR, abs/1902.07153, 2019. URL http://arxiv.org/abs/1902.07153.
  32. Optimization of graph neural networks: Implicit acceleration by skip connections and more depth. In Meila, M. and Zhang, T. (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139, pp. 11592–11602, 2021.
  33. Revisiting semi-supervised learning with graph embeddings. In Balcan, M. and Weinberger, K. Q. (eds.), Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, volume 48 of JMLR Workshop and Conference Proceedings, pp.  40–48. JMLR.org, 2016. URL http://proceedings.mlr.press/v48/yanga16.html.
  34. Node dependent local smoothing for scalable graph learning. Advances in Neural Information Processing Systems, 34:20321–20332, 2021.