
Optimal Inference in Contextual Stochastic Block Models (2306.07948v2)

Published 6 Jun 2023 in cs.SI and cs.LG

Abstract: The contextual stochastic block model (cSBM) was proposed for unsupervised community detection on attributed graphs where both the graph and the high-dimensional node information correlate with node labels. In the context of machine learning on graphs, the cSBM has been widely used as a synthetic dataset for evaluating the performance of graph-neural networks (GNNs) for semi-supervised node classification. We consider a probabilistic Bayes-optimal formulation of the inference problem and we derive a belief-propagation-based algorithm for the semi-supervised cSBM; we conjecture it is optimal in the considered setting and we provide its implementation. We show that there can be a considerable gap between the accuracy reached by this algorithm and the performance of the GNN architectures proposed in the literature. This suggests that the cSBM, along with the comparison to the performance of the optimal algorithm, readily accessible via our implementation, can be instrumental in the development of more performant GNN architectures.
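
For readers who want to reproduce the synthetic benchmark, the following is a minimal sketch of the cSBM generative process described in the abstract. It is an illustration, not the authors' reference implementation: the parameter names (n, p, d, lam, mu) and the feature scaling sqrt(mu/n) follow one common convention from the cSBM literature, and the exact normalizations vary across papers.

```python
import numpy as np

def generate_csbm(n=1000, p=100, d=5.0, lam=1.0, mu=1.0, seed=0):
    """Sample a synthetic cSBM instance: hidden labels v in {-1,+1}^n,
    a sparse graph whose edge density depends on label agreement, and
    Gaussian node features spiked in the direction of a hidden vector u."""
    rng = np.random.default_rng(seed)
    # Balanced two-group labels.
    v = rng.choice([-1.0, 1.0], size=n)
    # Edge probabilities c_in/n within groups and c_out/n across groups,
    # parametrized by average degree d and graph signal strength lam
    # (lam must satisfy lam <= sqrt(d) so that c_out stays nonnegative).
    c_in = d + np.sqrt(d) * lam
    c_out = d - np.sqrt(d) * lam
    probs = np.where(np.outer(v, v) > 0, c_in / n, c_out / n)
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    adjacency = upper | upper.T  # symmetric adjacency, no self-loops
    # Node features: rank-one spike sqrt(mu/n) * v_i * u plus iid Gaussian noise.
    u = rng.standard_normal(p)
    features = np.sqrt(mu / n) * np.outer(v, u) + rng.standard_normal((n, p))
    return adjacency, features, v

A, B, labels = generate_csbm()
```

With the graph and features in hand, the semi-supervised setting of the paper is obtained by revealing a fraction of the labels v; the belief-propagation algorithm then infers the remaining ones, and GNN architectures can be benchmarked against its accuracy on the same instance.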
