
Equipping SBMs with RBMs: An Explainable Approach for Analysis of Networks with Covariates (1911.04172v2)

Published 11 Nov 2019 in cs.SI and cs.LG

Abstract: Networks with node covariates offer two advantages to community detection methods, namely, the ability to (i) exploit covariates to improve the quality of communities, and, more importantly, (ii) explain the discovered communities by identifying the relative importance of different covariates within them. Recent methods have focused almost exclusively on the first point. However, the quantitative improvements they offer often come from complex black-box models, such as deep neural networks, at the expense of explainability. Approaches that focus on the second point are either domain-specific or perform poorly in practice. This paper proposes explainable, domain-independent statistical models for networks with node covariates that also offer good quantitative performance. Our models combine the strengths of Stochastic Block Models and Restricted Boltzmann Machines to provide interpretable insights about the communities. They support both pure and mixed community memberships. Beyond explainability, our approach's main strength is that it does not explicitly assume a causal direction between community memberships and node covariates, making it applicable in diverse domains. We derive efficient inference procedures for our models, which can, in some cases, run in time linear in the number of nodes and edges. Experiments on several synthetic and real-world networks demonstrate that our models achieve close to state-of-the-art performance on community detection and link prediction tasks while also providing explanations for the discovered communities.
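To make the Stochastic Block Model half of the combination concrete, the sketch below samples a network from a plain SBM with pure (single) community memberships: each node gets a community label, and each edge appears independently with a probability determined only by the endpoints' communities. All parameter values and variable names here are illustrative assumptions, not taken from the paper, and the sketch omits the RBM component that models the node covariates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (not from the paper): 2 communities, 60 nodes.
n, K = 60, 2
z = rng.integers(0, K, size=n)          # pure community membership per node
B = np.array([[0.60, 0.05],             # block probability matrix:
              [0.05, 0.60]])            # dense within communities, sparse across

# Sample a symmetric, loop-free adjacency matrix edge by edge.
A = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(i + 1, n):
        A[i, j] = A[j, i] = rng.random() < B[z[i], z[j]]

# Sanity check: within-community density should dominate cross-community density.
same = np.equal.outer(z, z)
within = A[same].mean()
across = A[~same].mean()
```

Inference in the paper's models runs this generative story in reverse: given `A` (and covariates), recover the memberships `z` and the parameters that explain them.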

Citations (1)
