Graph Laplacian Learning with Exponential Family Noise (2306.08201v2)

Published 14 Jun 2023 in cs.LG and eess.SP

Abstract: Graph signal processing (GSP) is a prominent framework for analyzing signals on non-Euclidean domains. The graph Fourier transform (GFT) uses the combinatorial graph Laplacian matrix to reveal the spectral decomposition of signals in the graph frequency domain. However, a common challenge in applying GSP methods is that in many scenarios the underlying graph of a system is unknown. A solution in such cases is to construct the unobserved graph from available data, which is commonly referred to as graph or network inference. Although different graph inference methods exist, these are restricted to learning either from smooth graph signals or from signals corrupted by simple additive Gaussian noise. Other types of noisy data, such as discrete counts or binary digits, are rather common in real-world applications, yet are underexplored in graph inference. In this paper, we propose a versatile graph inference framework for learning from graph signals corrupted by exponential family noise. Our framework generalizes previous methods from continuous smooth graph signals to various data types. We propose an alternating algorithm that jointly estimates the graph Laplacian and the unobserved smooth representation from the noisy signals. We also extend our approach to a variational form to account for the inherent stochasticity of the latent smooth representation. Finally, since real-world graph signals are frequently non-independent and temporally correlated, we further adapt our original setting to a time-vertex formulation. We demonstrate on synthetic and real-world data that our new algorithms outperform competing Laplacian estimation methods that suffer from noise model mismatch.
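To make the alternating scheme concrete, below is a minimal, illustrative sketch — not the authors' implementation — assuming Poisson-distributed counts as the exponential-family noise (natural parameter = log-rate) and a Kalofolias-style smoothness-plus-log-degree objective for the graph update. All hyperparameters (alpha, beta, gamma, step sizes) and the toy data are assumptions for illustration only.

```python
# Hedged sketch of alternating graph-Laplacian learning under Poisson noise.
# Not the paper's algorithm; a plausible instantiation of the idea it describes.
import numpy as np

rng = np.random.default_rng(0)

def poisson_nll(X, Y):
    """Poisson negative log-likelihood with natural parameter X (log-rate)."""
    return np.sum(np.exp(X) - Y * X)

def fit(Y, alpha=1.0, beta=0.1, gamma=0.5, n_iter=200, lr_x=1e-2, lr_w=1e-3):
    """Alternately estimate latent smooth log-rates X and edge weights W."""
    n, m = Y.shape
    X = np.log1p(Y)                       # init latent log-rates from counts
    W = np.ones((n, n)) - np.eye(n)       # init fully connected graph
    for _ in range(n_iter):
        # --- X-step: gradient descent on Poisson NLL + smoothness tr(X^T L X)
        L = np.diag(W.sum(1)) - W         # combinatorial Laplacian
        grad_X = (np.exp(X) - Y) + 2 * alpha * L @ X
        X -= lr_x * grad_X
        # --- W-step: projected gradient on smoothness + log-degree barrier
        Z = np.square(X[:, None, :] - X[None, :, :]).sum(-1)  # pairwise dists
        d = np.maximum(W.sum(1), 1e-10)   # node degrees, clipped for stability
        grad_W = alpha * Z - gamma * (1.0 / d[:, None] + 1.0 / d[None, :]) \
                 + 2 * beta * W
        W = np.maximum(W - lr_w * grad_W, 0.0)  # project onto nonnegativity
        W = 0.5 * (W + W.T)               # keep symmetric
        np.fill_diagonal(W, 0.0)          # no self-loops
    return W, X

# Toy usage: Poisson counts whose log-rates vary smoothly around a ring.
n, m = 20, 100
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
X_true = 0.5 * np.outer(np.sin(t), np.ones(m)) + 0.1 * rng.standard_normal((n, m))
Y = rng.poisson(np.exp(X_true))
W_hat, X_hat = fit(Y)
print("learned edges:", int((W_hat > 1e-3).sum() // 2))
print("final Poisson NLL:", poisson_nll(X_hat, Y))
```

In the paper's more general exponential-family setting, the hard-coded exp(X) term would be replaced by the gradient of the chosen family's log-partition function; Poisson is used here only because its link makes the sketch compact.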
