
Efficient PAC Learnability of Dynamical Systems Over Multilayer Networks (2405.06884v2)

Published 11 May 2024 in cs.LG

Abstract: Networked dynamical systems are widely used as formal models of real-world cascading phenomena, such as the spread of diseases and information. Prior research has addressed the problem of learning the behavior of an unknown dynamical system when the underlying network has a single layer. In this work, we study the learnability of dynamical systems over multilayer networks, which are more realistic and challenging. First, we present an efficient PAC learning algorithm with provable guarantees to show that the learner only requires a small number of training examples to infer an unknown system. We further provide a tight analysis of the Natarajan dimension, which measures the model complexity. Asymptotically, our bound on the Natarajan dimension is tight for almost all multilayer graphs. The techniques and insights from our work provide the theoretical foundations for future investigations of learning problems for multilayer dynamical systems.
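To make the setting concrete, here is a minimal sketch of the kind of system the abstract describes: a synchronous threshold dynamical system over a multilayer network. The exact update rule studied in the paper is not given in the abstract, so the rule below (a node activates once its active neighbors, aggregated across all layers, reach a per-node threshold) and all names in the code are illustrative assumptions.

```python
def step(states, layers, thresholds):
    """One synchronous update of an assumed multilayer threshold system.

    states:     list of 0/1 node states
    layers:     list of adjacency dicts, one per network layer
    thresholds: per-node activation thresholds (hypothetical rule:
                activity is summed over every layer)
    """
    n = len(states)
    new_states = []
    for i in range(n):
        # Count active neighbors of node i across all layers.
        active = sum(states[j] for adj in layers for j in adj[i])
        new_states.append(1 if active >= thresholds[i] else states[i])
    return new_states

# A toy two-layer network over 4 nodes (adjacency lists per layer).
layer_a = {0: [1], 1: [0, 2], 2: [1], 3: []}
layer_b = {0: [3], 1: [], 2: [3], 3: [0, 2]}
thresholds = [1, 1, 2, 1]

states = [1, 0, 0, 0]
states = step(states, [layer_a, layer_b], thresholds)  # cascade spreads via both layers
```

A PAC learner in this setting would observe (state, next-state) training pairs generated by such a system and infer the unknown per-node update functions; the paper's sample-complexity guarantee bounds how many such pairs suffice.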

