Deep Neural Networks via Complex Network Theory: a Perspective (2404.11172v2)

Published 17 Apr 2024 in cs.LG and cs.AI

Abstract: Deep Neural Networks (DNNs) can be represented as graphs whose links and vertices iteratively process data and solve tasks sub-optimally. Complex Network Theory (CNT), merging statistical physics with graph theory, provides a method for interpreting neural networks by analysing their weights and neuron structures. However, classic works adapt CNT metrics that only permit a topological analysis as they do not account for the effect of the input data. In addition, CNT metrics have been applied to a limited range of architectures, mainly including Fully Connected neural networks. In this work, we extend the existing CNT metrics with measures that sample from the DNNs' training distribution, shifting from a purely topological analysis to one that connects with the interpretability of deep learning. For the novel metrics, in addition to the existing ones, we provide a mathematical formalisation for Fully Connected, AutoEncoder, Convolutional and Recurrent neural networks, for which we vary the activation functions and the number of hidden layers. We show that these metrics differentiate DNNs based on the architecture, the number of hidden layers, and the activation function. Our contribution provides a method rooted in physics for interpreting DNNs that offers insights beyond the traditional input-output relationship and the CNT topological analysis.
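The abstract describes computing CNT metrics over a DNN's weighted graph and extending them with input-dependent measures sampled from the training distribution. The sketch below is illustrative only and is not the paper's formalisation: it computes a classic topological node strength for the hidden neurons of a toy fully connected network, plus a hypothetical input-aware variant in which each link is weighted by the signal it carries when training samples are propagated through the layer.

```python
# Illustrative sketch (NumPy); the metric definitions here are assumptions
# standing in for the paper's formalisation, not its exact equations.
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected network: input (8) -> hidden (16) -> output (4).
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 4))

def node_strength(W_in, W_out):
    """Classic (topological) CNT node strength of each hidden neuron:
    sum of absolute incoming plus outgoing link weights."""
    return np.abs(W_in).sum(axis=0) + np.abs(W_out).sum(axis=1)

def input_aware_strength(W_in, W_out, X, act=np.tanh):
    """Hypothetical input-dependent variant: each link is weighted by the
    mean absolute signal it carries when inputs X (samples drawn from the
    training distribution) pass through the layer."""
    H = act(X @ W_in)                                          # hidden activations
    incoming = np.abs(X[:, :, None] * W_in[None, :, :]).sum(axis=1).mean(axis=0)
    outgoing = np.abs(H[:, :, None] * W_out[None, :, :]).sum(axis=2).mean(axis=0)
    return incoming + outgoing

X = rng.normal(size=(256, 8))            # stand-in for training samples
print(node_strength(W1, W2))             # depends only on the weights (topology)
print(input_aware_strength(W1, W2, X))   # additionally depends on the input data
```

Comparing such per-neuron vectors across architectures, depths, and activation functions is the kind of analysis the abstract refers to; only that general idea is taken from the abstract, while the specific weighting scheme above is an assumption made for illustration.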

