A rank decomposition for the topological classification of neural representations (2404.19710v3)

Published 30 Apr 2024 in cs.LG, math.AT, and q-bio.NC

Abstract: Neural networks can be thought of as applying a transformation to an input dataset. The way in which they change the topology of such a dataset often holds practical significance for many tasks, particularly those demanding non-homeomorphic mappings for optimal solutions, such as classification problems. In this work, we leverage the fact that neural networks are equivalent to continuous piecewise-affine maps, whose rank can be used to pinpoint regions in the input space that undergo non-homeomorphic transformations, leading to alterations in the topological structure of the input dataset. Our approach enables us to make use of the relative homology sequence, with which one can study the homology groups of the quotient of a manifold $\mathcal{M}$ and a subset $A$, assuming some minimal properties on these spaces. As a proof of principle, we empirically investigate the presence of low-rank (topology-changing) affine maps as a function of network width and mean weight. We show that in randomly initialized narrow networks, there will be regions in which the (co)homology groups of a data manifold can change. As the width increases, the homology groups of the input manifold become more likely to be preserved. We end this part of our work by constructing highly non-random wide networks that do not have this property and relating this non-random regime to Dale's principle, which is a defining characteristic of biological neural networks. Finally, we study simple feedforward networks trained on MNIST, as well as on toy classification and regression tasks, and show that networks manipulate the topology of data differently depending on the continuity of the task they are trained on.
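To make the rank criterion described in the abstract concrete, the sketch below is a minimal illustration, not the authors' code: it assumes a toy fully connected ReLU network given by weight matrices `Ws` and biases `bs` (with ReLU applied at every layer purely for brevity), and the helper name `local_affine_map` is hypothetical. It computes the affine map that the network applies on the linear region containing a given input and checks whether that map is rank-deficient, which is the signature of a locally non-homeomorphic (topology-changing) transformation.

```python
import numpy as np

def local_affine_map(Ws, bs, x):
    """Return (A, c) such that the ReLU network acts as h -> A h + c
    on the linear region containing x, obtained by propagating x and
    masking each layer with its activation pattern."""
    A = np.eye(len(x))
    c = np.zeros(len(x))
    h = np.asarray(x, dtype=float)
    for W, b in zip(Ws, bs):
        pre = W @ h + b
        mask = (pre > 0).astype(float)   # ReLU activation pattern D
        A = (W * mask[:, None]) @ A      # compose: A <- D W A
        c = mask * (W @ c + b)           # compose: c <- D (W c + b)
        h = np.maximum(pre, 0.0)
    return A, c

# Toy example: a random narrow ReLU net with 3-dimensional input.
rng = np.random.default_rng(0)
dims = [3, 4, 3]
Ws = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
bs = [rng.standard_normal(dims[i + 1]) for i in range(len(dims) - 1)]

x = rng.standard_normal(dims[0])
A, _ = local_affine_map(Ws, bs, x)
r = np.linalg.matrix_rank(A)
print(f"local rank = {r} (full rank would be {dims[0]}); "
      "a rank deficit flags a non-homeomorphic region")
```

In the spirit of the experiments summarized above, one would evaluate this local rank at many sample points of a data manifold, and across networks of varying width or mean weight, to estimate how frequently low-rank (topology-changing) regions occur.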
