
An optimization-based equilibrium measure describes non-equilibrium steady state dynamics: application to edge of chaos (2401.10009v2)

Published 18 Jan 2024 in q-bio.NC, cond-mat.stat-mech, and cs.NE

Abstract: Understanding neural dynamics is a central topic in machine learning, non-linear physics, and neuroscience. However, the dynamics are non-linear, stochastic, and, in particular, non-gradient: the driving force cannot be written as the gradient of a potential. These features make analytic studies very challenging. The common tools are the path integral approach and dynamical mean-field theory, but their drawback is that one must solve integro-differential or dynamical mean-field equations, which is computationally expensive and admits no closed-form solutions in general. From the perspective of the associated Fokker-Planck equation, the steady-state solution is likewise generally unknown. Here, we treat the search for steady states as an optimization problem and construct an approximate potential related to the speed of the dynamics; searching for the ground state of this potential is then equivalent to running an approximate stochastic gradient (Langevin) dynamics. Only in the zero-temperature limit is the distribution of the original steady states recovered. The resulting stationary state of this dynamics follows exactly the canonical Boltzmann measure. Within this framework, the quenched disorder intrinsic to neural networks can be averaged out by the replica method, which naturally yields order parameters for the non-equilibrium steady states. Our theory reproduces the well-known edge-of-chaos result; moreover, the order parameters characterizing the continuous transition are derived and interpreted as fluctuations and responses of the steady states. Our method thus opens the door to analytic study of the steady-state landscape of deterministic or stochastic high-dimensional dynamics.
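The construction described in the abstract can be made concrete with a minimal numerical sketch. The code below is not the authors' implementation; it assumes the standard random recurrent network f(x) = -x + W tanh(x) common in the edge-of-chaos literature, with all parameter values chosen purely for illustration. The "speed" potential E(x) = ½‖f(x)‖² is minimized by overdamped Langevin dynamics, whose stationary law is the Boltzmann measure ∝ exp(-E/T); in the T → 0 limit this measure concentrates on the steady states f(x) = 0.

```python
import numpy as np

# Illustrative sketch (assumptions, not the paper's code): steady states of
# the non-gradient dynamics dx/dt = f(x) = -x + W tanh(x) are sought as
# minima of the speed potential E(x) = 0.5 * ||f(x)||^2, sampled with
# overdamped Langevin dynamics whose stationary distribution is the
# Boltzmann measure P(x) ∝ exp(-E(x)/T); T → 0 concentrates on f(x) = 0.

rng = np.random.default_rng(0)
N, g = 100, 0.5                                    # network size, coupling gain
W = rng.normal(0.0, g / np.sqrt(N), size=(N, N))   # quenched random couplings

def f(x):
    """Non-gradient driving force of the original dynamics."""
    return -x + W @ np.tanh(x)

def grad_E(x):
    """Gradient of E(x) = 0.5 * ||f(x)||^2, via the Jacobian of f."""
    J = -np.eye(N) + W * (1.0 / np.cosh(x) ** 2)   # J_ij = ∂f_i/∂x_j
    return J.T @ f(x)

x = rng.normal(size=N)
dt, T = 1e-2, 1e-4                                 # step size, small temperature
for _ in range(20000):
    x += -grad_E(x) * dt + np.sqrt(2 * T * dt) * rng.normal(size=N)

# Residual speed is near zero close to a steady state of the original dynamics.
print("residual speed ||f(x)||:", np.linalg.norm(f(x)))
```

At this sub-critical gain (g < 1) the Langevin sampler settles near the trivial fixed point x = 0; raising g past the transition at g = 1 is where nontrivial steady states appear and the zero-temperature measure over E becomes the object the paper analyzes with the replica method.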
