The Kinetic Limit of Balanced Neural Networks (2505.18481v1)
Abstract: The theory of balanced neural networks is a popular explanation for the high degree of variability and stochasticity in the brain's activity. Roughly speaking, it posits that a typical neuron receives many excitatory and inhibitory inputs whose network-wide means cancel, leaving only the stochastic fluctuations about the mean. In this paper we derive kinetic equations that describe the population density. The intrinsic dynamics is nonlinear, with multiplicative noise perturbing the state of each neuron, and the equations have a spatial dimension: the strength of connection between neurons is a function of their spatial positions. Our method of proof is to decompose the state variables into (i) the network-wide average activity and (ii) fluctuations about this mean. In the limit we obtain two coupled equations: the requirement that the system be balanced yields implicit equations for the evolution of the average activity, while in the large-n limit the population density of the fluctuations evolves according to a Fokker-Planck equation. Under the additional assumption that the intrinsic dynamics is linear and the noise is additive rather than multiplicative, one recovers a spatially distributed neural field equation.
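To make the decomposition in the abstract concrete, here is a minimal sketch assuming the classical 1/√n coupling scaling of balanced-network theory; the notation (x_i, J_ij, f, g, σ, b, D) is illustrative and the paper's precise model, scaling, and spatial kernel may differ.

```latex
% Schematic of the balanced decomposition (illustrative notation only).
% Each neuron i has state x_i, driven by O(1/sqrt(n)) couplings J_ij
% (which may depend on the neurons' spatial positions):
\[
  \mathrm{d}x_i \;=\; f(x_i)\,\mathrm{d}t
    \;+\; \frac{1}{\sqrt{n}} \sum_{j=1}^{n} J_{ij}\, g(x_j)\,\mathrm{d}t
    \;+\; \sigma(x_i)\,\mathrm{d}W_i .
\]
% Decompose each state into the network-wide average plus a fluctuation:
\[
  x_i \;=\; \bar{x} + \tilde{x}_i,
  \qquad \bar{x} \;=\; \frac{1}{n}\sum_{j=1}^{n} x_j .
\]
% Balance requires the O(sqrt(n)) mean input to cancel, yielding an implicit
% equation for \bar{x}. The empirical density of the fluctuations \tilde{x}_i
% then converges, as n -> infinity, to a Fokker--Planck equation of the
% generic form
\[
  \partial_t p(t,\tilde{x})
    \;=\; -\,\partial_{\tilde{x}}\!\bigl( b(t,\tilde{x})\, p \bigr)
    \;+\; \tfrac{1}{2}\,\partial_{\tilde{x}}^{2}\!\bigl( D(t,\tilde{x})\, p \bigr),
\]
% where the drift b and diffusion D would be determined self-consistently
% by the average activity and the connectivity.
```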