Local E-Prop: Efficient Online Learning
- Local E-Prop is a biologically plausible learning rule that employs per-synapse eligibility traces and local error signals to approximate gradient descent in RNNs and SNNs.
- It utilizes a three-factor update mechanism combining pre-synaptic activity, post-synaptic state, and broadcast learning signals to ensure strict locality and causal computation.
- Event-driven implementations reduce computational overhead in sparse networks, enabling scalable and energy-efficient learning on neuromorphic hardware.
Local E-Prop is a strictly local, causal, and online learning rule for recurrent neural networks (RNNs), most notably spiking neural networks (SNNs), that implements temporal credit assignment using per-synapse eligibility traces and locally available learning signals. Distinguished by its biological plausibility, computational efficiency, and scalability, local E-Prop approximates the exact gradient of the Backpropagation Through Time (BPTT) algorithm while eliminating its non-local and non-causal dependencies. Recent developments extend E-Prop to deep, multi-layer and large-scale sparse networks, and provide event-driven implementations fully compatible with neuromorphic hardware constraints (Korcsak-Gorzo et al., 26 Nov 2025, Millidge, 30 Dec 2025).
1. Mathematical Framework and Derivation
Local E-Prop emerges from a rigorous factorization of the BPTT or Real-Time Recurrent Learning (RTRL) gradient into two terms: (i) a forward-accumulated "eligibility trace" updated locally at each synapse, and (ii) a learning signal reflecting the error at the corresponding postsynaptic neuron and time step. Consider a generic RNN or SNN with recurrent weights $W_{ji}$, postsynaptic hidden state $h_j^t$, and spike output $z_j^t$. The total loss gradient is expressed as:

$$\frac{dE}{dW_{ji}} = \sum_t L_j^t \, e_{ji}^t,$$

where $L_j^t = \frac{dE}{dz_j^t}$ (or a local approximation thereof) and $e_{ji}^t$ is the eligibility trace for synapse $(j,i)$ at time $t$.

The eligibility trace is constructed recursively:

$$\epsilon_{ji}^t = \frac{\partial h_j^t}{\partial h_j^{t-1}} \, \epsilon_{ji}^{t-1} + \frac{\partial h_j^t}{\partial W_{ji}}, \qquad e_{ji}^t = \frac{\partial z_j^t}{\partial h_j^t} \, \epsilon_{ji}^t,$$

with $\epsilon_{ji}^0 = 0$. This structure supports efficient forward-in-time updates with $\mathcal{O}(1)$ storage per synapse (Traub et al., 2020, Martín-Sánchez et al., 2022).
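For concreteness, the following is a minimal sketch of this forward-in-time recursion for a single layer of leaky integrate-and-fire (LIF) neurons with a piecewise-linear surrogate gradient. It is an illustration of the recursion above, not code from the cited papers; the parameter names (`alpha`, `v_th`, `gamma`) are assumptions.

```python
import numpy as np

def lif_eprop_traces(x, w_in, alpha=0.9, v_th=1.0, gamma=0.3):
    """Forward-in-time eligibility traces for one LIF layer (illustrative sketch).

    x      : (T, n_in)      presynaptic spike trains
    w_in   : (n_rec, n_in)  input weights
    Returns e with e[t, j, i] = psi_j^t * eps_ji^t.
    """
    T, n_in = x.shape
    n_rec = w_in.shape[0]
    v = np.zeros(n_rec)                 # membrane potentials h_j^t
    eps = np.zeros((n_rec, n_in))       # eligibility vectors eps_ji^t
    e = np.zeros((T, n_rec, n_in))      # eligibility traces e_ji^t
    for t in range(T):
        v = alpha * v + w_in @ x[t]     # leaky integration
        z = (v >= v_th).astype(float)   # spike output z_j^t
        psi = gamma * np.maximum(0.0, 1.0 - np.abs((v - v_th) / v_th))  # surrogate dz/dv
        # eps_ji^t = (dh^t/dh^{t-1}) eps_ji^{t-1} + dh^t/dW_ji = alpha * eps + x_i^t
        eps = alpha * eps + x[t][None, :]
        e[t] = psi[:, None] * eps       # e_ji^t = psi_j^t * eps_ji^t
        v = v - z * v_th                # soft reset after a spike
    return e
```

The weight update then accumulates the product of the learning signal $L_j^t$ and these traces over time, as in the gradient expression above.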
For ALIF neurons, the hidden state is two-dimensional (membrane potential $v_j^t$ and threshold-adaptation variable $a_j^t$), and the corresponding two-component eligibility vector evolves according to explicit equations detailed in (Korcsak-Gorzo et al., 26 Nov 2025).
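As an illustrative reconstruction following the standard E-Prop treatment of ALIF neurons (with $\alpha$, $\rho$, and $\beta$ denoting membrane decay, adaptation decay, and threshold-adaptation strength; not copied from the cited implementation), the two-component eligibility vector and resulting trace take the form:

$$\epsilon_{ji,v}^{t} = \alpha\,\epsilon_{ji,v}^{t-1} + z_i^{t-1}, \qquad \epsilon_{ji,a}^{t} = \psi_j^{t-1}\,\epsilon_{ji,v}^{t-1} + \bigl(\rho - \psi_j^{t-1}\beta\bigr)\,\epsilon_{ji,a}^{t-1}, \qquad e_{ji}^{t} = \psi_j^{t}\bigl(\epsilon_{ji,v}^{t} - \beta\,\epsilon_{ji,a}^{t}\bigr),$$

where $\psi_j^t$ is the pseudo-derivative of the spike function and the adaptation variable $a_j^t$ raises the effective firing threshold by $\beta a_j^t$ after each spike.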
2. Locality, Biological Plausibility, and Three-Factor Updates
Local E-Prop enforces strict spatial and temporal locality. Each synaptic update to $W_{ji}$ at time $t$ depends solely on:
- The pre-synaptic activity $z_i^{t-1}$ (spike or rate)
- The post-synaptic hidden state $h_j^t$ and a surrogate gradient of the spike function (the pseudo-derivative $\psi_j^t$)
- A broadcast learning signal $L_j^t$, obtained from the error at the output layer and relayed to neuron $j$ via a fixed, random feedback matrix $B$ ("Broadcast Alignment") (Korcsak-Gorzo et al., 26 Nov 2025, Wycoff et al., 2020)
This three-factor structure, combining pre- and post-synaptic activity (via the eligibility trace) with a modulatory error signal, implements a paradigm compatible with biological theories of Hebbian plasticity modulated by neuromodulators, and it is sufficient to match the full STDP window in appropriately configured spiking neuron models (Traub et al., 2020).
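A minimal sketch of the resulting three-factor update with a fixed random feedback matrix is given below; the function and variable names are illustrative assumptions, not a reference implementation.

```python
import numpy as np

def eprop_three_factor_step(W, e_trace, y, y_target, B, lr=1e-3):
    """One e-prop weight update at a single time step (illustrative sketch).

    W        : (n_rec, n_pre)  synaptic weights being trained
    e_trace  : (n_rec, n_pre)  eligibility traces e_ji^t (pre x post factor)
    y        : (n_out,)        network readout at time t
    y_target : (n_out,)        target at time t
    B        : (n_rec, n_out)  fixed random feedback matrix (broadcast alignment)
    """
    err = y - y_target              # output error (squared-error readout assumed)
    L = B @ err                     # per-neuron learning signal L_j^t (third factor)
    W -= lr * L[:, None] * e_trace  # Delta W_ji = -lr * L_j^t * e_ji^t
    return W
```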
3. Event-Driven and Sparse Implementations
Traditional E-Prop implementations performed synaptic updates at each simulation time step ("time-driven"), incurring $\mathcal{O}(T)$ cost per synapse per trial. Event-driven E-Prop, as introduced in (Korcsak-Gorzo et al., 26 Nov 2025), triggers updates strictly on presynaptic spike arrival at a synapse. Upon each event, the synapse retrieves the archived postsynaptic and presynaptic state necessary for eligibility computation and applies the update instantaneously:
- Only active synapses process information, reducing computation by at least an order of magnitude (the vast majority of per-step updates are skipped in very sparse networks)
- All updates remain strictly causal and fully local
- Enables scaling to millions of neurons and massive synapse counts on neuromorphic and HPC platforms
For large, sparse topologies, in which each neuron connects to only a small fraction of all possible partners, event-driven E-Prop exhibits linear scaling with spike count and can be implemented efficiently in frameworks such as NEST using per-synapse FIFO history buffers and event handlers (Korcsak-Gorzo et al., 26 Nov 2025).
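The sketch below illustrates the event-driven pattern described above: the postsynaptic neuron archives its per-step state, and the synapse consumes that history only when a presynaptic spike arrives. It is a schematic in plain Python, not the NEST implementation; all names and the simple scalar-trace model are assumptions.

```python
from collections import deque

class EventDrivenEpropSynapse:
    """Schematic event-driven e-prop synapse (illustrative, not NEST code)."""

    def __init__(self, weight, alpha=0.9, lr=1e-3, max_history=1000):
        self.weight = weight
        self.alpha = alpha          # per-step decay of the presynaptic trace
        self.lr = lr
        self.eps = 0.0              # low-pass filtered presynaptic trace
        self.last_update_step = 0
        self.post_history = deque(maxlen=max_history)  # FIFO of (psi_t, L_t)

    def archive_post(self, psi_t, L_t):
        """Called by the postsynaptic neuron at every simulation step."""
        self.post_history.append((psi_t, L_t))

    def on_pre_spike(self, step):
        """Replay the skipped steps, accumulate the update, register the spike."""
        dw = 0.0
        for _ in range(self.last_update_step, step):
            self.eps *= self.alpha                    # decay between events
            if self.post_history:
                psi, L = self.post_history.popleft()  # archived postsynaptic state
                dw += L * psi * self.eps              # L_j^t * e_ji^t
        self.weight -= self.lr * dw
        self.eps += 1.0                               # register the presynaptic spike
        self.last_update_step = step
```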
4. Generalization to Deep Networks and Model Classes
E-Prop was originally developed for single-layer recurrent SNNs and RNNs. The formalism has been extended to deep RNNs comprising multiple recurrent and feedforward layers. Given $L$ layers indexed by $l$, the eligibility trace at depth $l$ is updated by the same forward recursion as above, now driven by the activity of the layer below:

$$\epsilon_{ji}^{(l),t} = \frac{\partial h_j^{(l),t}}{\partial h_j^{(l),t-1}}\,\epsilon_{ji}^{(l),t-1} + \frac{\partial h_j^{(l),t}}{\partial W_{ji}^{(l)}}, \qquad e_{ji}^{(l),t} = \frac{\partial z_j^{(l),t}}{\partial h_j^{(l),t}}\,\epsilon_{ji}^{(l),t},$$

where $h_j^{(l),t}$ and $z_j^{(l),t}$ are the hidden state and output of neuron $j$ in layer $l$, $W^{(l)}$ are that layer's weights, and the presynaptic input to layer $l$ is the output $z^{(l-1),t}$ of the layer below (with $z^{(0),t}$ the network input) (Millidge, 30 Dec 2025). This recursion enables online credit assignment both across time and through network depth, facilitating accurate and local weight updates in arbitrarily deep architectures.
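A compact sketch of how per-layer traces and layer-specific learning signals combine in a deep stack is given below. It is an illustrative rate-based toy (with tanh units and per-layer random feedback matrices), not the formulation of the cited papers; the data structure and names are assumptions.

```python
import numpy as np

def deep_eprop_step(layers, x_t, err_t, lr=1e-3):
    """One time step of layer-wise e-prop in a deep recurrent stack (sketch).

    Each entry of `layers` is a dict with that layer's input weights `W`,
    leak `alpha`, running eligibility vectors `eps`, state `v`, and a fixed
    random feedback matrix `B` mapping the output error to this layer.
    """
    z = x_t                                          # z^(0),t : network input
    for layer in layers:
        layer["v"] = layer["alpha"] * layer["v"] + layer["W"] @ z
        out = np.tanh(layer["v"])                    # layer output z^(l),t
        dz_dv = 1.0 - out ** 2                       # exact local derivative here
        # per-layer eligibility recursion, driven by the layer below
        layer["eps"] = layer["alpha"] * layer["eps"] + z[None, :]
        e = dz_dv[:, None] * layer["eps"]            # e_ji^(l),t
        L = layer["B"] @ err_t                       # layer-specific learning signal
        layer["W"] -= lr * L[:, None] * e            # local three-factor update
        z = out                                      # feeds the next layer
    return z
```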
The eligibility-trace paradigm has also been adapted to architectures such as LSTMs (Hoyer et al., 2022), which require separate tracking for multiple gating variables and nonlinear cell states. For LSTM gates, eligibility traces propagate through forget gates, and extensions such as “trace echo” and bias initializations can enhance long-range temporal credit assignment.
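For the LSTM case, the trace for the candidate (cell-input) weights can be written in the following form, shown here as an illustrative sketch rather than the exact formulation of (Hoyer et al., 2022):

$$\epsilon_{ji}^{t} = f_j^{t}\,\epsilon_{ji}^{t-1} + i_j^{t}\,\bigl(1 - (\tilde{c}_j^{t})^{2}\bigr)\,x_i^{t}, \qquad e_{ji}^{t} = o_j^{t}\,\bigl(1 - \tanh^{2}(c_j^{t})\bigr)\,\epsilon_{ji}^{t},$$

where $f_j^t$, $i_j^t$, and $o_j^t$ are the forget, input, and output gates, $\tilde{c}_j^t$ the candidate cell input, $c_j^t$ the cell state, and $x_i^t$ the presynaptic input; the forget gate $f_j^t$ plays the role of the decay factor that carries the trace forward in time.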
5. Computational Complexity, Performance, and Comparison to BPTT and RTRL
Local E-Prop matches the per-trial time complexity of BPTT while avoiding its non-causal backward pass and its memory footprint, which grows with sequence length:
| Rule | Time (per trial) | Memory (per trial) | Locality |
|---|---|---|---|
| BPTT | $\mathcal{O}(T N^2)$ | $\mathcal{O}(T N)$ | Non-local, non-causal |
| RTRL | $\mathcal{O}(T N^4)$ | $\mathcal{O}(N^3)$ | Local, causal, but infeasible at scale |
| E-Prop | $\mathcal{O}(T N^2)$ | $\mathcal{O}(N^2)$ | Local, causal |

($N$ is the number of neurons and $T$ the sequence length, for a fully connected recurrent network; in terms of the number of synapses $S$, E-Prop requires $\mathcal{O}(S)$ memory and $\mathcal{O}(T S)$ compute) (Martín-Sánchez et al., 2022).
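As an illustrative back-of-the-envelope comparison (numbers chosen here for exposition, not taken from the cited work): for a fully recurrent network with $N = 10^3$ neurons, $S = 10^6$ synapses, and $T = 10^3$ time steps, BPTT must cache on the order of $T N = 10^6$ past activations for its backward pass, RTRL would maintain roughly $N^3 = 10^9$ sensitivity entries, while E-Prop keeps a single running trace per synapse, about $S = 10^6$ values, updated strictly forward in time.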
Empirically, local E-Prop closely matches BPTT with minor accuracy penalties on conventional tasks such as sequential MNIST (BPTT reaching slightly higher accuracy than E-Prop), and with appropriate extensions it may outperform BPTT in long-delay cases (Hoyer et al., 2022). Crucially, it also achieves internal dynamics and "neural similarity" comparable to BPTT on neuroscience benchmarks (e.g., Procrustes, CKA, dPCA metrics) when matched for task performance (Liu et al., 7 Jun 2025).
Scalability results show that event-driven E-Prop enables near-linear scaling to networks of 2 million neurons, with only modest overhead for plasticity compared to static-synapse simulations (Korcsak-Gorzo et al., 26 Nov 2025).
6. Hardware Realizations and Neuromorphic Adaptations
Local E-Prop is well-suited for direct neuromorphic hardware implementation. Event-driven variants such as ETLP (Quintana et al., 2023) encode eligibility-trace update logic and learning signal broadcast via simple digital primitives (e.g., three clock cycles per update on an FPGA), requiring only local variables (pre-synaptic trace, post-synaptic membrane, local feedback). Dynamic energy consumption is minimal (sub-mW per neuron at 100 Hz update rates in modern FPGAs/ASICs), enabling real-time, on-chip adaptation for embedded intelligence. Bayesian variants of local E-Prop have also been proposed, using fixed random feedback weights and per-synapse variational inference, compatible with stochastic hardware and low-power streaming computation (Wycoff et al., 2020).
7. Limitations, Extensions, and Biological Considerations
The locality of E-Prop comes at the cost of discarding nonlocal error terms in the learning signal, which may limit learning for certain strongly recurrent tasks, although practical losses are often minor. Use of fixed random feedback via broadcast alignment sidesteps the biologically implausible requirement for symmetric weight transport. Extensions recover full spike-timing dependent plasticity (STDP) windows by augmenting spike reset and pseudo-derivative terms (Traub et al., 2020). Firing-rate regularization and homeostatic constraints can be incorporated locally. The formalism is compatible with further biologically-inspired mechanisms and generalizes across numerous neuron models (ALIF, LIF, Izhikevich, LSTM).
In summary, local E-Prop provides a tractable, scalable, and biologically plausible framework for online learning in recurrent neural systems, reconciling the efficiency and accuracy needs of modern SNN and RNN architectures with the constraints and mechanisms observed in biological brains (Korcsak-Gorzo et al., 26 Nov 2025, Millidge, 30 Dec 2025).