Quantum LIF Neurons

Updated 14 February 2026
  • Quantum LIF neurons are stochastic two-state units with discrete spiking dynamics governed by a bandgap mechanism analogous to quantum energy transitions.
  • They map neural spiking probabilities via logistic functions and effective temperature, integrating thermodynamics with Boltzmann sampling for deep belief networks.
  • Their implementation in digital and analog neuromorphic hardware enables energy-efficient pattern recognition, on-chip learning, and time-series prediction through event-driven contrastive divergence.

Quantum leaky integrate-and-fire (LIF) neurons extend classical LIF models to describe discrete state transitions, stochasticity, and rhythmic synchrony in spiking networks, particularly as realized in Boltzmann-type neural architectures. By bridging statistical mechanics and neural computation, they map neural spiking probabilities onto thermodynamic temperature and logistic activation functions, providing a theoretical and practical framework for stochastic neural sampling, deep belief networks, and neuromorphic hardware implementations (Merolla et al., 2010, Das et al., 2015, Neftci et al., 2013, Osogami et al., 2015, Osogami, 2016).

1. Fundamental Neuron Model and Bandgap Formalism

The quantum LIF neuron functions as a two-state stochastic unit within a rhythmic, clocked network. Membrane dynamics are governed by

$$\tau_m\,\dot u(t) = u_0 - u(t) + R\,[I_{\rm in}(t) - I_{\alpha}(t)],$$

with an adaptation current

$$\tau_\alpha\,\dot I_\alpha(t) = -I_\alpha(t) + \Delta_\alpha\,\delta(t - t^{(f)}),$$

where $u(t)$ is the membrane potential, reset to a baseline $u_0$ upon crossing threshold $u_t$. The neural state in a given window of duration $T_W$ is binary ($s_i = 1$ if at least one spike occurs, else $s_i = 0$). The "bandgap" $\Delta = u_t - R I_0$ sets the subthreshold gap and maps to the energy gap in quantum two-state systems (Merolla et al., 2010).

The per-window firing probability adopts a sigmoidal form:

$$P_{\text{spk}}(\mu) \approx \frac{1}{1 + \exp(-\Delta/T)},$$

with effective neural temperature $T$ induced by synaptic and external Poisson noise (Merolla et al., 2010).
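The bandgap picture above can be checked numerically. The sketch below Euler-integrates a noisy LIF neuron (Gaussian current noise standing in for the Poisson input; all parameter values are illustrative, not taken from the cited papers) and estimates the per-window spike probability, which falls as the bandgap $\Delta = u_t - R I_0$ grows:

```python
import numpy as np

def window_spike_prob(I0, u_t=1.0, u0=0.0, R=1.0, tau_m=10.0,
                      sigma=0.4, T_W=50.0, dt=0.1, n_windows=2000, seed=0):
    """Estimate the per-window spike probability of a noisy LIF neuron.

    Euler-integrates tau_m * du/dt = u0 - u + R*(I0 + noise) over each
    window of length T_W and records whether threshold u_t was crossed.
    """
    rng = np.random.default_rng(seed)
    steps = int(T_W / dt)
    u = np.full(n_windows, u0, dtype=float)
    fired = np.zeros(n_windows, dtype=bool)
    for _ in range(steps):
        noise = sigma * rng.standard_normal(n_windows) / np.sqrt(dt)
        u += dt / tau_m * (u0 - u + R * (I0 + noise))
        crossed = u >= u_t
        fired |= crossed
        u[crossed] = u0  # reset to baseline upon crossing threshold
    return fired.mean()

# Larger subthreshold bandgap Delta = u_t - R*I0 => lower spike probability.
p_small_gap = window_spike_prob(I0=0.9)  # Delta = 0.1
p_large_gap = window_spike_prob(I0=0.2)  # Delta = 0.8
```

Sweeping `I0` (and hence $\Delta$) traces out the sigmoidal $P_{\text{spk}}$ curve described above.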

2. Mapping Stochastic Dynamics to Thermodynamics

Poisson-distributed excitatory and inhibitory inputs set the effective temperature felt by the neuron. Under the "fast-membrane" approximation and Ornstein-Uhlenbeck input statistics, the input mean $\mu$ and variance $\sigma^2$ are

$$\mu = R I_0 + \tau_m (\lambda_E w_e - \lambda_I w_i), \qquad \sigma^2 = \tau_m (\lambda_E w_e^2 + \lambda_I w_i^2),$$

where $\lambda_E, w_e$ and $\lambda_I, w_i$ parameterize the excitatory and inhibitory input spike rates and weights. The resulting stochasticity directly tunes the slope of the neuron's logistic response, implementing a natural Boltzmann distribution at temperature $T$ (Merolla et al., 2010).
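The moment formulas above are a few lines of arithmetic. The hypothetical helper below (rates and weights are illustrative) makes the key property explicit: a balanced increase of both Poisson rates leaves the mean $\mu$ fixed while raising the variance, i.e. it heats the effective temperature $T$ and flattens the logistic response:

```python
def input_stats(R, I0, tau_m, lam_E, w_e, lam_I, w_i):
    """Mean and variance of the diffusion-approximated synaptic input
    under Poisson excitation/inhibition (fast-membrane limit)."""
    mu = R * I0 + tau_m * (lam_E * w_e - lam_I * w_i)
    var = tau_m * (lam_E * w_e**2 + lam_I * w_i**2)
    return mu, var

# Doubling both rates in a balanced way: mean unchanged, variance doubled.
mu1, var1 = input_stats(R=1.0, I0=0.5, tau_m=10.0,
                        lam_E=2.0, w_e=0.1, lam_I=2.0, w_i=0.1)
mu2, var2 = input_stats(R=1.0, I0=0.5, tau_m=10.0,
                        lam_E=4.0, w_e=0.1, lam_I=4.0, w_i=0.1)
```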

3. Synchronous Clocking and Gibbs Sampling

Global inhibitory rhythms impose a discrete, synchronous epoch structure over the neural network. During "integration" phases (low global inhibition), neuronal dynamics evolve freely and spikes accrue. During "reset" phases (high inhibition), neural states are reset, enforcing discrete-time updates across the population in a perfect analogy with Markov Chain Monte Carlo (MCMC) and Gibbs sampling, as realized both in theoretical modeling and neuromorphic VLSI hardware (Merolla et al., 2010, Das et al., 2015, Neftci et al., 2013).

The state update protocol for an $N$-neuron Boltzmann machine is:

  • Integrate synaptic input: $x_i = \sum_j w_{ij} s_j(\text{prev})$.
  • Inject proportional current and Poisson noise for a time window $T_W$.
  • Record the neuron as $s_i = 1$ if it spikes at least once in the window, else $s_i = 0$.
  • Apply global inhibitory reset.

This process implements the Boltzmann-Gibbs distribution:

$$P(\mathbf{s}) \propto \exp\left(-E(\mathbf{s})/T\right), \qquad E(\mathbf{s}) = -\frac{1}{2}\sum_{i\neq j} w_{ij} s_i s_j.$$
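At the abstract level, the four-step protocol reduces to Gibbs sampling with logistic conditionals. A minimal sketch for a small symmetric-weight network (random weights and the unit temperature are illustrative; the spiking window is collapsed into a single Bernoulli draw per unit):

```python
import numpy as np

def gibbs_sweep(s, W, T, rng):
    """One synchronous-epoch analogue: each unit integrates its synaptic
    input and fires with P(s_i = 1) = sigmoid(sum_j w_ij s_j / T)."""
    for i in range(len(s)):
        x = W[i] @ s                          # integrate synaptic input
        p = 1.0 / (1.0 + np.exp(-x / T))      # logistic spike probability
        s[i] = 1 if rng.random() < p else 0   # binary window state
    return s

rng = np.random.default_rng(1)
N = 8
W = rng.standard_normal((N, N))
W = (W + W.T) / 2          # symmetric weights, as the energy E(s) requires
np.fill_diagonal(W, 0.0)   # no self-coupling (the i != j restriction)
s = rng.integers(0, 2, N)
for _ in range(100):       # repeated epochs: integrate, sample, reset
    s = gibbs_sweep(s, W, T=1.0, rng=rng)
```

Long-run state frequencies from such sweeps approach the Boltzmann-Gibbs distribution above; the hardware realizes the same Markov chain with physical noise in place of the pseudo-random draws.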

4. Digital and Analog Hardware Realizations

Digital LIF neurons with programmable stochastic leak and threshold allow efficient mapping of the quantum LIF framework onto neuromorphic hardware, such as IBM TrueNorth. Here, the membrane potential updates as

$$V_j(t) = V_j(t-1) + \sum_i x_i(t)\, s_{ij} - \lambda_j(t),$$

where the stochastic leak $\lambda_j(t)$ and stochastic threshold $\alpha_j(t)$ yield a neural spike response matched to a logistic function in the weight-summation domain. On-chip PRNGs generate binary and uniform randomness, enabling physical realization of logistic sampling at ultra-low power (Das et al., 2015).
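A minimal sketch of this digital update rule follows. The uniform draws for the leak and threshold, and all parameter values, are illustrative stand-ins; TrueNorth's actual PRNG and parameter encodings differ:

```python
import random

def digital_step(V, inputs, weights, leak_max, theta_mean, theta_jitter, rng):
    """One tick of a stochastic digital LIF neuron:
    V(t) = V(t-1) + sum_i x_i s_ij - lambda(t); spike if V >= alpha(t)."""
    V += sum(x * w for x, w in zip(inputs, weights))
    V -= rng.randint(0, leak_max)  # stochastic leak lambda(t)
    alpha = theta_mean + rng.randint(-theta_jitter, theta_jitter)  # stochastic threshold
    if V >= alpha:
        return 0, 1                # reset potential, emit spike
    return V, 0

rng = random.Random(42)
V, spikes = 0, 0
for _ in range(200):
    V, fired = digital_step(V, inputs=[1, 1, 0], weights=[3, 2, 4],
                            leak_max=2, theta_mean=20, theta_jitter=5, rng=rng)
    spikes += fired
```

Averaged over the random leak and threshold, the spike probability as a function of the weighted input sum traces out the logistic curve the framework requires.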

Hardware instantiation preserves theoretical performance, achieving classification and sampling metrics that match software realizations for tasks such as MNIST RBM inference and generative modeling (Das et al., 2015).

5. Event-Driven Contrastive Divergence and STDP Learning

Quantum LIF/Boltzmann networks can be trained by event-driven contrastive divergence (eCD), a biologically plausible STDP-based weight update scheme. Learning alternates between clamped (data) and free-running (reconstruction) phases, with STDP windows realizing Hebbian potentiation and depression according to spike-timing of pre- and post-synaptic neurons (Neftci et al., 2013).

The update rule—averaged over positive and negative phases—recovers the traditional contrastive divergence gradient:

$$\Delta W_{ij} \propto \langle v_i h_j \rangle_{\text{data}} - \langle v_i h_j \rangle_{\text{model}},$$

implemented through modulated, pairwise STDP.
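The averaged update is ordinary contrastive divergence. A CD-1 sketch for a small RBM (random toy data, weights, and learning rate are illustrative; the event-driven STDP realization in hardware replaces these dense matrix operations with spike-pair updates) makes the positive/negative phase structure explicit:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, v_data, lr, rng):
    """One CD-1 step: <v h>_data - <v h>_model with a single reconstruction."""
    # Positive (clamped) phase: hidden activations given the data
    ph = sigmoid(v_data @ W)
    h = (rng.random(ph.shape) < ph).astype(float)
    # Negative (free-running) phase: reconstruct visibles, re-infer hiddens
    pv = sigmoid(h @ W.T)
    v_model = (rng.random(pv.shape) < pv).astype(float)
    ph_model = sigmoid(v_model @ W)
    grad = v_data.T @ ph - v_model.T @ ph_model  # <v h>_data - <v h>_model
    return W + lr * grad / len(v_data)

rng = np.random.default_rng(0)
W = 0.01 * rng.standard_normal((6, 4))          # 6 visible, 4 hidden units
v = (rng.random((32, 6)) < 0.5).astype(float)   # toy binary "data" batch
for _ in range(10):
    W = cd1_update(W, v, lr=0.1, rng=rng)
```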

6. Dynamic Boltzmann Machines and Time-Series Models

Dynamic Boltzmann Machines (DyBM) generalize the quantum LIF framework to structured, temporal probabilistic models. The DyBM describes a chain of binary-valued spike states with feedforward connections from past to present layers, with the conditional distribution for the present layer factorizing as

$$P(\mathbf{x}^{[t]} \mid \mathbf{x}^{[:t-1]}) = \prod_j \sigma\!\left(-E_j(1 \mid \text{history})\right),$$

where the "energy" for neuron $j$ aggregates biases, recent spikes, and exponentially weighted eligibility traces (Osogami et al., 2015, Osogami, 2016).

Learning optimizes the conditional likelihood, with STDP-like plasticity: eligibility traces update synaptic weights for temporally correlated spikes, capturing both long-term potentiation (pre-before-post) and long-term depression (post-before-pre).

This structure supports a direct logistic regression interpretation for binary time-series prediction and extends to Gaussian DyBM for real-valued data, equivalently functioning as a vector autoregressive model enhanced with eligibility traces for long-term dependencies (Osogami, 2016).
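A minimal sketch of the eligibility-trace bookkeeping follows. It uses a single decay rate and folds only the bias and trace terms into the negative energy, all with illustrative parameters; the published DyBM uses multiple decay rates, a FIFO of recent spikes, and separate LTP/LTD traces:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dybm_step(x_t, alpha, b, U, lam):
    """One time step of a simplified DyBM-style predictor.

    Predicts P(x_j = 1 | history) = sigmoid(b_j + sum_i U_ij * alpha_i)
    from the eligibility traces alpha, then decays the traces and folds
    in the new observation x_t.
    """
    p = sigmoid(b + alpha @ U)   # conditional spike probabilities
    alpha = lam * alpha + x_t    # exponentially weighted spike history
    return p, alpha

rng = np.random.default_rng(3)
N = 5
U = 0.1 * rng.standard_normal((N, N))            # feedforward past->present weights
b = np.zeros(N)                                  # biases
alpha = np.zeros(N)                              # eligibility traces
seq = (rng.random((20, N)) < 0.5).astype(float)  # toy binary time series
for x in seq:
    p, alpha = dybm_step(x, alpha, b, U, lam=0.8)
```

Because the prediction is a logistic function of trace features, gradient ascent on the conditional likelihood reduces to logistic regression with locally computable, STDP-like weight updates.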

7. Practical Implications and Extensions

Quantum LIF neurons and Boltzmann frameworks enable hardware-efficient, stochastic neural computation with direct correspondence to thermodynamic models, Markov sampling, and biologically plausible learning rules. Layer stacking, deep network construction, and adaptation to real-valued data are natural extensions. Event-driven learning, local eligibility dynamics, and global rhythmic control support robust scaling to neuromorphic platforms (Merolla et al., 2010, Das et al., 2015, Osogami et al., 2015, Osogami, 2016).

A plausible implication is that quantum LIF mechanisms provide a unifying substrate for implementing classical and dynamic Boltzmann machines, supporting both principled learning and hardware-level efficiency with explicit stochasticity and synchrony. Experimental evaluations confirm convergence, predictive improvement over VAR models for time series, and suitability for pattern generation and recognition tasks.
