Ultra-Low-Power Analog Neuron

Updated 23 November 2025
  • Ultra-low-power analog neurons are circuits that mimic biological neuron dynamics using subthreshold CMOS and compact capacitors to achieve energy per spike in the femtojoule to picojoule range.
  • They employ innovations like power-gating, dynamic adaptation, and level-crossing ADC configurations to optimize performance and enable high-speed, low-voltage operations.
  • These neurons integrate into neuromorphic systems, edge sensors, and memory-in-compute platforms, balancing trade-offs between accuracy, speed, and power dissipation.

An ultra-low-power analog neuron is a circuit-level abstraction that emulates the essential biophysics of biological neurons while achieving multi-femtojoule (fJ) to sub-picojoule (pJ) energy-per-event. These neurons are implemented in advanced CMOS, FDSOI, or post-CMOS processes and are optimized for neuromorphic computing, edge sensory interfaces, event-driven ADCs, and memory-in-compute architectures. Design strategies exploit subthreshold operation, minimal active device counts, compact capacitive elements, and circuit innovations such as power-gating, dynamic adaptation, and event-driven resets to suppress static and dynamic energy dissipation while retaining flexible neural functionality. Exemplars include leaky integrate-and-fire (LIF) neurons in advanced nodes, analog-mixed-signal circuits, spintronic and ferroelectric hybrids, time-based integrate-and-fire, and level-crossing-based neuron-ADC front-ends.

1. Fundamental Circuit Principles and Energy Efficiency

Ultra-low-power analog neurons systematically minimize both static and dynamic energy through architectural simplicity and device-level exploitation of subthreshold MOSFET operation. The canonical LIF neuron is implemented as a small current mirror driving a capacitance of a few femtofarads, with energy per spike determined by the voltage swing $\Delta V$ and membrane capacitance $C_{\mathrm{mem}}$:

$E_{\mathrm{spike}} \approx C_{\mathrm{mem}}\,\Delta V^2$

The 28 nm TSMC neuron achieves $E_{\mathrm{spike}} = 1.61\,\mathrm{fJ}$ at $C_{\mathrm{mem}} \approx 3.47\,\mathrm{fF}$ and $\Delta V \sim 50\,\mathrm{mV}$. Currents are in the pA–nA range, and static leakage is suppressed by careful sizing, biasing, and occasionally by body-bias or power-gating (Besrour et al., 14 Aug 2024).
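
As a back-of-the-envelope illustration of this scaling (the capacitance and swing values below are representative placeholders, not measurements from the cited work), the membrane-charging component $C_{\mathrm{mem}}\,\Delta V^2$ can be evaluated directly; reported per-spike energies typically also include comparator, reset, and leakage contributions beyond this term.

```python
# Illustrative sketch: membrane-charging energy per spike, E ≈ C_mem * ΔV^2.
# Parameter values are representative, not taken from any specific measurement.

def spike_energy(c_mem_farads: float, delta_v_volts: float) -> float:
    """Return the capacitive charging energy per spike in joules."""
    return c_mem_farads * delta_v_volts ** 2

# Sweep a few membrane capacitances and voltage swings (assumed ranges).
for c_fF in (1.0, 3.47, 10.0):           # femtofarads
    for dv_mV in (50, 100, 200):         # millivolts
        e = spike_energy(c_fF * 1e-15, dv_mV * 1e-3)
        print(f"C_mem = {c_fF:5.2f} fF, dV = {dv_mV:3d} mV -> "
              f"E = {e / 1e-18:8.1f} aJ")
```

The quadratic dependence on swing is why reducing the supply and internal voltage excursions pays off far more than shrinking the capacitor alone.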

Supply voltages as low as 250 mV ($V_{\mathrm{DD}}$) are supported in optimized topologies, enabled by low-threshold devices and careful noise-margin budgeting. Device count is minimized (≤8 MOSFETs per neuron), and compact capacitors exploit advanced layout (MOM/fringe).

Energy per conversion or operation in specialized ADC or mixed-signal neural front-ends drops to sub-100 fJ/conversion at effective numbers of bits (ENOB) ≳6.5, as shown in level-crossing sampled architectures (Chen et al., 2022). Mixed-signal MAC blocks realize energy per operation in the attojoule regime (Chatterjee et al., 2018).

2. Topological and Architectural Variants

2.1 Subthreshold CMOS LIF Neuron

A basic LIF neuron is composed of a current-mirror integrator, subthreshold inverters as threshold comparators (spike generator), and a reset block. Subthreshold operation ensures exponential I–V characteristics:

$I_D = I_0\, e^{\frac{q(V_{GS}-V_{th})}{n k_B T}}$

Reset and refractory behavior are implemented via coupling capacitors and additional switches, with spike rates as high as $f_{\mathrm{max}} = 300\,\mathrm{kHz}$ demonstrated at $250\,\mathrm{mV}$ on 28 nm silicon (Besrour et al., 14 Aug 2024), supporting high-speed SNNs.
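
A minimal behavioral sketch of these LIF dynamics (a forward-Euler model with illustrative parameters, not the transistor-level circuit of Besrour et al.) shows how membrane capacitance, leak, threshold, and reset interact to set the firing rate:

```python
# Behavioral (not transistor-level) leaky integrate-and-fire model.
# Parameter values are illustrative placeholders, not from the cited 28 nm design.
C_MEM   = 3.5e-15    # membrane capacitance [F]
G_LEAK  = 50e-12     # leak conductance [S]
V_TH    = 0.20       # firing threshold [V]
V_RESET = 0.0        # reset potential [V]
DT      = 1e-7       # integration time step [s]

def simulate_lif(i_in, n_steps):
    """Integrate a constant input current and return the spike times."""
    v, spikes = V_RESET, []
    for k in range(n_steps):
        # dV/dt = (I_in - G_leak * V) / C_mem  (forward-Euler step)
        v += DT * (i_in - G_LEAK * v) / C_MEM
        if v >= V_TH:
            spikes.append(k * DT)
            v = V_RESET            # hard reset, as in the circuit's reset block
    return spikes

spikes = simulate_lif(i_in=20e-12, n_steps=200_000)   # 20 pA drive, 20 ms window
rate = len(spikes) / (200_000 * DT)
print(f"{len(spikes)} spikes, mean rate ≈ {rate/1e3:.1f} kHz")
```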

2.2 Adapting for Level-Crossing and Reconfigurable ADC Functionality

The Neuron-ADC leverages a level-crossing sampler and bio-inspired refractory to achieve event-driven, data-compressing conversion. Critical components are:

  • Two dynamic comparators (with 3-stage cascaded PMOS input differential amplifiers and tail-power-gating) determine “up” and “down” crossings.
  • A refractory circuit based on a PMOS common-source stage and NMOS discharge load generates adjustable dead-times, tunable by $V_{\mathrm{REF}}$.
  • Digital logic for reset and folding.

The architecture enables dynamic selection between low-power/sparse operation and high-accuracy/dense conversion by modulating the refractory period:

$\mathrm{FoM} = \dfrac{\text{Power}}{2 \cdot \mathrm{BW} \cdot 2^{\mathrm{ENOB}}}$

A figure-of-merit (FoM) of $97\,\mathrm{fJ/conversion}$ and up to 6.9 ENOB is obtained at 0.6 V (Chen et al., 2022). Power-gating the comparators reduces their static power draw by up to 41.1 % at 10 kHz.
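
The accuracy-versus-event-rate trade-off controlled by the refractory period can be sketched with an idealized behavioral model (assuming single-LSB up/down steps and a fixed dead-time; this is not the transistor-level comparator circuit of Chen et al.):

```python
import numpy as np

def level_crossing_sample(signal, dt, lsb, refractory_s):
    """Idealized level-crossing sampler with a refractory dead-time.

    Emits an event whenever the input moves one LSB above or below the last
    sampled level, then ignores the input for `refractory_s` seconds.
    """
    events, level, dead_until = [], signal[0], 0.0
    for k, x in enumerate(signal):
        t = k * dt
        if t < dead_until:
            continue                         # comparators gated off (refractory)
        if x >= level + lsb or x <= level - lsb:
            direction = +1 if x > level else -1
            level += direction * lsb         # "up"/"down" crossing
            events.append((t, direction))
            dead_until = t + refractory_s
    return events

# 1 kHz sine with an 8-bit-like LSB; sweep the refractory period (illustrative values).
dt = 1e-6
t = np.arange(0, 5e-3, dt)
sig = 0.3 * np.sin(2 * np.pi * 1e3 * t)
for refr in (1e-6, 20e-6, 100e-6):
    n = len(level_crossing_sample(sig, dt, lsb=0.3 / 128, refractory_s=refr))
    print(f"refractory = {refr*1e6:6.1f} µs -> {n} events in 5 ms")
```

Longer dead-times keep the comparators gated off for more of the time, cutting the event count (and hence dynamic power) at the cost of tracking fidelity, which mirrors the sparse/dense operating modes described above.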

2.3 Time-Domain and Phase-Based Neurons

Time-domain analog neurons eschew voltage-mode integration in favor of phase or timing-based summation and thresholding. Weighted-sum integration is transformed to a first-crossing output time, as in TACT (time analog computation) neurons, which have demonstrated $>290\,\mathrm{TOPS/W}$ in macroscale simulation (Wang et al., 2018).

A representative circuit employs NMOS gating, current-controlled oscillators, and digital counters for “reset by subtraction,” guaranteeing high linearity (error $<1\%$), ultra-low power ($0.23\,\mu\mathrm{W}$ per neuron), and whole-network energy per inference below $3.72\,\mathrm{nJ}$ at 1 % error on MNIST (Song et al., 2022).
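
A behavioral sketch of the reset-by-subtraction rule (simplified; the cited design realizes it with oscillators and counters rather than software) illustrates why it preserves linearity: the residual above threshold is carried over instead of being discarded, so the spike count stays proportional to the accumulated weighted sum.

```python
def if_reset_by_subtraction(weighted_inputs, threshold=1.0):
    """Integrate-and-fire with reset-by-subtraction (behavioral sketch).

    Residual charge above threshold is preserved rather than discarded,
    keeping the spike count proportional to the total weighted input.
    """
    v, spikes = 0.0, 0
    for x in weighted_inputs:
        v += x
        while v >= threshold:    # may fire more than once for a large input
            v -= threshold       # subtract the threshold instead of resetting to 0
            spikes += 1
    return spikes

# Total input 3.7 -> 3 spikes emitted, 0.7 carried over as residual state.
print(if_reset_by_subtraction([0.9, 0.8, 1.2, 0.5, 0.3]))
```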

3. Device and Process Innovations

3.1 Advanced SOI and FDSOI Techniques

In FDSOI (fully depleted SOI) technology, body biasing allows dynamic adjustment of threshold voltages and leakage suppression. Self-cascoding in current mirrors stabilizes sub-pA currents, and large APMOM capacitors implement slow, biophysically realistic time constants:

$\tau_{\mathrm{leak}} = \dfrac{C_{\mathrm{mem}}}{I_{\mathrm{leak}}}$

By sweeping $I_\tau$ from $500\,\mathrm{fA}$ down to $1\,\mathrm{fA}$, time constants reach seconds, with energy per spike from $\sim 16\,\mathrm{pJ}$ at $30\,\mathrm{Hz}$ down to $\sim 1\,\mathrm{pJ}$ at $2\,\mathrm{kHz}$ (Rubino et al., 2020).
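
As a rough consistency check (the membrane capacitance and voltage swing below are assumed, representative values, not parameters from Rubino et al.), the time to discharge the membrane by a swing $\Delta V$ at a constant leak current $I_{\mathrm{leak}}$ is $C_{\mathrm{mem}}\,\Delta V / I_{\mathrm{leak}}$, which indeed reaches the second range and beyond at femtoampere currents:

```python
# Back-of-the-envelope discharge times for fA-scale leak currents.
# C_MEM and DELTA_V are assumed, representative values (not from the cited work).
C_MEM   = 1e-12   # membrane capacitance [F] (large APMOM capacitor)
DELTA_V = 0.1     # membrane voltage swing [V]

for i_leak_fA in (500, 100, 10, 1):
    t = C_MEM * DELTA_V / (i_leak_fA * 1e-15)   # t = C * ΔV / I_leak
    print(f"I_leak = {i_leak_fA:4d} fA -> discharge time ≈ {t:8.2f} s")
```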

3.2 Integration with Emerging Memory Devices

Hybrid ferroelectric tunnel junction (FTJ)–CMOS neurons realize the integrate-and-fire operation with capacitive elements of $1$–$10\,\mathrm{fF}$, achieve spike energies of $2$–$10\,\mathrm{fJ}$, and exhibit non-volatile membrane state retention, enabling duty-cycled operation and minimizing standby power (Gibertini et al., 2022). Electrical tuning is possible via coercive-field and inverter-threshold adjustments.

Memristive and spintronic devices (e.g., SOT-MRAM-based neurons or domain-wall magnets) enable further reductions in energy and area, analog-friendly sigmoid/soft-limiting transfer characteristics, and direct crossbar integration. SOT-MRAM neurons achieve $72\,\mathrm{fJ}$ per activation and $0.138\,\mu\mathrm{m}^2$ area, and are fully compatible with analog IMC macros (Amin et al., 2022).

4. Noise, Fidelity, and Operation Under Variability

Thermal and flicker noise, device mismatch, and PVT variations are central to the behavior and energy scaling of ultra-low-power analog neurons. Subthreshold MOSFETs introduce fA-scale leakage and $1/f$ noise; the design must ensure sufficient headroom for spike events to remain distinguishable. Monte-Carlo simulation of neuron firing-rate variability at $70\,\mathrm{Hz}$ yields a coefficient of variation (CV) of 13 % (Rubino et al., 2020).
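
A toy Monte-Carlo sketch (the mismatch magnitudes below are assumptions for illustration, not the device statistics behind the reported 13 % CV) shows how threshold and leak mismatch propagate into firing-rate spread across a neuron population:

```python
import numpy as np

rng = np.random.default_rng(0)

# Nominal behavioral LIF parameters (illustrative, not extracted device values).
C_MEM, G_LEAK, V_TH, I_IN = 3.5e-15, 50e-12, 0.20, 20e-12

def firing_rate(g_leak, v_th):
    """Analytic LIF rate for a constant drive (refractory time ignored)."""
    v_inf = I_IN / g_leak
    if v_inf <= v_th:
        return 0.0
    tau = C_MEM / g_leak
    t_spike = -tau * np.log(1.0 - v_th / v_inf)   # time to first threshold crossing
    return 1.0 / t_spike

# Inject 5 % (1-sigma) mismatch on leak and threshold across 10 000 neurons.
rates = np.array([
    firing_rate(G_LEAK * rng.normal(1, 0.05), V_TH * rng.normal(1, 0.05))
    for _ in range(10_000)
])
cv = rates.std() / rates.mean()
print(f"mean rate ≈ {rates.mean()/1e3:.1f} kHz, CV ≈ {100*cv:.1f} %")
```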

The intrinsic threshold for spiking may be quantified as an internal membrane potential $V_{\mathrm{th}}$ that is invariant to pulse shape; statistical analysis shows that deterministic thresholds bifurcate into distributions under strong noise, impacting spike-timing reliability and first-passage metrics (Brandt et al., 16 Nov 2025).

Neural networks and SNNs are highly error-resilient: injected non-idealities of up to $0.1\,\mu\mathrm{V}^2$ integrated noise or $\pm 3\sigma$ of device mismatch degrade MNIST/CIFAR-10 accuracy by less than 2.1 % in the worst case (Chatterjee et al., 2018).

5. System Integration, Scalability, and Trade-Offs

Ultra-low-power analog neurons function as core primitives in:

  • Neuromorphic SNN processors and accelerators.
  • Event-driven and level-crossing ADC front-ends.
  • Edge sensory and biomedical interfaces.
  • Memory-in-compute and analog IMC platforms.

Trade-offs are central:

  • Sampling vs. power: a shortened refractory period improves accuracy and bandwidth at the cost of dynamic power; decreasing $V_{\mathrm{REF}}$ yields fewer, lower-power events.
  • Membrane capacitance vs. fan-in: a larger $C_{\mathrm{m}}$ supports greater fan-in but at the expense of slower operation and increased area.
  • Supply voltage: lowering $V_{\mathrm{DD}}$ quadratically reduces $E_{\mathrm{spike}}$, but at the risk of noise-margin collapse; threshold-tuned devices, body bias, and tailored device flavors (FDSOI) are needed.

Tabulated summary of recent ultra-low-power analog neuron metrics:

| Work | Node | $E_{\mathrm{spike}}$ | Area ($\mu\mathrm{m}^2$) | Max rate / throughput |
|---|---|---|---|---|
| (Besrour et al., 14 Aug 2024) | 28 nm | 1.61 fJ | 34 | 300 kHz |
| (Chen et al., 2022) (ADC) | 40 nm | 97 fJ/conversion | N/A | >10 kHz |
| (Wang et al., 2018) (TACT) | 250 nm | ~6.9 fJ/op | PoC | >290 TOPS/W |
| (Amin et al., 2022) (MRAM) | 14 nm | 72 fJ | 0.138 | 250 MHz |
| (Rubino et al., 2020) (FDSOI) | 22 nm | 1–16 pJ | ~1000 | 2 kHz |

6. Application Domains and Outlook

Ultra-low-power analog neurons are foundational for event-based signal encoding, biomedical interfaces, edge inference, analog-to-digital front-ends, and co-located memory-compute platforms. Demonstrated integration in SNNs with surrogate-gradient training yields competitive accuracy (e.g., 82.5% on MNIST after 4-bit quantization in (Besrour et al., 14 Aug 2024)), validating their applicability.

Future directions include integration with eNVM synapses, advanced 3D die stacking, photonic or multi-physics hybrid neuron implementations, and sub-100 mV supply operation. Further advances in device matching, noise mitigation, crosstalk suppression, and dynamic adaptation will enable even lower energy budgets and increased scalability for emerging neuromorphic processors.
