Ultra-Low-Power Analog Neuron
- Ultra-low-power analog neurons are circuits that mimic biological neuron dynamics using subthreshold CMOS and compact capacitors to achieve energy per spike in the femtojoule to picojoule range.
- They employ innovations like power-gating, dynamic adaptation, and level-crossing ADC configurations to optimize performance and enable high-speed, low-voltage operations.
- These neurons integrate into neuromorphic systems, edge sensors, and memory-in-compute platforms, balancing trade-offs between accuracy, speed, and power dissipation.
An ultra-low-power analog neuron is a circuit-level abstraction that emulates the essential biophysics of biological neurons while achieving multi-femtojoule (fJ) to sub-picojoule (pJ) energy-per-event. These neurons are implemented in advanced CMOS, FDSOI, or post-CMOS processes and are optimized for neuromorphic computing, edge sensory interfaces, event-driven ADCs, and memory-in-compute architectures. Design strategies exploit subthreshold operation, minimal active device counts, compact capacitive elements, and circuit innovations such as power-gating, dynamic adaptation, and event-driven resets to suppress static and dynamic energy dissipation while retaining flexible neural functionality. Exemplars include leaky integrate-and-fire (LIF) neurons in advanced nodes, analog-mixed-signal circuits, spintronic and ferroelectric hybrids, time-based integrate-and-fire, and level-crossing-based neuron-ADC front-ends.
1. Fundamental Circuit Principles and Energy Efficiency
Ultra-low-power analog neurons systematically minimize both static and dynamic energy through architectural simplicity and device-level exploitation of subthreshold MOSFET operation. The canonical LIF neuron is implemented as a small current mirror driving a capacitance of a few femtofarads, with the energy per spike set by the voltage swing and membrane capacitance:

$$E_{\text{spike}} \approx C_{\text{mem}}\,\Delta V_{\text{mem}}\,V_{DD}$$
The 28 nm TSMC neuron achieves spike energies of about 1.61 fJ (see the summary table in Section 5). Currents are in the pA–nA range, and static leakage is suppressed by careful sizing, biasing, and occasionally by body-bias or power-gating (Besrour et al., 14 Aug 2024).
Supply voltages as low as 250 mV are supported in optimized topologies, enabled by low-threshold devices and careful noise-margin budgeting. Device count is minimized (≤8 MOSFETs per neuron), and compact capacitors exploit advanced layout (MOM/fringe).
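As a first-order sanity check, the relation above can be evaluated directly. The sketch below uses illustrative values (a few-fF membrane capacitor, ~100 mV swing, 250 mV supply), not figures from any specific cited design, to show how such parameters land in the sub-fJ-to-fJ regime.

```python
# Minimal sketch: first-order energy-per-spike estimate for a subthreshold LIF
# neuron, assuming the E ~ C_mem * dV_mem * V_DD relation given above.
# All numbers below are illustrative, not taken from a specific fabricated design.

def energy_per_spike(c_mem_farads: float, dv_mem_volts: float, vdd_volts: float) -> float:
    """Charge drawn from the supply to swing the membrane node once."""
    return c_mem_farads * dv_mem_volts * vdd_volts

# Example: few-fF membrane capacitor, ~100 mV swing, 250 mV supply
e_spike = energy_per_spike(c_mem_farads=5e-15, dv_mem_volts=0.1, vdd_volts=0.25)
print(f"Estimated energy per spike: {e_spike * 1e15:.2f} fJ")  # ~0.13 fJ
```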
Energy per conversion or operation in specialized ADC or mixed-signal neural front-ends drops to sub-100 fJ/conversion at effective numbers of bits (ENOB) ≳6.5, as shown in level-crossing sampled architectures (Chen et al., 2022). Mixed-signal MAC blocks realize energy per operation in the attojoule regime (Chatterjee et al., 2018).
2. Topological and Architectural Variants
2.1 Subthreshold CMOS LIF Neuron
A basic LIF neuron is composed of a current-mirror integrator, subthreshold inverters acting as threshold comparators (spike generator), and a reset block. Subthreshold operation ensures exponential I–V characteristics:

$$I_D \approx I_0 \exp\!\left(\frac{V_{GS}}{n\,U_T}\right) \quad (V_{DS} \gtrsim 4\,U_T),$$

where $I_0$ is the process-dependent leakage prefactor, $n$ the subthreshold slope factor, and $U_T$ the thermal voltage.
Reset and refractory behavior are implemented via coupling capacitors and additional switches, with spike rates up to 300 kHz demonstrated on 28 nm silicon (Besrour et al., 14 Aug 2024), supporting high-speed SNNs.
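A minimal behavioral model helps connect the circuit to its dynamics. The Python sketch below simulates a leaky integrate-and-fire neuron with an exponential subthreshold leak; the capacitance, currents, and threshold are illustrative assumptions, not published device parameters.

```python
# Behavioral sketch of a leaky integrate-and-fire neuron with an exponential
# subthreshold leak, matching the I-V relation above. Parameter values are
# illustrative assumptions.
from math import exp

C_MEM   = 5e-15        # membrane capacitance [F]
I0      = 1e-12        # subthreshold leak prefactor [A]
N_UT    = 1.5 * 0.026  # slope factor times thermal voltage [V]
V_TH    = 0.12         # spiking threshold [V]
V_RESET = 0.0          # reset potential [V]
DT      = 1e-7         # simulation time step [s]

def simulate_lif(i_in: float, t_stop: float) -> list:
    """Integrate a constant input current and return the list of spike times."""
    v, t, spikes = 0.0, 0.0, []
    while t < t_stop:
        i_leak = I0 * exp(v / N_UT)          # exponential subthreshold leak
        v += DT * (i_in - i_leak) / C_MEM    # membrane integration on C_mem
        if v >= V_TH:                        # threshold comparator fires
            spikes.append(t)
            v = V_RESET                      # event-driven reset
        t += DT
    return spikes

spikes = simulate_lif(i_in=50e-12, t_stop=1e-3)
print(f"{len(spikes)} spikes in 1 ms -> mean rate ~ {len(spikes) / 1e-3 / 1e3:.0f} kHz")
```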
2.2 Adapting for Level-Crossing and Reconfigurable ADC Functionality
The Neuron-ADC leverages a level-crossing sampler and a bio-inspired refractory period to achieve event-driven, data-compressing conversion. Critical components are:
- Two dynamic comparators (with 3-stage cascaded PMOS input differential amplifiers and tail-power-gating) determine “up” and “down” crossings.
- A refractory circuit based on a PMOS common-source stage and NMOS discharge load generates adjustable dead-times set by a tunable bias.
- Digital logic for reset and folding.
The architecture enables dynamic selection between low-power/sparse operation and high-accuracy/dense conversion by modulating the refractory period.
This yields a competitive figure of merit (FoM): roughly 97 fJ per conversion at up to 6.9 ENOB from a 0.6 V supply (Chen et al., 2022). Power-gating the comparator tails further suppresses their static drain current.
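The refractory-modulated sparsity/accuracy trade-off can be illustrated behaviorally. The sketch below implements a generic level-crossing sampler with a dead-time; the thresholds, refractory periods, and test signal are illustrative assumptions rather than values from the cited design.

```python
# Behavioral sketch of a level-crossing sampler with a bio-inspired refractory
# dead-time, illustrating the sparsity/accuracy trade-off described above.
import numpy as np

def level_crossing_sample(signal, dt, lsb, t_refractory):
    """Emit (time, direction) events when the input leaves a +/- LSB window,
    ignoring crossings that fall inside the refractory dead-time."""
    events, ref = [], 0.0
    last = signal[0]
    for i, x in enumerate(signal[1:], start=1):
        t = i * dt
        if t < ref:
            continue                     # comparators inhibited / power-gated
        if x - last >= lsb:
            events.append((t, +1))
            last += lsb
            ref = t + t_refractory
        elif last - x >= lsb:
            events.append((t, -1))
            last -= lsb
            ref = t + t_refractory
    return events

t = np.arange(0, 1e-2, 1e-6)
sig = 0.3 * np.sin(2 * np.pi * 200 * t) + 0.3      # 200 Hz test tone
sparse = level_crossing_sample(sig, 1e-6, lsb=0.02, t_refractory=500e-6)
dense  = level_crossing_sample(sig, 1e-6, lsb=0.02, t_refractory=20e-6)
print(f"long refractory: {len(sparse)} events, short refractory: {len(dense)} events")
```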
2.3 Time-Domain and Phase-Based Neurons
Time-domain analog neurons eschew voltage-mode integration in favor of phase- or timing-based summation and thresholding. Weighted-sum integration is transformed into a first-crossing output time, as in TACT (time analog computation) neurons, which have demonstrated energies near 6.9 fJ per operation in macroscale simulation (Wang et al., 2018).
A representative circuit employs NMOS gating, current-controlled oscillators, and digital counters for "reset by subtraction," providing high linearity, ultra-low per-neuron power, and low whole-network energy per inference at competitive MNIST error rates (Song et al., 2022).
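A behavioral sketch of the "reset by subtraction" scheme clarifies why the output event count tracks the weighted sum linearly; the oscillator gain, threshold, and time step below are illustrative assumptions.

```python
# Minimal sketch of a time-domain neuron using "reset by subtraction": the
# weighted sum sets an oscillator frequency, a counter accumulates edges, and
# each output event subtracts the threshold instead of clearing the state.

def time_domain_neuron(weighted_sum: float, threshold: int, n_steps: int,
                       hz_per_unit: float = 1e6, dt: float = 1e-6) -> int:
    """Return the number of output events emitted over n_steps time steps."""
    phase, count, events = 0.0, 0, 0
    freq = max(weighted_sum, 0.0) * hz_per_unit      # current-controlled oscillator
    for _ in range(n_steps):
        phase += freq * dt
        while phase >= 1.0:                          # one oscillator edge
            phase -= 1.0
            count += 1
        if count >= threshold:
            events += 1
            count -= threshold                       # reset by subtraction
    return events

# Larger weighted sums produce proportionally more events (high linearity).
for s in (0.5, 1.0, 2.0, 4.0):
    print(s, time_domain_neuron(s, threshold=16, n_steps=1000))
```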
3. Device and Process Innovations
3.1 Advanced SOI and FDSOI Techniques
In FDSOI (fully depleted SOI) technology, body biasing allows dynamic adjustment of threshold voltages and suppression of leakage. Self-cascoding in current mirrors stabilizes sub-pA currents, and large APMOM capacitors implement slow, biophysically realistic time constants. By sweeping the body bias and leak current downward, time constants extend into the seconds range, with energy per spike spanning roughly 1–16 pJ across the supply range (Rubino et al., 2020).
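The link between sub-pA leak currents and second-scale time constants follows from the first-order relation $\tau \approx C_{\text{mem}}\,U_T / I_{\text{leak}}$; the sketch below evaluates it with illustrative capacitor and current values, not the published 22 nm figures.

```python
# Sketch of how sub-pA leak currents translate into second-scale membrane time
# constants, using the first-order approximation tau ~ C_mem * U_T / I_leak.
# Capacitance and current values are illustrative assumptions.

U_T = 0.026  # thermal voltage at room temperature [V]

def membrane_tau(c_mem: float, i_leak: float) -> float:
    """First-order time constant of a subthreshold leaky integrator."""
    return c_mem * U_T / i_leak

for i_leak in (1e-12, 100e-15, 10e-15):              # 1 pA down to 10 fA
    tau = membrane_tau(c_mem=2e-12, i_leak=i_leak)   # ~2 pF APMOM capacitor
    print(f"I_leak = {i_leak * 1e15:7.1f} fA  ->  tau = {tau * 1e3:8.1f} ms")
```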
3.2 Integration with Emerging Memory Devices
Hybrid ferroelectric tunnel junction (FTJ)–CMOS neurons realize the integrate-and-fire operation with compact capacitive elements, exhibit low spike energies, and retain a non-volatile membrane state, enabling duty-cycled operation and minimal standby power (Gibertini et al., 2022). Electrical tuning is possible via coercive-field and inverter-threshold adjustments.
Memristive and spintronic approaches (e.g., SOT-MRAM-based neurons or domain-wall magnets) enable further reductions in energy and area, analog-friendly sigmoid/soft-limiting transfer functions, and direct crossbar integration. SOT-MRAM neurons achieve roughly 72 fJ per activation in a footprint of about 0.138 µm² and are fully compatible with analog IMC macros (Amin et al., 2022).
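The soft-limiting transfer that such devices provide to an in-memory-computing column can be modeled behaviorally. The logistic function and current scales in the sketch below are illustrative assumptions, not a device model from the cited work.

```python
# Sketch of a sigmoid-like, soft-limiting neuron transfer applied to an analog
# crossbar column current, as provided natively by spintronic devices.
import numpy as np

def soft_limiting_neuron(column_current: np.ndarray, i_half: float, steepness: float) -> np.ndarray:
    """Map an analog column current to a bounded activation in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-steepness * (column_current - i_half)))

currents = np.linspace(0.0, 2e-6, 5)                  # 0 to 2 uA column currents
print(soft_limiting_neuron(currents, i_half=1e-6, steepness=5e6))
```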
4. Noise, Fidelity, and Operation Under Variability
Thermal and flicker noise, device mismatch, and PVT variations are central to the behavior and energy scaling of ultra-low-power analog neurons. Subthreshold MOSFETs introduce sub-nanoampere leakage currents and $1/f$ noise; the design must ensure sufficient headroom so that spike events remain distinguishable. Monte-Carlo simulation quantifies the resulting firing-rate variability through its coefficient of variation (CV) (Rubino et al., 2020).
The intrinsic threshold for spiking may be quantified as an internal membrane potential that is invariant to pulse shape; statistical analysis shows that deterministic thresholds bifurcate into distributions under strong noise, impacting spike-timing reliability and first-passage metrics (Brandt et al., 16 Nov 2025).
Neural networks and SNNs are highly error-resilient: injected non-idealities such as integrated noise and device mismatch degrade MNIST/CIFAR-10 accuracy only marginally even in the worst case (Chatterjee et al., 2018).
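A simple Monte-Carlo experiment shows how threshold mismatch and input-current variation translate into firing-rate spread. The sigma values below are illustrative assumptions, and an ideal-integrator first-passage time stands in for the full circuit.

```python
# Monte-Carlo sketch of how threshold mismatch and input-current variation
# spread the firing rate, reported as a coefficient of variation (CV).
import numpy as np

rng = np.random.default_rng(0)

C_MEM, I_IN, V_TH_NOM = 5e-15, 50e-12, 0.12            # nominal design point
N_MC, SIGMA_VTH, SIGMA_I = 1000, 0.005, 2e-12          # 5 mV threshold sigma, 2 pA spread

v_th = V_TH_NOM + SIGMA_VTH * rng.standard_normal(N_MC)   # per-instance mismatch
i_in = I_IN + SIGMA_I * rng.standard_normal(N_MC)          # input/leak variation
t_spike = C_MEM * v_th / i_in                              # ideal-integrator first passage
rate = 1.0 / t_spike

cv = rate.std() / rate.mean()
print(f"mean rate = {rate.mean() / 1e3:.1f} kHz, CV = {cv:.3f}")
```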
5. System Integration, Scalability, and Trade-Offs
Ultra-low-power analog neurons function as core primitives in:
- Neuromorphic edge SoCs and event-driven sensory front-ends (Besrour et al., 14 Aug 2024)
- Bio-signal compressors (e.g., EEG, ECG) (Chen et al., 2022)
- Memory-in-compute platforms leveraging eNVMs and memristive crossbars (Palhares et al., 2023, Amin et al., 2022)
- Associative memories and non-Boolean computation using spin and ferroelectric physics (Sharad et al., 2013, Gibertini et al., 2022)
Trade-offs are central:
- Sampling vs. Power: A shortened refractory period improves accuracy and bandwidth at the cost of dynamic power; lengthening it yields fewer, lower-power events.
- Membrane Capacitance vs. Fan-In: A larger $C_{\text{mem}}$ supports greater fan-in but at the expense of slower operation and increased area.
- Supply Voltage: Lowering $V_{DD}$ quadratically reduces dynamic energy ($E \propto C\,V_{DD}^2$), but at the risk of noise-margin collapse; threshold-tuned devices, body bias, and tailored device flavors (FDSOI) are needed, as illustrated in the sketch below.
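The supply-voltage trade-off can be made concrete with a short numerical sketch: dynamic energy falls as $C\,V_{DD}^2$ while a fixed noise allowance consumes a growing fraction of the shrinking swing. The capacitance, swing fraction, and 4-sigma noise budget below are illustrative assumptions.

```python
# Sketch of the C*V_DD^2 supply-voltage trade-off from the list above: dynamic
# energy falls quadratically with V_DD while the noise margin shrinks linearly.

def dynamic_energy(c_mem: float, vdd: float) -> float:
    """Dynamic switching energy of the membrane node, E = C * V_DD^2."""
    return c_mem * vdd ** 2

def noise_margin(vdd: float, swing_fraction: float = 0.4, sigma_noise: float = 0.01) -> float:
    """Usable margin: a fraction of V_DD minus a fixed 4-sigma noise allowance."""
    return swing_fraction * vdd - 4.0 * sigma_noise

for vdd in (0.8, 0.6, 0.4, 0.25):
    e = dynamic_energy(5e-15, vdd)
    print(f"V_DD = {vdd:.2f} V  E = {e * 1e15:.2f} fJ  margin = {noise_margin(vdd) * 1e3:.0f} mV")
```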
Tabulated summary of recent ultra-low-power analog neuron metrics:
| Work | Node | Energy per event | Area (µm²) | Max rate |
|---|---|---|---|---|
| (Besrour et al., 14 Aug 2024) | 28 nm | 1.61 fJ | 34 | 300 kHz |
| (Chen et al., 2022) (ADC) | 40 nm | 97 fJ/conversion | N/A | kHz |
| (Wang et al., 2018) (TACT) | 250 nm | 6.9 fJ/op | PoC | TOPS/W |
| (Amin et al., 2022) (MRAM) | 14 nm | 72 fJ | 0.138 | 250 MHz |
| (Rubino et al., 2020) (FDSOI) | 22 nm | 1–16 pJ | 1000 | 2 kHz |
6. Application Domains and Outlook
Ultra-low-power analog neurons are foundational for event-based signal encoding, biomedical interfaces, edge inference, analog-to-digital front-ends, and co-located memory-compute platforms. Demonstrated integration in SNNs with surrogate gradient training yields competitive accuracy (e.g., 82.5% on MNIST post 4-bit quantization in (Besrour et al., 14 Aug 2024)), validating their applicability.
Future directions include integration with eNVM synapses, advanced 3D die stacking, photonic or multi-physics hybrid neuron implementations, and sub-100 mV supply operation. Further advances in device matching, noise mitigation, crosstalk suppression, and dynamic adaptation will enable even lower energy budgets and increased scalability for emerging neuromorphic processors.