
Analog Computing Implementation

Updated 5 February 2026
  • Analog computing uses continuous physical variables to perform operations such as integration and differentiation with high energy efficiency.
  • Innovative platforms such as CMOS VLSI, photonic circuits, and ReRAM arrays enable scalable, reconfigurable hardware with up to 10³× speedup in solving PDEs/ODEs.
  • System architectures incorporate dynamic calibration and error correction to mitigate noise and mismatch, ensuring robust, low-power computations.

Analog computing implementation is the realization of computation directly in the physical domain, using continuous variables such as electrical voltage, current, charge, optical field intensity, or material properties. This paradigm leverages the native physics of devices and systems—whether electrical, photonic, or quantum—to perform mathematical operations such as integration, differentiation, vector-matrix multiplication, or solution of differential equations, often with significantly higher energy efficiency, speed, and functional density compared to digital approaches when low-to-moderate precision suffices. Contemporary research spans technology platforms including CMOS analog VLSI, emerging nonvolatile memories, MEMS, photonic, plasmonic, and metatronic devices, and focuses both on new hardware primitives and system-level architectures for scalable, reconfigurable, and robust computation.

1. Fundamental Physical and Circuit Principles

Analog computing exploits the continuous response of physical systems, directly mapping mathematical operations to device or circuit behaviors. For example, the voltage across a capacitor in an op-amp-based integrator can realize mathematical integration; the output current of a subthreshold MOS transistor implements an exponential function of its gate voltage; and the displacement current in an epsilon-near-zero (ENZ) photonic mesh reflects spatial Laplacian operators (Miscuglio et al., 2020, Köppel et al., 2021).

Key analog circuit elements include:

  • Summers and integrators: Op-amp circuits with resistor/capacitor feedback realize weighted addition and time integration.
  • Multipliers/nonlinearities: Four-quadrant Gilbert cells or translinear loops generate products and exponentials; diode-connected MOS devices can approximate max, min, log-sum-exp, and other nonlinearities (Kumar et al., 2022).
  • Memristive/MRAM/ReRAM arrays: Conductance‐modulated crossbars carry out in-memory matrix–vector multiplication by summing currents along bitlines.
  • Photonic/Plasmonic/Metatronic modules: In photonics, Mach–Zehnder interferometers, microring resonators, and metatronic ENZ meshes implement Fourier-domain filtering, differentiation, or PDE emulation (e.g., solving ∇²u=0 in the optical domain) (Dong et al., 2014, Pors et al., 2016, Miscuglio et al., 2020).

The mapping of abstract computation to hardware is underpinned by the mathematical isomorphism between certain circuit topologies and differential/integral equations. For example, N-body ODEs for molecular dynamics can be cast as networks of op-amp integrators, summers, multipliers, and programmable potentiometers (Köppel et al., 2021).
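As a toy version of this isomorphism, the harmonic oscillator ẍ = −ω²x maps onto two integrators in a loop (one producing ẋ from ẍ, one producing x from ẋ) plus a coefficient block for −ω². A discrete-time sketch, where semi-implicit Euler steps stand in for the continuous integrators (step size and ω are illustrative):

```python
import math

def analog_oscillator(omega=2.0, dt=1e-4, t_end=1.0):
    """Two-integrator loop emulating x'' = -omega^2 * x,
    with x(0) = 1 and x'(0) = 0."""
    x, v = 1.0, 0.0
    t = 0.0
    while t < t_end:
        a = -omega**2 * x   # coefficient/multiplier stage
        v += a * dt         # first integrator:  x'' -> x'
        x += v * dt         # second integrator: x'  -> x
        t += dt
    return x

# The exact solution is cos(omega * t); the discrete loop tracks it closely.
```

On a physical analog computer the same topology runs in continuous time, so "step size" disappears and solution speed is set by the integrators' time constants.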

2. Technology Platforms and Device-Level Implementations

Analog computing is implemented on a variety of material and device platforms, optimized for specific energy, speed, area, and precision regimes:

  • CMOS analog VLSI: MOSFET-based circuits in weak/moderate inversion enable ultra-low-power operations, high density, and robustness to process scaling (from 180 nm down to 7 nm) (Kumar et al., 2022). The critical building blocks include dense integrators, summers, and nonlinearity generators, with temperature and mismatch resilience engineered either by margin-propagation or shape-based analog computing (S-AC) circuits.
  • Emerging nonvolatile memory (NVM): ReRAM, PCM, and MRAM form dense 2D crossbar arrays for in-memory computing, enabling massively parallel vector–matrix multiplication with per-weight areas below 1 μm² and energy per operation down to sub-femtojoule (Liu et al., 2021, Dang et al., 2024, Amin et al., 2022). Novel analog sigmoidal neurons using SOT-MRAM chains plus CMOS inverters achieve analog activations within 0.14 μm² and <20 μW (Amin et al., 2022). ReRAM stochasticity is harnessed to realize analog sigmoid and SoftMax functions, eliminating ADCs/DACs and further reducing energy/area (Dang et al., 2024).
  • Photonic and plasmonic circuits: Integrated silicon photonic MZIs and microring resonators are tailored to perform frequency-domain operations corresponding to mathematical differentiation, integration, and ODE solving (Dong et al., 2014). Reflective plasmonic metasurfaces constructed from gold–dielectric–gold stacks with GSP resonances implement spatial analog operations such as edge detection (Pors et al., 2016). Metatronic circuits, based on ENZ materials (e.g., ITO), employ lumped element analogy to solve PDEs using displacement current balance with ultra-fast (ps-scale) optical solutions (Miscuglio et al., 2020).
  • Neuromorphic and reservoir computers: Continuous-time analog neuromorphic systems, such as BrainScaleS-2, use subthreshold MOS and programmable conductances to emulate spiking neuron and synapse dynamics. Time-domain analog spiking neurons based on dual VCOs and CMOS deliver reservoir computing within a compact, scalable CMOS implementation (Schemmel et al., 2020, Kimura et al., 2024, Smerieri et al., 2012, Duport et al., 2014).
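The stochastic-activation idea above (Dang et al., 2024) can be illustrated in simulation: if a device switches whenever its input exceeds a noisy internal threshold, and that noise happens to be logistic-distributed, then the switching probability as a function of input is exactly a sigmoid, so averaging many binary switching events approximates σ(x) with no explicit nonlinearity circuit. A Monte Carlo sketch (the logistic noise model is an assumption for illustration, not the measured ReRAM statistics):

```python
import math
import random

def stochastic_sigmoid(x, trials=20000, rng=random.Random(0)):
    """Estimate sigmoid(x) from binary 'switching' events: the device
    fires when the input exceeds a logistic-distributed threshold."""
    fires = 0
    for _ in range(trials):
        u = min(max(rng.random(), 1e-12), 1 - 1e-12)
        threshold = math.log(u / (1.0 - u))  # inverse-CDF logistic sample
        if x > threshold:
            fires += 1
    return fires / trials

# Averaging binary events recovers 1 / (1 + exp(-x)) to Monte Carlo accuracy.
```

In hardware the "trials" are repeated reads of the stochastic device (or parallel devices), traded off against latency and energy per activation.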

3. Reconfigurable Architectures and System Programming

Programmability and reconfigurability in analog computers are achieved through electronically controlled switch networks (“autopatch” matrices) and programmable coefficient arrays:

  • Switching fabrics: Full crossbar, Clos/Benes, multiplexer/demultiplexer, and MEMS-based matrices under digital control connect building blocks (summers, integrators, multipliers) for fast reconfiguration (sub-ms) (Ulmann, 29 Oct 2025).
  • Coefficient programming: Multiplying DACs, gate-controlled analog memories (floating-gate arrays (Bayat et al., 2014), SOT-MRAM (Amin et al., 2022)), and digital potentiometers set the weights and algorithmic parameters.
  • Toolchain: High-level model descriptions are compiled into bitstreams that configure the autopatch and coefficients. Interfaces support SPI, I²C, parallel bus, and software APIs in C/Python/MATLAB (Ulmann, 29 Oct 2025).
  • Calibration and error correction: Dynamic calibration and static trimming correct for switch ON-resistance, coefficient drift, and mismatch. Device-level feedback (e.g., closed-loop floating-gate tuning, on-chip translinear reference) enhances linearity, precision, and dynamic range (Bayat et al., 2014).

Trade-offs inherent to switch implementation include ON-resistance (causing gain errors), parasitic capacitance (causing crosstalk), and limited switching speed; these are mitigated by parallel loading, sparse updates, and MEMS integration.
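The ON-resistance gain error has a simple closed form for the common inverting-amplifier coefficient stage: a switch of resistance R_on in series with the input resistor R_in changes the gain from −R_f/R_in to −R_f/(R_in + R_on), and static trimming can absorb the error into the programmed coefficient. A sketch (component values are illustrative):

```python
def inverting_gain(r_f, r_in, r_on=0.0):
    """Inverting op-amp gain with a series switch resistance r_on."""
    return -r_f / (r_in + r_on)

def calibrated_r_f(target_gain, r_in, r_on):
    """Choose the feedback resistance so the realized gain hits the
    target despite the switch ON-resistance (static trimming)."""
    return -target_gain * (r_in + r_on)

r_in, r_on = 10e3, 200.0   # 10 kOhm input resistor, 200 Ohm switch
naive = inverting_gain(100e3, r_in, r_on)   # magnitude falls short of 10
trimmed = inverting_gain(calibrated_r_f(-10.0, r_in, r_on), r_in, r_on)
```

Dynamic calibration generalizes this one-point correction by measuring the realized transfer periodically and updating the programmed coefficients.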

4. Application Domains and Workflow Paradigms

Modern analog computing implementation focuses on compute-intensive tasks with latency, energy, or density constraints that can tolerate reduced precision:

  • PDE/ODE solvers: Analog mesh computers, op-amp-based integrator arrays, and photonic/metatronic meshes are used for scientific computing, including fluid dynamics, circuit emulation, and computational chemistry, offering up to 10³× speedup over digital solvers in bulk parallel workflows (Köppel et al., 2021, Miscuglio et al., 2020).
  • Machine learning and neural inference: Analog in-memory computing with crossbar arrays, analog nonlinearity (sigmoid/tanh), and S-AC circuits support training and inference for MLPs, CNNs, and SNNs, achieving >10 TOPS/W and per-inference energy in the 10 μJ to sub-fJ range (Liu et al., 2021, Kumar et al., 2022, Amin et al., 2022).
  • Temporal and pattern processing: Reservoir computers with analog input/output layers (fully photonic or hybrid optoelectronic) perform ultrafast real-time sequence processing, overcoming prior digital bottlenecks and enabling symbol rates >100 kHz (Smerieri et al., 2012, Duport et al., 2014).
  • Edge AI and low-power sensors: Analog circuits are used in always-on inference engines for battery-powered IoT nodes, leveraging ultra-low-noise subthreshold operation (Liu et al., 2021).
  • Security primitives and hardware-aware co-optimization: Dedicated analog primitives, such as transmission-line hardware PUFs and oscillator-based recognizers, are co-designed with system optimization frameworks that account for noise, mismatch, discrete parameterization, and area/energy constraints, e.g., via differentiable ODE modeling as in Shem (Wang et al., 2024).
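The PDE-solver entry rests on a classical fact: Kirchhoff's current law on a uniform resistor grid forces each interior node voltage to the average of its four neighbors, which is exactly the five-point stencil for ∇²u = 0. The physical grid equilibrates to this fixed point continuously; a Jacobi-relaxation sketch of the same equilibration (grid size and boundary voltages are illustrative):

```python
import numpy as np

def resistor_grid_laplace(boundary, iters=2000):
    """Relax interior node voltages of a resistor grid toward the
    KCL fixed point u[i,j] = mean of its 4 neighbors (discrete Laplace)."""
    u = boundary.copy()
    interior = np.s_[1:-1, 1:-1]
    for _ in range(iters):
        u[interior] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                              u[1:-1, :-2] + u[1:-1, 2:])
    return u

# 5x5 grid: top edge held at 1 V, the other edges at 0 V
b = np.zeros((5, 5))
b[0, :] = 1.0
u = resistor_grid_laplace(b)
```

The digital loop needs thousands of sweeps; the analog grid reaches the same solution in one RC settling time, which is the source of the quoted speedups.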

5. Precision, Robustness, and Scaling Considerations

Analog implementations must address noise, mismatch, temperature variation, and dynamic range limitations:

  • Precision/fidelity: Analog circuits typically support 4–8 bits of effective precision, limited by thermal noise (kT/C), device mismatch (1–5%), and nonlinearity. Calibration and algorithmic compensation (device-aware training, mixed-signal folding) are used to close the gap with digital baselines (Liu et al., 2021, Kumar et al., 2022).
  • Scaling: Process migration (e.g., from 180 nm to 7 nm) is supported by S-AC cell design (identical topologies, adjusted bias currents, and references), maintaining energy/area scaling and input/output characteristics (Kumar et al., 2022). Non-volatile memories and compact MRAM neurons provide high density and efficient analog-to-analog signal chaining without ADC/DAC overhead (Amin et al., 2022, Dang et al., 2024).
  • Temperature and mismatch compensation: Circuit designs exploit monotonic device physics and sum-node feedback to suppress temperature dependencies, and Monte Carlo simulations guide robust parameter selection (Kumar et al., 2022, Wang et al., 2024).
  • Error correction and noise exploitation: Stochasticity in devices (e.g., ReRAM noise) is used constructively for stochastic activation functions, eliminating explicit computation of nonlinearities; error correction for correlated noise and crosstalk is handled at both the circuit and algorithmic level (Dang et al., 2024, Bayat et al., 2014).
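The kT/C limit mentioned above can be made concrete: a sampled capacitor C carries thermal noise of variance kT/C, so the RMS noise voltage is √(kT/C), and for full-scale swing V_fs the effective resolution is roughly log₂(V_fs/V_rms), ignoring quantization, mismatch, and nonlinearity terms. A back-of-the-envelope sketch (the aggressively small 1 fF capacitor and 1 V swing are illustrative choices):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def ktc_noise_rms(c_farads, temp_k=300.0):
    """RMS thermal noise voltage on a sampled capacitor."""
    return math.sqrt(K_B * temp_k / c_farads)

def effective_bits(v_fullscale, v_noise_rms):
    """Rough effective resolution: bits above the noise floor."""
    return math.log2(v_fullscale / v_noise_rms)

v_n = ktc_noise_rms(1e-15)        # 1 fF sampling capacitor
bits = effective_bits(1.0, v_n)   # 1 V full scale -> roughly 9 bits
```

Adding the 1–5% mismatch and nonlinearity contributions cited above pulls practical resolution down into the 4–8 bit range reported in the literature.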

6. Benchmarking, Quantitative Metrics, and Prospects

Quantitative performance is characterized by energy/operation, area density, computational throughput, system-level accuracy, and latency:

  • Energy/operation: State-of-the-art analog CiMs achieve energy per MAC ∼0.1 pJ–1 pJ, analog crossbar arrays with ADC/DAC removal reach sub-femtojoule energy/activation, and SOT-MRAM neuron activations are realized at 3.6 fJ (Kumar et al., 2022, Amin et al., 2022, Dang et al., 2024).
  • Throughput and density: Hundreds of Tera-OPS/W are reported for analog MVM arrays; integration densities >10⁶ nodes/mm² are feasible for metatronic and photonic platforms (Liu et al., 2021, Miscuglio et al., 2020).
  • Accuracy: For neural inference (MNIST/MLP), analog hardware matches digital baselines within 1–2% given system-aware calibration and training, e.g., 96.7% accuracy with stochastic ReRAM activations after 20 trials (Dang et al., 2024, Kumar et al., 2022).
  • Latency: Real-time photonic reservoir computers and metatronic PDE solvers operate at ps-to-μs timescales, outpacing digital processing by ~10⁵× in certain regimes (Smerieri et al., 2012, Miscuglio et al., 2020).
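These efficiency figures are linked by a unit conversion worth keeping in mind: energy per operation E joules implies 1/E operations per joule, and 1 TOPS/W equals 10¹² ops per joule, so 0.1 pJ/MAC corresponds to 10 TOPS/W and 1 fJ/MAC to 1000 TOPS/W. A quick arithmetic check (no device assumptions):

```python
def tops_per_watt(energy_per_op_joules):
    """Throughput efficiency implied by a per-operation energy.
    1 TOPS/W == 1e12 operations per joule."""
    ops_per_joule = 1.0 / energy_per_op_joules
    return ops_per_joule / 1e12

mac_01pj = tops_per_watt(0.1e-12)  # ~10 TOPS/W
mac_1fj = tops_per_watt(1e-15)     # ~1000 TOPS/W
```

This is why sub-femtojoule activations and 0.1 pJ MACs place analog arrays in the tens-to-hundreds of TOPS/W regime quoted above.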

Prospects include increased integration of analog–digital co-processor systems, enhanced reconfigurability, extension to complex spatiotemporal tasks in scientific computing and AI, and systematic hardware–software co-optimization frameworks (e.g., Shem) accommodating both physical constraints and computational objectives (Wang et al., 2024). The principal ongoing challenges are further suppression of nonidealities, seamless scaling to large problem sizes, integration with mature digital toolchains, and development of standardized benchmark methodologies.
